paper_name | text | summary | paper_id |
|---|---|---|---|
Benign Overfitting in Adversarially Robust Linear Classification | 1 INTRODUCTION. Modern machine learning methods such as deep learning have made many breakthroughs in a variety of application domains, including image classification (He et al., 2016; Krizhevsky et al., 2012) and speech recognition (Hinton et al., 2012). These models are typically over-parameterized: the number of model parameters far exceeds the size of the training set. One mystery is that these over-parameterized models can memorize noisy training data and yet still achieve quite good generalization performance on test data (Zhang et al., 2017). Many efforts have been made to explain this striking phenomenon, which goes against what the classical notion of overfitting would suggest. A line of research (Soudry et al., 2018; Ji & Telgarsky, 2019b; Nacson et al., 2019; Gunasekar et al., 2018b;a) shows that there exists a so-called implicit bias (Neyshabur, 2017): training algorithms tend to converge to certain kinds of solutions even with no explicit regularization. Specifically, Soudry et al. (2018); Ji & Telgarsky (2019b); Nacson et al. (2019) demonstrate that linear classifiers trained by gradient descent on the logistic or exponential loss with no regularization asymptotically converge to the maximum-L2-margin classifier. Recent works (Bartlett et al., 2020; Chatterji & Long, 2020; Cao et al., 2021; Wang & Thrampoulidis, 2021; Tsigler & Bartlett, 2020) further show that over-parameterized and implicitly regularized interpolators can indeed achieve small test error, and formalize this phenomenon as "benign overfitting". More concretely, suppose the classification model f is parameterized by θ ∈ Θ and the loss is denoted ℓ(·). The population risk is defined as P_{(x,y)∼D}[f_θ(x) ≠ y], where the data pair (x, y) is generated from a certain data-generation model. 
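The population risk defined above can be approximated by Monte Carlo sampling. The sketch below (not from the paper) estimates P[f_θ(x) ≠ y] for a linear classifier under a hypothetical symmetric Gaussian mixture; the mean `mu`, dimension, and sample size are illustrative assumptions, not the paper's exact data model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50                        # ambient dimension (illustrative)
mu = np.zeros(d)
mu[0] = 2.0                   # cluster mean (illustrative)

def sample(n):
    """Draw (x, y): y uniform on {-1, +1}, x = y * mu + standard Gaussian noise."""
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * mu + rng.standard_normal((n, d))
    return x, y

def population_risk(w, n_mc=100_000):
    """Monte Carlo estimate of the population risk P[sign(<w, x>) != y]."""
    x, y = sample(n_mc)
    return float(np.mean(np.sign(x @ w) != y))

risk = population_risk(mu)    # risk of the Bayes-optimal direction w = mu
# For this mixture the exact value is Phi(-||mu||_2) = Phi(-2), about 0.023.
```

A larger `n_mc` tightens the estimate; with 100,000 samples the standard error here is well below 0.001.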
Chatterji & Long (2020) show that with sufficient over-parameterization, the maximum-L2-margin classifier trained by gradient descent can achieve nearly optimal population risk on noisy data generated from a sub-Gaussian mixture model. This suggests that overfitting can be "benign" in the over-parameterized setting. Besides these studies on the benign overfitting phenomenon, another well-known feature of modern machine learning methods is that they are vulnerable to adversarial examples. Recent studies (Szegedy et al., 2013; Goodfellow et al., 2015) show that modern machine learning systems are brittle: a slight input perturbation that is imperceptible to human eyes can mislead a well-trained classifier into a wrong classification result. These malicious inputs are also known as adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015). Adversarial examples raise severe trustworthiness issues and security concerns about current machine learning systems, especially in security-critical applications. Various methods (Kurakin et al., 2016; Madry et al., 2018; Zhang et al., 2019; Wang et al., 2019; 2020) have been proposed to defend against the threats posed by adversarial examples. One of the notable approaches is adversarial training (Madry et al., 2018). Specifically, adversarial training solves the following min-max optimization problem: min_{θ∈Θ} (1/n) Σ_{i=1}^n max_{x'_i ∈ B_p^ε(x_i)} ℓ(f_θ(x'_i), y_i), where {(x_i, y_i)}_{i=1}^n is the training set and B_p^ε(x_i) = {x : ‖x − x_i‖_p ≤ ε} denotes the ε-ball around x_i in ℓ_p norm (p ≥ 1). Many empirical and theoretical studies have been conducted to analyze or further improve adversarial training robustness (Zhang et al., 2019; Rice et al., 2020; Wang et al., 2020; Carmon et al., 2019; Wang et al., 2019; Raghunathan et al., 2020). A recent work (Sanyal et al.
, 2021) also pointed out that normally trained interpolators in the presence of label noise are unlikely to be adversarially robust, while adversarially robust classifiers cannot overfit noisy labels under certain conditions. However, it is still not clear whether the benign overfitting phenomenon occurs for extremely over-parameterized models in the presence of adversarial examples. In this paper, we show that benign overfitting indeed occurs in adversarial training. In order to properly characterize the benign overfitting phenomenon in adversarial training, we also define the population adversarial risk, the counterpart of the population risk in the standard training scenario: P_{(x,y)∼D}[∃ x' ∈ B_p^ε(x) s.t. f_θ(x') ≠ y]. The adversarial risk measures the misclassification rate of the target classifier in the presence of ℓ_p-norm adversarial perturbations. It is easy to observe that the adversarial risk is always at least as large as the standard risk, since it requires the classifier to correctly classify the data examples within the entire local ℓ_p-norm ball. We summarize the contributions of this paper in the following: • We show that the benign overfitting phenomenon can occur in adversarially robust linear classifiers with sufficient over-parameterization. Specifically, under moderate ℓ_p-norm perturbations, adversarially trained linear classifiers can achieve near-optimal standard and adversarial risks, in spite of overfitting the noisy training data. • When the perturbation strength ε is set to 0, our adversarial risk bound reduces to the standard one. The resulting standard risk bound extends Chatterji & Long (2020)'s risk bound to further characterize the behavior of the linear classifier trained by t-step gradient descent. • We show that the adversarial risk bound can differ depending on the value of p (the perturbation norm). 
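For linear classifiers, both the inner maximization of the adversarial training objective and the adversarial risk just defined admit closed forms via Hölder's inequality: the worst-case ℓ_p perturbation of radius ε shifts the margin y⟨w, x⟩ down by exactly ε‖w‖_q, where q is the dual exponent (1/p + 1/q = 1; q = 1 for p = ∞). The sketch below uses this fact to adversarially train a linear model with the logistic loss by plain gradient descent (p = ∞ case) and then compares its standard and adversarial risks; the Gaussian-mixture data model and all constants are illustrative assumptions, and this is not the paper's exact algorithm.

```python
import numpy as np

def robust_margins(w, X, y, eps):
    """Worst-case margins under l_inf perturbations of radius eps:
    min over ||delta||_inf <= eps of y * <w, x + delta> = y * <w, x> - eps * ||w||_1."""
    return y * (X @ w) - eps * np.sum(np.abs(w))

def adv_train(X, y, eps, lr=0.1, steps=500):
    """Gradient descent on the robust logistic loss: mean of log(1 + exp(-margin))."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        s = -1.0 / (1.0 + np.exp(robust_margins(w, X, y, eps)))  # d(loss)/d(margin)
        # d(margin_i)/dw = y_i x_i - eps * sign(w); sign(w) is a subgradient of ||w||_1
        grad = (s[:, None] * (y[:, None] * X - eps * np.sign(w))).mean(axis=0)
        w -= lr * grad
    return w

def risks(w, X, y, eps):
    """Standard 0-1 risk and l_inf adversarial risk of the classifier sign(<w, x>):
    an example is adversarially misclassifiable iff y * <w, x> <= eps * ||w||_1."""
    margins = y * (X @ w)
    return (float(np.mean(margins <= 0.0)),
            float(np.mean(robust_margins(w, X, y, eps) <= 0.0)))

# Hypothetical data model: y uniform on {-1, +1}, x = y * mu + Gaussian noise.
rng = np.random.default_rng(0)
d, n = 20, 500
mu = np.zeros(d)
mu[0] = 2.0
y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + rng.standard_normal((n, d))

w = adv_train(X, y, eps=0.1)
std_risk, adv_risk = risks(w, X, y, eps=0.1)
# adv_risk >= std_risk always: the perturbation ball contains the clean point.
```

The final comment restates the observation from the text: since x itself lies in B_p^ε(x), the adversarial risk can never be smaller than the standard risk.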
A higher value of p (typically the p ≥ 2 case) actually leads to a larger gap between the adversarial risk and the standard risk for the same ε. Notation. We use lower-case letters to denote scalars and lower-case boldface letters to denote vectors. For a vector x ∈ R^d, we denote the ℓ_p norm (p ≥ 1) of x by ‖x‖_p = (Σ_{i=1}^d |x_i|^p)^{1/p} and the ℓ_∞ norm of x by ‖x‖_∞ = max_{i=1,…,d} |x_i|. We denote by x^{∘p} the element-wise p-th power of x. For p ≥ 1, we denote by B_r^p(x) the ℓ_p-norm ball of radius r centered at x. Given two sequences {a_n} and {b_n}, we write a_n = O(b_n) if there exists a constant 0 < C < +∞ such that a_n ≤ C b_n. We write a_n = Ω(b_n) if b_n = O(a_n), and a_n = Θ(b_n) if a_n = O(b_n) and a_n = Ω(b_n). 2 RELATED WORK. There exists a large body of work on adversarial training, implicit bias, and benign overfitting. In this section, we review the works most relevant to ours. Adversarial Training. Adversarial training (Madry et al., 2018) and its variants (Zhang et al., 2019; Wang et al., 2019; 2020) are currently the most effective approaches for empirically defending against adversarial examples (Szegedy et al., 2013; Goodfellow et al., 2015), and many attempts have been made to understand their empirical success. Charles et al. (2019); Li et al. (2020) showed that the adversarially trained linear classifier directionally converges to the maximum-margin classifier. Gao et al. (2019); Zhang et al. (2020b) showed that adversarial training with neural networks can achieve low robust training loss. Yet these conclusions cannot explain the test (population) performance. Another line of research focuses on the generalization performance of adversarial training and the number of training samples. Schmidt et al. (2018) showed that adversarial models require more data than standard models to achieve a certain test accuracy. Chen et al.
(2020) showed that more data may actually increase the gap between the generalization error of adversarially trained models and standard models. Yin et al. (2019); Cullina et al. (2018) studied adversarial Rademacher complexity and VC dimensions. Some other works focus on the trade-off between robustness and natural accuracy (Zhang et al., 2019; Tsipras et al., 2019; Wu et al., 2020; Raghunathan et al., 2020; Yang et al., 2020; Dobriban et al., 2020; Javanmard & Soltanolkotabi, 2020), adversarial model complexity lower bounds (Allen-Zhu & Li, 2020), as well as provable robustness upper bounds (Fawzi et al., 2018; Zhang et al., 2020a). Recently, some works have also studied the learning of robust halfspaces and linear models. Montasser et al. (2020) studied the conditions on the adversarial perturbation sets under which halfspaces are robustly learnable in the presence of random label noise. Diakonikolas et al. (2020) studied the computational complexity of learning adversarially robust halfspaces under ℓ_p-norm perturbations. Zou et al. (2021) showed that adversarially trained halfspaces are provably robust with low robust classification error in the presence of noise. Dan et al. (2020) proposed an adversarial signal-to-noise ratio and studied excess risk lower/upper bounds for learning Gaussian mixture models. Taheri et al. (2020); Javanmard & Soltanolkotabi (2020) studied adversarial learning of linear models on Gaussian mixture data where the data dimension and the number of training data points have a fixed ratio. Implicit Bias. Several recent works have studied the implicit bias of various training algorithms in over-parameterized models. Soudry et al. (2018) studied the implicit bias of gradient descent trained on linearly separable data, while Ji & Telgarsky (2019b) studied the non-separable case. Gunasekar et al.
(2018a) studied the implicit bias of various optimization methods in linear regression and classification problems. Ji & Telgarsky (2019a) studied the implicit bias of deep linear networks, and Arora et al. (2019); Gunasekar et al. (2018b) studied the implicit bias in matrix factorization. Lyu & Li (2020) studied the implicit regularization of homogeneous neural networks with the exponential and logistic losses. Benign Overfitting and Double Descent. A series of recent works has studied the "benign overfitting" phenomenon (Bartlett et al., 2020): when training over-parameterized models, classifiers can still achieve good population risk even when overfitting the noisy training data. Bartlett et al. (2020); Tsigler & Bartlett (2020) studied risk bounds for over-parameterized linear (ridge) regression and showed that, under certain settings, the interpolating linear model with minimum parameter norm can have asymptotically optimal risk. Chatterji & Long (2020); Cao et al. (2021); Wang & Thrampoulidis (2021) studied risk bounds for linear logistic regression and linear support vector machines. Belkin et al. (2018; 2019a;b); Hastie et al. (2019); Wu & Xu (2020) further quantified the dependency between the population risk and the degree of over-parameterization and showed that the curve has a double-descent shape. | This paper studies the clean test and adversarial test error obtained using _Gradient Descent Adversarial Training_ (GDAT). The data distribution is a Gaussian mixture model and the hypothesis class is linear classifiers. In Theorem 4.4, the authors show that the clean test error is worse than the inherent noise rate. However, benign overfitting still occurs, though a larger $\epsilon$ for adversarial training hurts clean error. In Theorem 4.8, the authors upper bound the robust test error obtained by GDAT. 
The result shows that the adversarial error is certainly worse than the clean test error. Perhaps somewhat interestingly, this also shows that the perturbation radius needs to decrease with increasing $d$. | SP:4d60f820ffd5957c27b781138cf865ec86112d8a |
Benign Overfitting in Adversarially Robust Linear Classification | (same text as the first row) | Learning with an over-parametrized model is an important problem in machine learning, and the benign overfitting problem has garnered attention. The results shown in this paper have a significant implication in this context. In this paper, the authors investigate the phenomenon of benign overfitting in adversarial training. Surprisingly, the authors report that the benign overfitting phenomenon can be observed even in the case where adversarial training is used. The authors' approach is reasonable and should serve as a reference for future research. 
| SP:4d60f820ffd5957c27b781138cf865ec86112d8a |
Benign Overfitting in Adversarially Robust Linear Classification | 1 INTRODUCTION . Modern machine learning methods such as deep learning have made many breakthroughs in a variety of application domains , including image classification ( He et al. , 2016 ; Krizhevsky et al. , 2012 ) , speech recognition ( Hinton et al. , 2012 ) and etc . These models are typically over-parameterized : the number of model parameters far exceeds the size of the training samples . One mystery is that , these over-parameterized models can memorize noisy training data and yet still achieve quite good generalization performances on the test data ( Zhang et al. , 2017 ) . Many efforts have been made to explain this striking phenomenon , which against what the classical notion of overfitting might suggest . A line of research works ( Soudry et al. , 2018 ; Ji & Telgarsky , 2019b ; Nacson et al. , 2019 ; Gunasekar et al. , 2018b ; a ) shows that there exists the so-called implicit bias ( Neyshabur , 2017 ) : the training algorithms tend to converge to certain kinds of solutions even with no explicit regularization . Specifically , Soudry et al . ( 2018 ) ; Ji & Telgarsky ( 2019b ) ; Nacson et al . ( 2019 ) demonstrate that gradient descent trained linear classifiers on logistic or exponential loss with no regularization asymptotically converge to the maximum L2 margin classifier . Recent works ( Bartlett et al. , 2020 ; Chatterji & Long , 2020 ; Cao et al. , 2021 ; Wang & Thrampoulidis , 2021 ; Tsigler & Bartlett , 2020 ) further shows that over-parameterized and implicitly regularized interpolators can indeed achieve small test error , and formulate this phenomenon as “ benign overfitting ” . More concretely , suppose the classification model f is parameterized by θ ∈ Θ and the loss is denoted as ` ( · ) . The population risk is define as P ( x , y ) ∼D [ fθ ( x ) 6= y ] , where data pair ( x , y ) is generated from certain data generation model . 
Chatterji & Long ( 2020 ) shows that with sufficient over-parameterization , gradient descent trained maximum L2 margin classifier can achieve nearly optimal population risk on noisy data for data generated from a sub-Gaussian mixture model . This suggests that the overfitting can be “ benign ” in the overparameterized setting . Besides these studies on the benign overfitting phenomenon , another well-known feature of modern machine learning methods is that they are vulnerable to adversarial examples . Recent studies ( Szegedy et al. , 2013 ; Goodfellow et al. , 2015 ) show that modern machine learning systems are brittle : slight input perturbation that is imperceptible to human eyes could mislead a well-trained classifier into wrong classification result . These malicious inputs are also known as the adversarial examples ( Szegedy et al. , 2013 ; Goodfellow et al. , 2015 ) . Adversarial examples raise severe trustworthy issues and security concerns on the current machine learning systems especially in securitycritical applications . Various methods ( Kurakin et al. , 2016 ; Madry et al. , 2018 ; Zhang et al. , 2019 ; Wang et al. , 2019 ; 2020 ) have been proposed to defend against the threats posed by adversarial examples . One of the notable approaches is adversarial training ( Madry et al. , 2018 ) . Specifically , adversarial training solves the following min-max optimization problem , min θ∈Θ 1 n n∑ i=1 max x′i∈B p ( xi ) ` ( fθ ( x ′ i ) , yi ) , where { ( xi , yi ) } ni=1 is the training set and Bp ( xi ) = { x : ‖x−xi‖p ≤ } denotes the -ball around xi in ` p norm ( p ≥ 1 ) . Many empirical or theoretical studies have been conducted trying to analyze or further improve adversarial training robustness ( Zhang et al. , 2019 ; Rice et al. , 2020 ; Wang et al. , 2020 ; Carmon et al. , 2019 ; Wang et al. , 2019 ; Raghunathan et al. , 2020 ) . A recent work ( Sanyal et al. 
, 2021 ) also pointed out that normally trained interpolators with the presence of label noise are unlikely to be adversarially robust , while adversarially robust classifiers can not overfit noisy labels under certain conditions . However , it is still not clear whether the benign overfitting phenomenon occurs for extremely over-parameterized models in the presence of adversarial examples . In this paper , we show that benign overfitting indeed occurs in adversarial training . In order to properly characterize the benign overfitting phenomenon on adversarial training , we also define the population adversarial risk , which is the counterpart for population risk in standard training scenario : P ( x , y ) ∼D [ ∃x′ ∈ Bp ( x ) s.t. , fθ ( x′ ) 6= y ] . The adversarial risk measures the misclassification rate of the target classifier under the presence of ` p-norm adversarial perturbations . It is easy to observe that the adversarial risk is always larger than standard risk as it requires the classifier to correctly classify the data examples within the entire local ` p norm ball . We summarize our contributions of this paper in the following • We show that the benign overfitting phenomenon can occur in adversarially robust linear classifiers with sufficient over-parameterization . Specifically , under moderate ` p norm perturbations , adversarially trained linear classifiers can achieve the near-optimal standard and adversarial risks , in spite of overfitting the noisy training data . • When the perturbation strength is set to be 0 , our adversarial risk bound reduces to the standard one . The resulting standard risk bound extends Chatterji & Long ( 2020 ) ’ s risk bound to further characterize the behavior of the linear classifier trained by t-step gradient descent . • We show that depending on the value of p ( perturbation norm ) , the adversarial risk bound can be different . 
The higher value of p ( typically for p ≥ 2 case ) actually leads to a larger gap between the adversarial risk and the standard risk with the same . Notation . we use lower case letters to denote scalars and lower case bold face letters to denote vectors . For a vector x ∈ Rd , we denote its ` p norm ( p ≥ 1 ) of x by ‖x‖p = ( ∑d i=1 |xi|p ) 1/p , the ` ∞ norm of x by ‖x‖∞ = maxdi=1 |xi| . We denote x◦p as the element-wise p-power of x . For p ≥ 1 , we denote Bpr ( x ) as the ` p norm ball of radius r centered at x . Given two sequences { an } and { bn } , we write an = O ( bn ) if there exists a constant 0 < C < +∞ such that an ≤ C bn . We denote an = Ω ( bn ) if bn = O ( an ) . We denote an = Θ ( bn ) if an = O ( bn ) and an = Ω ( bn ) . 2 RELATED WORK . There exists a large body of works on adversarial training , implicit bias and benign overfitting . In this section , we review the most relevant works with ours . Adversarial Training . Adversarial training ( Madry et al. , 2018 ) and its variants ( Zhang et al. , 2019 ; Wang et al. , 2019 ; 2020 ) are currently the most effective type of approaches to empirically defend against adversarial examples ( Szegedy et al. , 2013 ; Goodfellow et al. , 2015 ) . And many attempts have been made to understand its empirical success . Charles et al . ( 2019 ) ; Li et al . ( 2020 ) showed that the adversarially trained linear classifier directionally converges to the maximum margin classifier . Gao et al . ( 2019 ) ; Zhang et al . ( 2020b ) showed that adversarial training with neural networks can achieve low robust training loss . Yet these conclusions can not explain the test ( population ) performances . Another line of research focuses on the generalization performance of adversarial training and the number of training samples . Schmidt et al . ( 2018 ) showed that adversarial models require more data than standard models to achieve certain test accuracy . Chen et al . 
( 2020 ) showed that more data may actually increase the gap between the generalization error of adversarially-trained models and standard models . Yin et al . ( 2019 ) ; Cullina et al . ( 2018 ) studied the adversarial Rademacher complexity and VC-dimensions . Some other works focus on the trade-off between robustness and natural accuracy ( Zhang et al. , 2019 ; Tsipras et al. , 2019 ; Wu et al. , 2020 ; Raghunathan et al. , 2020 ; Yang et al. , 2020 ; Dobriban et al. , 2020 ; Javanmard & Soltanolkotabi , 2020 ) , adversarial model complexity lower bound ( Allen-Zhu & Li , 2020 ) , as well as the provable robustness upper bound ( Fawzi et al. , 2018 ; Zhang et al. , 2020a ) . Recently , some works also focus on studying the learning of robust halfspaces and linear models . Montasser et al . ( 2020 ) studied the conditions on the adversarial perturbation sets under which halfspaces are robustly learnable in the presence of random label noise . Diakonikolas et al . ( 2020 ) studied the computational complexity of adversarially robust halfspaces under ` p norm perturbations . Zou et al . ( 2021 ) showed that adversarially trained halfspaces are provably robust with low robust classification error in the presence of noise . Dan et al . ( 2020 ) proposed an adversarial signal to noise ratio and studied the excess risk lower/upper bounds for learning Gaussian mixture models . Taheri et al . ( 2020 ) ; Javanmard & Soltanolkotabi ( 2020 ) studied adversarial learning of linear models on Gaussian mixture data where the data dimension and the number of training data points have a fixed ratio . Implicit Bias . Several recent works studied the implicit bias of various training algorithms in overparameterized models . Soudry et al . ( 2018 ) studied the implicit bias of gradient descent trained on linearly separable data while Ji & Telgarsky ( 2019b ) studied the non-separable case . Gunasekar et al . 
(2018a) studied the implicit bias of various optimization methods in linear regression and classification problems. Ji & Telgarsky (2019a) studied the implicit bias for deep linear networks, and Arora et al. (2019); Gunasekar et al. (2018b) studied the implicit bias for matrix factorization. Lyu & Li (2020) studied the implicit regularization of homogeneous neural networks with exponential and logistic losses. Benign Overfitting and Double Descent. A series of recent works have studied the “benign overfitting” phenomenon (Bartlett et al., 2020): when training over-parameterized models, classifiers can still achieve good population risk even when overfitting the noisy training data. Bartlett et al. (2020); Tsigler & Bartlett (2020) studied risk bounds for over-parameterized linear (ridge) regression and showed that, under certain settings, the interpolating linear model with minimum parameter norm can have asymptotically optimal risk. Chatterji & Long (2020); Cao et al. (2021); Wang & Thrampoulidis (2021) studied risk bounds for linear logistic regression and linear support vector machines. Belkin et al. (2018; 2019a; b); Hastie et al. (2019); Wu & Xu (2020) further quantified the dependence of the population risk on the degree of over-parameterization and showed that the curve has a double-descent shape. | The paper considers the analysis of benign overfitting in linear regression, first described in Bartlett et al. (2020), and extends it to the case of adversarial linear classification. Previously, Chatterji & Long (2020) had studied the non-adversarial classification case, so the main new result is an extension to the adversarial case. | SP:4d60f820ffd5957c27b781138cf865ec86112d8a
Partial Information as Full: Reward Imputation with Sketching in Bandits | 1 INTRODUCTION. Contextual bandits have been widely used in real-world sequential decision-making problems (Li et al., 2010; Lan & Baraniuk, 2016; Yom-Tov et al., 2017; Yang et al., 2021), where the agent updates the decision-making policy fully online (i.e., at each step) according to the context and the corresponding reward feedback, so as to maximize the cumulative reward. In this paper, we consider a more complex setting, contextual batched bandits (CBB), where the decision process is partitioned into N episodes: the agent interacts with the environment for B steps in one episode, collects the reward feedback and contexts at the end of the episode, and then updates the policy using the collected data for the next episode. CBB is more practical in some real-world applications, since updating the policy every time a reward is received is often unrealistic due to its high computational cost and decision instability. In bandit settings, it is inevitable that the environment reveals only the rewards of the executed actions to the agent, while hiding the rewards of the non-executed actions. We refer to this category of limited feedback as partial-information feedback (also called "bandit feedback"). Existing batched bandit approaches in the CBB setting discard the information contained in the potential rewards of the non-executed actions, and address the problem of partial-information feedback through an exploitation-exploration trade-off on the context and reward spaces (Han et al., 2020; Zhang et al., 2020). But in the CBB setting, the agent usually estimates and maintains reward models for the action-selection policy, and the potential rewards of the non-executed actions have already been captured, to some extent, by the policy.
This additional reward structure information is estimated and available in each episode, but it is not utilized by existing batched bandit approaches. In contextual bandit settings where the policy is updated fully online, several bias-correction approaches have been introduced to address partial-information feedback. Dimakopoulou et al. (2019) presented linear contextual bandits integrating the balancing approach from causal inference, which reweights the contexts and rewards by inverse propensity scores. Chou et al. (2015) designed pseudo-reward algorithms for contextual bandits, which use a direct method to estimate the unobserved rewards for the upper confidence bound (UCB) strategy. Kim & Paik (2019) focused on feedback bias-correction for the LASSO bandit with high-dimensional contexts, and applied the doubly-robust approach to reward modification using average contexts. Although these approaches have been demonstrated to be effective in contextual bandit settings, little effort has been devoted to addressing the under-utilization of partial-information feedback in the CBB setting. Theoretical and experimental analyses in Section 2 indicate that better performance in CBB is achievable if the rewards of the non-executed actions can be received. Motivated by these observations, we propose a novel reward imputation approach for the non-executed actions, which mimics the reward generation mechanisms of environments. We summarize our contributions as follows. • To fully utilize feedback information in CBB, we formulate reward imputation as a problem of imputation-regularized ridge regression, where the policy can be updated efficiently using sketching.
• We prove that our reward imputation approach obtains a relative-error bound for the sketching approximation; achieves an instantaneous regret with controllable bias and smaller variance than that without reward imputation; admits a lower bound on the sketch size that is independent of the overall number of steps; enjoys a sublinear regret bound against the optimal policy; and reduces the time complexity from O(Bd²) to O(cd²) for each action in one episode, where B denotes the batch size, c the sketch size, and d the dimensionality of the inputs, satisfying d < c < B. • We present two practical variants of our reward imputation approach, including a rate-scheduled version in which the imputation rate is set without tuning, and a version for nonlinear rewards. • We carried out extensive experiments on synthetic data, a public benchmark, and data collected from a real commercial product to demonstrate the performance of our approach, empirically analyzed the influence of different parameters, and verified the correctness of the theoretical results. Related Work. Recently, batched bandits have become an active research topic in statistics and learning theory, including 2-armed bandits (Perchet et al., 2016), multi-armed bandits (Gao et al., 2019; Zhang et al., 2020; Wang & Cheng, 2020), and contextual bandits (Han et al., 2020; Ren & Zhou, 2020; Gu et al., 2021). Han et al. (2020) studied batched linear contextual bandits and designed UCB-type algorithms for both stochastic and adversarial contexts, where the true rewards of different actions share the same parameters. Zhang et al. (2020) provided methods for inference on data collected in batches using bandits, and introduced a batched least squares estimator for both multi-armed and contextual bandits. Recently, Esfandiari et al. (2021) proved refined regret upper bounds for batched bandits in stochastic and adversarial settings.
There are several recent works that consider settings similar to CBB, e.g., episodic Markov decision processes (Jin et al., 2018) and LASSO bandits (Wang & Cheng, 2020). Sketching is another related technique, which compresses a large matrix into a much smaller one by multiplying it with a (usually random) matrix having certain properties (Woodruff, 2014); it has been used in online convex optimization (Calandriello et al., 2017; Zhang & Liao, 2019). 2 PROBLEM FORMULATION AND ANALYSIS. First, we introduce some notation. Let [x] = {1, 2, ..., x}; S ⊆ R^d be the context space; A = {A_j}_{j∈[M]} the action space containing M actions; [A; B] = [Aᵀ, Bᵀ]ᵀ; ‖A‖_F, ‖A‖_1, and ‖A‖_2 denote the Frobenius norm, 1-norm, and spectral norm of a matrix A, respectively; ‖a‖_1 and ‖a‖_2 the ℓ_1-norm and ℓ_2-norm of a vector a; and σ_min(A) and σ_max(A) the minimum and maximum singular values of A. In this paper, we focus on the setting of Contextual Batched Bandits (CBB) (see Algorithm 1), where the decision process is partitioned into N episodes, and in each episode CBB consists of two phases: 1) policy updating approximates the optimal policy based on the received contexts and rewards; 2) online decision selects actions for execution following the updated and then fixed policy p for B steps (B is also called the batch size), and stores the context-action pairs and the observed rewards of the executed actions in a data buffer D. The reward R in CBB is a partial-information feedback, where rewards are unobserved for the non-executed actions. Different from the existing batched bandit setting (Han et al., 2020; Esfandiari et al.
, 2021), where the true reward feedback for all actions is controlled by the same parameter vector while the contexts received differ by action at each step, we assume that in the CBB setting the mechanism of true reward feedback differs by action and the received context is shared across actions. Formally, for any context s_i ∈ S ⊆ R^d and action A ∈ A, we assume that the expectation of the true reward R^true_{i,A} is determined by an unknown action-specific reward parameter vector θ*_A ∈ R^d: E[R^true_{i,A} | s_i] = ⟨θ*_A, s_i⟩ (the linear reward will be extended to the nonlinear case in Section 5).

Algorithm 1 Contextual Batched Bandit (CBB)
INPUT: Batch size B, number of episodes N, action space A = {A_j}_{j∈[M]}, context space S ⊆ R^d
1: Initialize policy p_0 ← 1/M; sample data buffer D_1 = {(s_{0,b}, A_{I_{0,b}}, R_{0,b})}_{b∈[B]} using the initial policy p_0
2: for n = 1 to N do
3:   Update the policy p_n on D_n {Policy Updating}
4:   for b = 1 to B do
5:     Observe context s_{n,b}
6:     Choose A_{I_{n,b}} ∈ A following the updated policy p_n(s_{n,b}) {Online Decision}
7:   end for
8:   D_{n+1} ← {(s_{n,b}, A_{I_{n,b}}, R_{n,b})}_{b∈[B]}, where R_{n,b} denotes the reward of action A_{I_{n,b}} on context s_{n,b}
9: end for

This setting for reward feedback matches many real-world applications; e.g., each action corresponds to a different category of candidate coupons in coupon recommendation, and the reward feedback mechanism of each category differs due to the different discount pricing strategies. Next, we provide a deeper understanding of the influence of unobserved feedback on the performance of policy updating in the CBB setting. We first conducted an empirical comparison by applying the batch UCB policy (Han et al., 2020) to environments under different proportions of received reward feedback.
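The CBB protocol of Algorithm 1 and the action-specific linear reward model can be illustrated with a minimal simulation. The Gaussian contexts, the noise level, and the uniform within-episode policy below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, B, N = 5, 3, 100, 10            # context dim, #actions, batch size, #episodes
theta_star = rng.normal(size=(M, d))  # unknown action-specific reward parameters

def reward(a, s):
    # E[R^true | s] = <theta*_a, s>, plus illustrative Gaussian noise
    return float(theta_star[a] @ s + 0.1 * rng.normal())

buffers = []
for n in range(N):                    # one iteration = one episode of Algorithm 1
    # (a real agent would update its policy on buffers[-1] here)
    batch = []
    for b in range(B):                # B steps under a fixed within-episode policy
        s = rng.normal(size=d)        # observe context, shared by all actions
        a = int(rng.integers(M))      # online decision (uniform policy for illustration)
        batch.append((s, a, reward(a, s)))  # only the executed action's reward is revealed
    buffers.append(batch)
```

Note that each buffer entry contains exactly one reward; the rewards the other M−1 actions would have produced on the same context are never revealed, which is the partial-information feedback the paper sets out to exploit.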
In particular, an agent under full-information feedback receives the rewards of all actions, executed and non-executed, which we call the Full-Information CBB (FI-CBB) setting. From Figure 1, we observe that partial-information feedback can be damaging, in the sense of hurting policy updating, and that the batched bandit benefits from more reward feedback, with the performance of 80% feedback very close to that of FI-CBB. We then characterize the difference in instantaneous regrets between the CBB and FI-CBB settings in Theorem 1; the detailed description and proof can be found in Appendix A.

Theorem 1. For any action A ∈ A and context s_i ∈ S, let θ^n_A be the reward parameter vector estimated by the batched UCB policy in the n-th episode. The upper bound of the instantaneous regret (defined by |⟨θ^n_A, s_i⟩ − ⟨θ*_A, s_i⟩|) in the FI-CBB setting is tighter than that in the CBB setting (i.e., using partial-information feedback).

From Theorem 1, we conclude that the price paid for having access only to partial-information feedback is a deterioration in the regret bound. Ideally, the policy would be updated using full-information feedback. In CBB, however, full-information feedback is inaccessible. Fortunately, in CBB, different reward parameter vectors need to be maintained and estimated separately for each action, and the potential reward structures of the non-executed actions have been captured to some extent. So why not use these maintained reward parameters to estimate the unknown rewards of the non-executed actions? Next, we propose an efficient reward imputation approach that uses this additional reward structure information to improve the performance of the bandit policy. 3 EFFICIENT REWARD IMPUTATION FOR POLICY UPDATING. In this section, we present an efficient reward imputation approach tailored for policy updating in the CBB setting. Formulation of Reward Imputation.
Since the true reward parameters differ by action in the CBB setting, we maintain a separate estimated reward parameter vector for each action. As shown in Figure 2, in contrast to CBB, which ignores the contexts and rewards of the non-executed steps of A_j, our reward imputation approach completes the missing values using imputed contexts and rewards, approximating the full-information CBB setting. Specifically, in the (n+1)-th episode, for each action A_j ∈ A, j ∈ [M], we store the context vectors and rewards corresponding to the steps in which action A_j was executed in a context matrix S^n_{A_j} ∈ R^{N^n_j × d} and a reward vector R^n_{A_j} ∈ R^{N^n_j}, respectively, where N^n_j denotes the number of steps (in episode n+1) in which action A_j was executed. Additionally, for any A_j ∈ A, j ∈ [M], we store the context vectors corresponding to the non-executed steps of action A_j (denoting the number of non-executed steps by N̂^n_j, i.e., N̂^n_j = B − N^n_j) in an imputed context matrix Ŝ^n_{A_j} ∈ R^{N̂^n_j × d}, and compute the imputed reward vector as R̂^n_{A_j} = {r_{n,1}(A_j), r_{n,2}(A_j), ..., r_{n,N̂^n_j}(A_j)} ∈ R^{N̂^n_j}, j ∈ [M], where r_{n,b}(A_j) := ⟨θ̄^n_{A_j}, s_{n,b}⟩ denotes the imputed reward for each step b ∈ [N̂^n_j], and s_{n,b} is the b-th row of Ŝ^n_{A_j}. Then, we obtain several block matrices by concatenating the context and reward matrices from the previous episodes: L^n_{A_j} = [S^0_{A_j}; ···; S^n_{A_j}] ∈ R^{L^n_j × d} and T^n_{A_j} = [R^0_{A_j}; ···; R^n_{A_j}] ∈ R^{L^n_j}, with L^n_j = ∑_{k=0}^n N^k_j; and L̂^n_{A_j} = [Ŝ^0_{A_j}; ···; Ŝ^n_{A_j}] ∈ R^{L̂^n_j × d} and T̂^n_{A_j} = [R̂^0_{A_j}; ···; R̂^n_{A_j}] ∈ R^{L̂^n_j}, with L̂^n_j = ∑_{k=0}^n N̂^k_j. In the (n+1)-th episode, the estimated parameter vector θ̄^{n+1}_{A_j} of the imputed reward for action A_j can be updated by solving the following imputation-regularized ridge regression:

θ̄^{n+1}_{A_j} = argmin_{θ∈R^d} ‖L^n_{A_j} θ − T^n_{A_j}‖²₂ (Observed Term) + γ ‖L̂^n_{A_j} θ − T̂^n_{A_j}‖²₂ (Imputation Term) + λ‖θ‖²₂, n = 0, 1, ...
, N − 1, (1) where γ ∈ [0, 1] is the imputation rate, which controls the degree of reward imputation and trades off bias against variance (Remarks 1 & 2), and λ > 0 is the regularization parameter, yielding the closed-form least squares solution

θ̄^{n+1}_{A_j} = (Ψ^{n+1}_{A_j})^{−1} (b^{n+1}_{A_j} + γ b̂^{n+1}_{A_j}), (2)

where Ψ^{n+1}_{A_j} := λ I_d + Φ^{n+1}_{A_j} + γ Φ̂^{n+1}_{A_j}, and

Φ^{n+1}_{A_j} = Φ^n_{A_j} + S^{nᵀ}_{A_j} S^n_{A_j}, b^{n+1}_{A_j} = b^n_{A_j} + S^{nᵀ}_{A_j} R^n_{A_j}, (3)
Φ̂^{n+1}_{A_j} = η Φ̂^n_{A_j} + Ŝ^{nᵀ}_{A_j} Ŝ^n_{A_j}, b̂^{n+1}_{A_j} = η b̂^n_{A_j} + Ŝ^{nᵀ}_{A_j} R̂^n_{A_j}, (4)

and η ∈ (0, 1) is the discount parameter, which controls how fast the previous imputed rewards are forgotten and helps guarantee the regret bound in Theorem 2. Efficient Reward Imputation using Sketching. As shown in the first 4 columns of Table 1, the overall time complexity of the imputation for each action is O(Bd²) in each episode, where B is the batch size and d the dimensionality of the input. Thus, for all M actions in one episode, reward imputation increases the time complexity from O(Bd²) for the approach without imputation to O(MBd²). To address this issue, we design an efficient reward imputation approach using sketching, which reduces the time complexity of each action in one episode from O(Bd²) to O(cd²), where c denotes the sketch size, satisfying d < c < B and cd > B. Specifically, in the (n+1)-th episode, the imputation-regularized ridge regression in Eq.
(1) can be approximated by the sketched ridge regression

θ̃^{n+1}_{A_j} = argmin_{θ∈R^d} ‖Π^n_{A_j}(L^n_{A_j} θ − T^n_{A_j})‖²₂ + γ ‖Π̂^n_{A_j}(L̂^n_{A_j} θ − T̂^n_{A_j})‖²₂ + λ‖θ‖²₂, (5)

where θ̃^{n+1}_A denotes the estimated parameter vector of the imputed reward using sketching for action A ∈ A, and C^n_{A_j} ∈ R^{c × N^n_j} and Ĉ^n_{A_j} ∈ R^{c × N̂^n_j} are the sketch submatrices for the observed term and the imputation term, respectively; the sketch matrices for the two terms can be represented as Π^n_{A_j} = [C^0_{A_j}, C^1_{A_j}, ···, C^n_{A_j}] ∈ R^{c × L^n_j} and Π̂^n_{A_j} = [Ĉ^0_{A_j}, Ĉ^1_{A_j}, ···, Ĉ^n_{A_j}] ∈ R^{c × L̂^n_j}. We denote the sketches of the context matrix and the reward vector by Γ^n_{A_j} := C^n_{A_j} S^n_{A_j} ∈ R^{c × d} and Λ^n_{A_j} := C^n_{A_j} R^n_{A_j} ∈ R^c, and the sketches of the imputed context matrix and the imputed reward vector by Γ̂^n_{A_j} := Ĉ^n_{A_j} Ŝ^n_{A_j} ∈ R^{c × d} and Λ̂^n_{A_j} := Ĉ^n_{A_j} R̂^n_{A_j} ∈ R^c, and obtain the solution of Eq. (5):

θ̃^{n+1}_{A_j} = (W^{n+1}_{A_j})^{−1} (p^{n+1}_{A_j} + γ p̂^{n+1}_{A_j}), (6)

where η ∈ (0, 1) denotes the discount parameter, W^{n+1}_{A_j} := λ I_d + G^{n+1}_{A_j} + γ Ĝ^{n+1}_{A_j}, and

G^{n+1}_{A_j} = G^n_{A_j} + Γ^{nᵀ}_{A_j} Γ^n_{A_j}, p^{n+1}_{A_j} = p^n_{A_j} + Γ^{nᵀ}_{A_j} Λ^n_{A_j}, (7)
Ĝ^{n+1}_{A_j} = η Ĝ^n_{A_j} + Γ̂^{nᵀ}_{A_j} Γ̂^n_{A_j}, p̂^{n+1}_{A_j} = η p̂^n_{A_j} + Γ̂^{nᵀ}_{A_j} Λ̂^n_{A_j}. (8)

Using the parameter θ̃^{n+1}_{A_j}, we obtain the sketched version of the imputed reward as r̃_{n,b}(A_j) := ⟨θ̃^n_{A_j}, s_{n,b}⟩ at step b ∈ [N̂^n_j]. Finally, we specify that the sketch submatrices {C^n_A}_{A∈A, n∈[N]} and {Ĉ^n_A}_{A∈A, n∈[N]} are the block construction of the Sparser Johnson-Lindenstrauss Transform (SJLT) (Kane & Nelson, 2014), where the sketch size c is divisible by the number of blocks D.¹ As shown in the last 4 columns of Table 1, sketching reduces the time complexity from O(MBd²) to O(Mcd²) for the reward imputation of all M actions in one episode, where c < B. When Mc ≈ B, the overall time complexity of our reward imputation using sketching is even comparable to that without reward imputation, which has O(Bd²) time complexity.
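The imputation-regularized ridge regression of Eq. (1) and its sketched approximation in Eq. (5) can be sketched in code for a single action and a single episode (so the discount η plays no role). A one-block CountSketch stands in for the SJLT, and the data are synthetic; both are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
d, B, c = 8, 1024, 256               # input dim, batch size, sketch size (d < c < B)
gamma, lam = 0.5, 1.0                # imputation rate gamma, ridge parameter lambda

# one action, one episode: observed (L, T) and imputed (Lh, Th) contexts/rewards
theta_true = rng.normal(size=d)
L = rng.normal(size=(B // 2, d)); T = L @ theta_true + 0.1 * rng.normal(size=B // 2)
Lh = rng.normal(size=(B // 2, d)); Th = Lh @ theta_true

def ridge_imputed(L, T, Lh, Th):
    # closed form of Eq. (2): (lam*I + L'L + gamma*Lh'Lh)^{-1} (L'T + gamma*Lh'Th)
    Psi = lam * np.eye(d) + L.T @ L + gamma * (Lh.T @ Lh)
    return np.linalg.solve(Psi, L.T @ T + gamma * (Lh.T @ Th))

def countsketch(X, c, seed):
    # one-block SJLT (CountSketch): hash each row to one of c buckets with a random sign
    r = np.random.default_rng(seed)
    h = r.integers(c, size=X.shape[0])
    sgn = r.choice([-1.0, 1.0], size=X.shape[0])
    Y = np.zeros((c, X.shape[1]))
    np.add.at(Y, h, sgn[:, None] * X)
    return Y

# sketch the stacked [L | T] once so the same Pi multiplies contexts and rewards
SL, SLh = countsketch(np.c_[L, T], c, 2), countsketch(np.c_[Lh, Th], c, 3)
theta_exact = ridge_imputed(L, T, Lh, Th)
theta_sk = ridge_imputed(SL[:, :d], SL[:, d], SLh[:, :d], SLh[:, d])
rel_err = np.linalg.norm(theta_sk - theta_exact) / np.linalg.norm(theta_exact)
print(rel_err)  # relative error shrinks as the sketch size c grows
```

Sketching the stacked matrix [L | T] and then splitting it mirrors the identity Π(Lθ − T) = ΠLθ − ΠT, i.e., the same sketch submatrix must act on a term's contexts and rewards for Eq. (5) to be a faithful compression of Eq. (1).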
Updated Policy using Imputed Rewards. Inspired by the UCB strategy (Li et al., 2010), the updated policy for the online decision of the (n+1)-th episode can be formulated using the imputed rewards (parameterized by θ̄^{n+1}_A in Eq. (2)) or the sketched version of the imputed rewards (parameterized by θ̃^{n+1}_A in Eq. (6)). Specifically, for a new context s, the original policy p̄^{n+1} selects the action following A ← argmax_{A∈A} ⟨θ̄^{n+1}_A, s⟩ + ω [sᵀ(Ψ^{n+1}_A)^{−1} s]^{1/2}, and the sketched policy p̃^{n+1} selects the action following A ← argmax_{A∈A} ⟨θ̃^{n+1}_A, s⟩ + α [sᵀ(W^{n+1}_A)^{−1} s]^{1/2}, where ω ≥ 0 and α ≥ 0 are the regularization parameters of the policies, whose theoretical values are given in Theorem 4. We summarize the reward imputation using sketching and the sketched policy in Algorithm 2, called SPUIR. Similarly, we call the updating of the original policy that uses reward imputation without sketching Policy Updating with Imputed Rewards (PUIR). ¹Since we set the number of blocks of SJLT as D < d, we omit D in the complexity analysis.

Algorithm 2 Sketched Policy Updating with Imputed Rewards (SPUIR) in the (n+1)-th episode
INPUT: Policy p̃_n, D_{n+1}, A = {A_j}_{j∈[M]}, α ≥ 0, η ∈ (0, 1), γ ∈ [0, 1], λ > 0, W^0_{A_j} = λ I_d, G^0_{A_j} = Ĝ^0_{A_j} = O_d, p^0_{A_j} = p̂^0_{A_j} = 0, θ̃^0_{A_j} = 0 for j ∈ [M], batch size B, sketch size c, number of blocks D
OUTPUT: Updated policy p̃^{n+1}
1: For all j ∈ [M], store the context vectors and rewards corresponding to the steps in which action A_j was executed in Γ^n_{A_j} ∈ R^{N^n_j × d} and Λ^n_{A_j} ∈ R^{N^n_j}
2: For all j ∈ [M], store the context vectors corresponding to the steps in which action A_j was not executed in Γ̂^n_{A_j} ∈ R^{N̂^n_j × d}, where N̂^n_j ← B − N^n_j
3: r̃_{n,b}(A_j) ← ⟨θ̃^n_{A_j}, s_{n,b}⟩ for all A_j ∈ A and b ∈ [N̂^n_j], where s_{n,b} is the b-th row of Γ̂^n_{A_j}
4: Compute the imputed reward vector R̂^n_{A_j} ← {r̃_{n,1}(A_j), ...
, r̃_{n,N̂^n_j}(A_j)} ∈ R^{N̂^n_j} for any j ∈ [M]
5: for all actions A_j ∈ A do
6:   G^{n+1}_{A_j} ← G^n_{A_j} + Γ^{nᵀ}_{A_j} Γ^n_{A_j}, p^{n+1}_{A_j} ← p^n_{A_j} + Γ^{nᵀ}_{A_j} Λ^n_{A_j} {Eq. (7)}
7:   Ĝ^{n+1}_{A_j} ← η Ĝ^n_{A_j} + Γ̂^{nᵀ}_{A_j} Γ̂^n_{A_j}, p̂^{n+1}_{A_j} ← η p̂^n_{A_j} + Γ̂^{nᵀ}_{A_j} Λ̂^n_{A_j} {Eq. (8)}
8:   W^{n+1}_{A_j} ← λ I_d + G^{n+1}_{A_j} + γ Ĝ^{n+1}_{A_j}, θ̃^{n+1}_{A_j} ← (W^{n+1}_{A_j})^{−1} (p^{n+1}_{A_j} + γ p̂^{n+1}_{A_j}) {Eq. (6)}
9: end for
10: p̃^{n+1}(s) selects action A ← argmax_{A∈A} ⟨θ̃^{n+1}_A, s⟩ + α [sᵀ(W^{n+1}_A)^{−1} s]^{1/2} for a new context s
11: return {θ̃^{n+1}_A}_{A∈A}, {(W^{n+1}_A)^{−1}}_{A∈A} | This paper addresses batched contextual bandits, with a fixed set of actions, and separate unknown parameters for each action. The authors propose an approach where the unobserved rewards (i.e. the rewards that would have been obtained from actions that were not selected for a given context) are imputed, and these imputed values are incorporated into the regularization of the parameter estimates in a Lin-UCB-like algorithm. For reasons of computational efficiency, the authors also design a process to approximate this regularized estimator via a ‘sketching’ technique. For the approaches with and without sketching, the authors derive a $O(\sqrt{dMT})$ regret bound, where M is the number of actions, and d is the dimensionality of the parameter vectors, which broadly matches what is expected in the non-batched setting. They argue that the proposed algorithms have uniformly lower variance in the instantaneous regret than approaches without imputation and any increase in bias decreases exponentially quickly. Versions with time-adaptive parameters, and for non-linear functions are proposed without a full theoretical treatment, and all the proposed algorithms are shown to perform well in an empirical study. | SP:bf57f75331e1a69733f4573766af8862eca57804
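Step 10 of SPUIR, the sketched UCB action-selection rule, can be illustrated as follows; the per-action statistics W_A and θ̃_A below are random stand-ins for the quantities maintained in steps 6-8, used only to show the rule itself:

```python
import numpy as np

rng = np.random.default_rng(3)
d, M, alpha, lam = 6, 4, 0.5, 1.0     # context dim, #actions, UCB weight, ridge parameter

# illustrative per-action statistics: W_A positive definite, theta_A an estimate
W = np.empty((M, d, d))
for a_idx in range(M):
    X = rng.normal(size=(50, d))
    W[a_idx] = lam * np.eye(d) + X.T @ X
theta = rng.normal(size=(M, d))

def select_action(s, theta, W, alpha):
    # A <- argmax_A <theta_A, s> + alpha * [s^T W_A^{-1} s]^{1/2}
    scores = [theta[a] @ s + alpha * np.sqrt(s @ np.linalg.solve(W[a], s))
              for a in range(len(theta))]
    return int(np.argmax(scores))

s = rng.normal(size=d)
chosen = select_action(s, theta, W, alpha)
```

With alpha = 0 the rule reduces to pure exploitation of the imputed-reward estimates; the exploration bonus [sᵀ W_A^{−1} s]^{1/2} is large exactly for actions whose statistics are least informed about the direction of s.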
Partial Information as Full: Reward Imputation with Sketching in Bandits | 1 INTRODUCTION . Contextual bandits have been widely used in real-world sequential decision-making problems ( Li et al. , 2010 ; Lan & Baraniuk , 2016 ; Yom-Tov et al. , 2017 ; Yang et al. , 2021 ) , where the agent updates the decision-making policy fully online ( i.e. , at each step ) according to the context and corresponding reward feedback so as to maximize the cumulative reward . In this paper , we consider a more complex setting—contextual batched bandits ( CBB ) , where the decision process is partitioned into N episodes , the agent interacts with the environment for B steps in one episode , collects the reward feedbacks and contexts at the end of the episode , and then updates the policy using the collected data for the next episode . CBB is more practical in some real-world applications , since updating the policy once receiving the reward feedback is rather unrealistic due to its high computational cost and decision instability . In bandit settings , it is inevitable that the environment only reveals the rewards of the executed actions to the agent as the feedbacks , while hiding the rewards of non-executed actions . We refer to this category of limited feedback as the partial-information feedback ( also called “ bandit feedback ” ) . Existing batched bandit approaches in the CBB setting discard the information contained in the potential rewards of the non-executed actions , address the problem of partial-information feedback using an exploitation-exploration tradeoff on the context space and reward space ( Han et al. , 2020 ; Zhang et al. , 2020 ) .But in the CBB setting , the agent usually estimates and maintains reward models for the action-selection policy , and the potential rewards of the non-executed actions have been somehow captured by the policy . 
This additional reward structure information is estimated and available in each episode , however , are not utilized by existing batched bandit approaches . In contextual bandit settings where the policy is updated fully online , several bias-correction approaches have been introduced to address the partial-information feedback . Dimakopoulou et al . ( 2019 ) presented linear contextual bandits integrating the balancing approach from causal inference , which reweight the contexts and rewards by the inverse propensity scores . Chou et al . ( 2015 ) designed pseudo-reward algorithms for contextual bandits , which use a direct method to estimate the unobserved rewards for the upper confidence bound ( UCB ) strategy . Kim & Paik ( 2019 ) focused on the feedback bias-correction for LASSO bandit with high-dimensional contexts , and applied the doubly-robust approach to the reward modification using average contexts . Although these approaches have been demonstrated to be effective in contextual bandit settings , little efforts have been spent to address the under-utilization of partial-information feedback in the CBB setting . Theoretical and experimental analyses in Section 2 indicate that better performance of CBB is achievable if the rewards of the non-executed actions can be received . Motivated by these observations , we propose a novel reward imputation approach for the non-executed actions , which mimics the reward generation mechanisms of environments . We conclude our contributions as follows . • To fully utilize feedback information in CBB , we formulate the reward imputation as a problem of imputation regularized ridge regression , where the policy can be updated efficiently using sketching . 
• We prove that our reward imputation approach obtains a relative-error bound for sketching approximation , achieves an instantaneous regret with a controllable bias and a smaller variance than that without reward imputation , has a lower bound of the sketch size independently of the overall number of steps , enjoys a sublinear regret bound against the optimal policy , and reduces the time complexity from O ( Bd2 ) to O ( cd2 ) for each action in one episode , where B denotes the batch size , c the sketch size , and d the dimensionality of inputs , satisfying d < c < B . • We present two practical variants of our reward imputation approach , including the rate-scheduled version in which the imputation rate is set without tuning , and the version for nonlinear rewards . • We carried out extensive experiments on the synthetic data , public benchmark , and the data collected from a real commercial product to demonstrate our performance , empirically analyzed the influence of different parameters , and verified the correctness of the theoretical results . Related Work . Recently , batched bandit has become an active research topic in statistics and learning theory including 2-armed bandit ( Perchet et al. , 2016 ) , multi-armed bandit ( Gao et al. , 2019 ; Zhang et al. , 2020 ; Wang & Cheng , 2020 ) , and contextual bandit ( Han et al. , 2020 ; Ren & Zhou , 2020 ; Gu et al. , 2021 ) . Han et al . ( 2020 ) defined linear contextual bandits , and designed UCB-type algorithms for both stochastic and adversarial contexts , where true rewards of different actions have the same parameters . Zhang et al . ( 2020 ) provided methods for inference on data collected in batches using bandits , and introduced a batched least squares estimator for both multi-arm and contextual bandits . Recently , Esfandiari et al . ( 2021 ) proved refined regret upper bounds of batched bandits in stochastic and adversarial settings . 
There are several recent works that consider similar settings to CBB , e.g. , episodic Markov decision process ( Jin et al. , 2018 ) , LASSO bandits ( Wang & Cheng , 2020 ) . Sketching is another related technology that compresses a large matrix to a much smaller one by multiplying a ( usually ) random matrix with certain properties ( Woodruff , 2014 ) , which has been used in online convex optimization ( Calandriello et al. , 2017 ; Zhang & Liao , 2019 ) . 2 PROBLEM FORMULATION AND ANALYSIS . First , we introduce some notations . Let [ x ] = { 1 , 2 , . . . , x } , S ⊆ Rd be the context space , A = { Aj } j∈ [ M ] the action space containing M actions , [ A ; B ] = [ Aᵀ , Bᵀ ] ᵀ , ‖A‖F , ‖A‖1 ‖A‖2 denote the Frobenius norm , 1-norm , and spectral norm of a matrix A , respectively , ‖a‖1 and ‖a‖2 be the ` 1-norm and the ` 2-norm of a vector a , σmin ( A ) and σmax ( A ) denote the minimum and maximum of the of singular values of A . In this paper , we focus on the setting of Contextual Batched Bandits ( CBB ) ( see Algorithm 1 ) , where the decision process is partitioned into N episodes , and in each episode , CBB consists of two phases : 1 ) the policy updating approximates the optimal policy based on the received contexts and rewards ; 2 ) the online decision selects actions for execution following the updated and fixed policy p for B steps ( also called the batch size is B ) , and stores the contextaction pairs and the observed rewards of the executed actions into a data buffer D. The reward R in CBB is a partial-information feedback where rewards are unobserved for the non-executed actions . Different from existing batch bandit setting ( Han et al. , 2020 ; Esfandiari et al. 
, 2021 ) , where the true reward feedbacks for all actions are controlled by the same parameter vector while the contexts received differ by actions at each step , we assume that in the CBB setting , the mechanism of true reward feedback differs by actions and the received context is shared by actions . Formally , for any context si ∈ S ⊆ Rd and action A ∈ A , we assume that the expectation of the true reward Rtruei , A Algorithm 1 Contextual Batched Bandit ( CBB ) INPUT : Batch size B , number of episodes N , action space A = { Aj } j∈ [ M ] , context space S ⊆ Rd 1 : Initialize policy p0 ← 1/M , sample data buffer D1 = { ( s0 , b , AI0 , b , R0 , b ) } b∈ [ B ] using initial policy p0 2 : for n = 1 to N do 3 : Update the policy pn on Dn { Policy Updating } 4 : for b = 1 to B do 5 : Observe context sn , b 6 : Choose AIn , b ∈ A following the updated policy pn ( sn , b ) { Online Decision } 7 : end for 8 : Dn+1 ← { ( sn , b , AIn , b , Rn , b } b∈ [ B ] , where Rn , b denotes the reward of action AIn , b on context sn , b 9 : end for is determined by an unknown action-specific reward parameter vector θ∗A ∈ Rd : E [ Rtruei , A | si ] = 〈θ∗A , si〉 ( the linear reward will be extended to the nonlinear case in Section 5 ) . This setting for reward feedback matches many real-world applications , e.g. , each action corresponds to a different category of candidate coupons in coupon recommendation , and the reward feedback mechanism of each category differs due to the different discount pricing strategies . Next , we provide deeper understandings of the influence of unobserved feedbacks on the performance of policy updating in the CBB setting . We first conducted an empirical comparison by applying the batch UCB policy ( Han et al. , 2020 ) to environments under different proportions of received reward feedbacks . 
In particular , the agent under full-information feedback can receive all the rewards of the executed and non-executed actions , called Full-Information CBB ( FICBB ) setting . From Figure 1 , we can observe that the partial-information feedbacks could be damaging in terms of hurting the policy updating , and batched bandit could benefit from more reward feedbacks , where the performance of 80 % feedback is very close to that of FI-CBB . Then , we demonstrate the difference of instantaneous regrets between the CBB and FI-CBB settings as in Theorem 1 . The detailed description and proof of Theorem 1 can be found in Appendix A. Theorem 1 . For any action A ∈ A and context si ∈ S , let θnA be the reward parameter vector estimated by the batched UCB policy in the nth episode . The upper bound of instantaneous regret ( defined by |〈θnA , si〉 − 〈θ∗A , si〉| ) in the FI-CBB setting is tighter than that in CBB setting ( i.e. , using the partial-information feedback ) . From Theorem 1 , we can conclude that the price paid for having access only to the partial-information feedbacks is the deterioration in the regret bound . Ideally , the policy would be updated using the fullinformation feedback . In CBB , however , the fullinformation feedback is unaccessible . Fortunately , in CBB , different reward parameter vectors need to be maintained and estimated separately for each action , and the potential reward structures of the non-executed actions have been somehow captured . So why don ’ t we use these maintained reward parameters to estimate the unknown rewards for the non-executed actions ? Next , we propose an efficient reward imputation approach that uses this additional reward structure information for improving the performance of the bandit policy . 3 EFFICIENT REWARD IMPUTATION FOR POLICY UPDATING . In this section , we present an efficient reward imputation approach tailored for policy updating in the CBB setting . Formulation of Reward Imputation . 
Since the true reward parameters differ by actions in the CBB setting , we need to maintain a different estimated reward parameter vector for each action . As shown in Figure 2 , in contrast to CBB that ignores the contexts and rewards of the non-executed steps of A_j , our reward imputation approach completes the missing values using the imputed contexts and rewards , approximating the full-information CBB setting . Specifically , in the ( n+1 ) -th episode , for each action A_j ∈ A , j ∈ [ M ] , we store the context vectors and rewards corresponding to the steps in which the action A_j is executed into a context matrix S^n_{A_j} ∈ R^{N^n_j × d} and a reward vector R^n_{A_j} ∈ R^{N^n_j} , respectively , where N^n_j denotes the number of steps ( in episode n+1 ) in which the action A_j is executed . Additionally , for any A_j ∈ A , j ∈ [ M ] , we store the context vectors corresponding to the non-executed steps of action A_j ( denote the number of non-executed steps by N̂^n_j , i.e. , N̂^n_j = B − N^n_j ) into an imputed context matrix Ŝ^n_{A_j} ∈ R^{N̂^n_j × d} , and compute the imputed reward vector as follows : R̂^n_{A_j} = { r_{n,1} ( A_j ) , r_{n,2} ( A_j ) , . . . , r_{n,N̂^n_j} ( A_j ) } ∈ R^{N̂^n_j} , j ∈ [ M ] , where r_{n,b} ( A_j ) : = 〈θ̄^n_{A_j} , s_{n,b}〉 denotes the imputed reward for each step b ∈ [ N̂^n_j ] , and s_{n,b} is the b-th row of Ŝ^n_{A_j} . Then , we obtain several block matrices by concatenating the context and reward matrices from the previous episodes : L^n_{A_j} = [ S^0_{A_j} ; · · · ; S^n_{A_j} ] ∈ R^{L^n_j × d} , T^n_{A_j} = [ R^0_{A_j} ; · · · ; R^n_{A_j} ] ∈ R^{L^n_j} , L^n_j = ∑_{k=0}^{n} N^k_j , L̂^n_{A_j} = [ Ŝ^0_{A_j} ; · · · ; Ŝ^n_{A_j} ] ∈ R^{L̂^n_j × d} , T̂^n_{A_j} = [ R̂^0_{A_j} ; · · · ; R̂^n_{A_j} ] ∈ R^{L̂^n_j} , L̂^n_j = ∑_{k=0}^{n} N̂^k_j . In the ( n+1 ) -th episode , the estimated parameter vector θ̄^{n+1}_{A_j} of the imputed reward for action A_j can be updated by solving the following imputation regularized ridge regression : θ̄^{n+1}_{A_j} = arg min_{θ ∈ R^d} ‖ L^n_{A_j} θ − T^n_{A_j} ‖^2_2 ( Observed Term ) + γ ‖ L̂^n_{A_j} θ − T̂^n_{A_j} ‖^2_2 ( Imputation Term ) + λ ‖θ‖^2_2 , n = 0 , 1 , . . .
, N − 1 , ( 1 ) where γ ∈ [ 0 , 1 ] is the imputation rate , which controls the degree of reward imputation and trades off bias against variance ( Remarks 1 & 2 ) , and λ > 0 is the regularization parameter . This yields the closed-form least-squares solution θ̄^{n+1}_{A_j} = ( Ψ^{n+1}_{A_j} )^{−1} ( b^{n+1}_{A_j} + γ b̂^{n+1}_{A_j} ) , ( 2 ) where Ψ^{n+1}_{A_j} : = λ I_d + Φ^{n+1}_{A_j} + γ Φ̂^{n+1}_{A_j} , Φ^{n+1}_{A_j} = Φ^n_{A_j} + S^{nᵀ}_{A_j} S^n_{A_j} , b^{n+1}_{A_j} = b^n_{A_j} + S^{nᵀ}_{A_j} R^n_{A_j} , ( 3 ) Φ̂^{n+1}_{A_j} = η Φ̂^n_{A_j} + Ŝ^{nᵀ}_{A_j} Ŝ^n_{A_j} , b̂^{n+1}_{A_j} = η b̂^n_{A_j} + Ŝ^{nᵀ}_{A_j} R̂^n_{A_j} , ( 4 ) and η ∈ ( 0 , 1 ) is the discount parameter , which controls how fast the previous imputed rewards are forgotten and helps guarantee the regret bound in Theorem 2 . Efficient Reward Imputation using Sketching . As shown in the first 4 columns of Table 1 , the overall time complexity of the imputation for each action is O ( Bd^2 ) in each episode , where B represents the batch size and d the dimensionality of the input . Thus , for all the M actions in one episode , reward imputation increases the time complexity from O ( Bd^2 ) of the approach without imputation to O ( MBd^2 ) . To address this issue , we design an efficient reward imputation approach using sketching , which reduces the time complexity of each action in one episode from O ( Bd^2 ) to O ( cd^2 ) , where c denotes the sketch size satisfying d < c < B and cd > B . Specifically , in the ( n+1 ) -th episode , the imputation regularized ridge regression Eq .
( 1 ) can be approximated by a sketched ridge regression as θ̃^{n+1}_{A_j} = arg min_{θ ∈ R^d} ‖ Π^n_{A_j} ( L^n_{A_j} θ − T^n_{A_j} ) ‖^2_2 + γ ‖ Π̂^n_{A_j} ( L̂^n_{A_j} θ − T̂^n_{A_j} ) ‖^2_2 + λ ‖θ‖^2_2 , ( 5 ) where θ̃^{n+1}_A denotes the estimated parameter vector of the imputed reward using sketching for action A ∈ A , C^n_{A_j} ∈ R^{c × N^n_j} and Ĉ^n_{A_j} ∈ R^{c × N̂^n_j} are the sketch submatrices for the observed term and the imputation term , respectively , and the sketch matrices for the two terms can be represented as Π^n_{A_j} = [ C^0_{A_j} , C^1_{A_j} , · · · , C^n_{A_j} ] ∈ R^{c × L^n_j} , Π̂^n_{A_j} = [ Ĉ^0_{A_j} , Ĉ^1_{A_j} , · · · , Ĉ^n_{A_j} ] ∈ R^{c × L̂^n_j} . We denote the sketches of the context matrix and the reward vector by Γ^n_{A_j} : = C^n_{A_j} S^n_{A_j} ∈ R^{c × d} and Λ^n_{A_j} : = C^n_{A_j} R^n_{A_j} ∈ R^c , the sketches of the imputed context matrix and the imputed reward vector by Γ̂^n_{A_j} : = Ĉ^n_{A_j} Ŝ^n_{A_j} ∈ R^{c × d} and Λ̂^n_{A_j} : = Ĉ^n_{A_j} R̂^n_{A_j} ∈ R^c , and obtain the solution of Eq . ( 5 ) : θ̃^{n+1}_{A_j} = ( W^{n+1}_{A_j} )^{−1} ( p^{n+1}_{A_j} + γ p̂^{n+1}_{A_j} ) , ( 6 ) where η ∈ ( 0 , 1 ) denotes the discount parameter , W^{n+1}_{A_j} : = λ I_d + G^{n+1}_{A_j} + γ Ĝ^{n+1}_{A_j} , and G^{n+1}_{A_j} = G^n_{A_j} + Γ^{nᵀ}_{A_j} Γ^n_{A_j} , p^{n+1}_{A_j} = p^n_{A_j} + Γ^{nᵀ}_{A_j} Λ^n_{A_j} , ( 7 ) Ĝ^{n+1}_{A_j} = η Ĝ^n_{A_j} + Γ̂^{nᵀ}_{A_j} Γ̂^n_{A_j} , p̂^{n+1}_{A_j} = η p̂^n_{A_j} + Γ̂^{nᵀ}_{A_j} Λ̂^n_{A_j} . ( 8 ) Using the parameter θ̃^{n+1}_{A_j} , we obtain the sketched version of the imputed reward as r̃_{n,b} ( A_j ) : = 〈θ̃^n_{A_j} , s_{n,b}〉 at step b ∈ [ N̂^n_j ] . Finally , we specify that the sketch submatrices { C^n_A } _{A ∈ A , n ∈ [ N ] } and { Ĉ^n_{A_j} } _{A ∈ A , n ∈ [ N ] } are the block construction of the Sparser Johnson-Lindenstrauss Transform ( SJLT ) ( Kane & Nelson , 2014 ) , where the sketch size c is divisible by the number of blocks D ( see footnote 1 ) . As shown in the last 4 columns of Table 1 , sketching reduces the time complexity from O ( MBd^2 ) to O ( Mcd^2 ) for reward imputation of all M actions in one episode , where c < B . When Mc ≈ B , the overall time complexity of our reward imputation using sketching is even comparable to that without reward imputation , which has an O ( Bd^2 ) time complexity .
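As a rough illustration of why sketching helps, the snippet below applies a sparse random sketch in the spirit of SJLT to a context matrix: the c×d sketched matrix replaces the B×d original, so downstream Gram-matrix computations cost O(cd^2) instead of O(Bd^2). This is a simplified stand-in, not the paper's exact block construction, and all sizes are made-up assumptions.

```python
import numpy as np

def sparse_sketch(c, n, D, rng):
    """SJLT-style sparse sketch: each input coordinate receives one
    +/- 1/sqrt(D) entry in each of D blocks of size c/D (simplified)."""
    assert c % D == 0
    block = c // D
    S = np.zeros((c, n))
    for j in range(n):
        for b in range(D):
            row = b * block + rng.integers(block)   # one nonzero per block
            S[row, j] = rng.choice([-1.0, 1.0]) / np.sqrt(D)
    return S

rng = np.random.default_rng(0)
B, d, c, D = 64, 5, 16, 2
X = rng.normal(size=(B, d))        # context matrix, B x d
C = sparse_sketch(c, B, D, rng)    # sketch submatrix, c x B
Gamma = C @ X                      # sketched contexts: now c x d instead of B x d
# the sketched Gram matrix approximates the full one in expectation
err = np.linalg.norm(Gamma.T @ Gamma - X.T @ X) / np.linalg.norm(X.T @ X)
```

The approximation error shrinks as the sketch size c grows, which is the trade-off the relative-error bound in the paper quantifies.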
Updated Policy using Imputed Rewards . Inspired by the UCB strategy ( Li et al. , 2010 ) , the updated policy for online decision in the ( n+1 ) -th episode can be formulated using the imputed rewards ( parameterized by θ̄^{n+1}_A in Eq . ( 2 ) ) or the sketched version of the imputed rewards ( parameterized by θ̃^{n+1}_A in Eq . ( 6 ) ) . Specifically , for a new context s , the original policy p̄^{n+1} selects the action following A ← arg max_{A ∈ A} 〈θ̄^{n+1}_A , s〉 + ω [ sᵀ ( Ψ^{n+1}_A )^{−1} s ]^{1/2} , and the sketched policy p̃^{n+1} selects the action following A ← arg max_{A ∈ A} 〈θ̃^{n+1}_A , s〉 + α [ sᵀ ( W^{n+1}_A )^{−1} s ]^{1/2} , where ω ≥ 0 and α ≥ 0 are the regularization parameters of the policies , whose theoretical values are given in Theorem 4 . We summarize the reward imputation using sketching and the sketched policy in Algorithm 2 , called SPUIR . Similarly , we call the updating of the original policy that uses reward imputation without sketching the Policy Updating with Imputed Rewards ( PUIR ) . Footnote 1 : Since we set the number of blocks of SJLT as D < d , we omit D in the complexity analysis .
Algorithm 2 Sketched Policy Updating with Imputed Rewards ( SPUIR ) in the ( n+1 ) -th episode
INPUT : Policy p̃^n , D_{n+1} , A = { A_j } _{j ∈ [ M ] } , α ≥ 0 , η ∈ ( 0 , 1 ) , γ ∈ [ 0 , 1 ] , λ > 0 , W^0_{A_j} = λ I_d , G^0_{A_j} = Ĝ^0_{A_j} = O_d , p^0_{A_j} = p̂^0_{A_j} = 0 , θ̃^0_{A_j} = 0 , j ∈ [ M ] , batch size B , sketch size c , number of blocks D
OUTPUT : Updated policy p̃^{n+1}
1 : For all j ∈ [ M ] , store the context vectors and rewards corresponding to the steps in which the action A_j is executed into Γ^n_{A_j} ∈ R^{N^n_j × d} and Λ^n_{A_j} ∈ R^{N^n_j}
2 : For all j ∈ [ M ] , store the context vectors corresponding to the steps in which the action A_j is not executed into Γ̂^n_{A_j} ∈ R^{N̂^n_j × d} , where N̂^n_j ← B − N^n_j
3 : r̃_{n,b} ( A_j ) ← 〈θ̃^n_{A_j} , s_{n,b}〉 , for all A_j ∈ A and b ∈ [ N̂^n_j ] , where s_{n,b} is the b-th row of Γ̂^n_{A_j}
4 : Compute the imputed reward vector R̂^n_{A_j} ← { r̃_{n,1} ( A_j ) , . . .
, r̃_{n,N̂^n_j} ( A_j ) } ∈ R^{N̂^n_j} for any j ∈ [ M ]
5 : for all actions A_j ∈ A do
6 :   G^{n+1}_{A_j} ← G^n_{A_j} + Γ^{nᵀ}_{A_j} Γ^n_{A_j} , p^{n+1}_{A_j} ← p^n_{A_j} + Γ^{nᵀ}_{A_j} Λ^n_{A_j} { Eq . ( 7 ) }
7 :   Ĝ^{n+1}_{A_j} ← η Ĝ^n_{A_j} + Γ̂^{nᵀ}_{A_j} Γ̂^n_{A_j} , p̂^{n+1}_{A_j} ← η p̂^n_{A_j} + Γ̂^{nᵀ}_{A_j} Λ̂^n_{A_j} { Eq . ( 8 ) }
8 :   W^{n+1}_{A_j} ← λ I_d + G^{n+1}_{A_j} + γ Ĝ^{n+1}_{A_j} , θ̃^{n+1}_{A_j} ← ( W^{n+1}_{A_j} )^{−1} ( p^{n+1}_{A_j} + γ p̂^{n+1}_{A_j} ) { Eq . ( 6 ) }
9 : end for
10 : p̃^{n+1} ( s ) selects action A ← arg max_{A ∈ A} 〈θ̃^{n+1}_A , s〉 + α [ sᵀ ( W^{n+1}_A )^{−1} s ]^{1/2} for a new context s
11 : return { θ̃^{n+1}_A } _{A ∈ A} , { ( W^{n+1}_A )^{−1} } _{A ∈ A} | The paper considers the contextual batched bandit setting and introduces the idea of imputation utilizing the non-executed actions in each batch. This provides better regret properties than without imputation, and a further speedup is provided by considering the sketched version. Theoretical results in terms of sketching performance are also provided, such as going down from O(Bd^2) to O(cd^2) for sketch size c, as well as the regret bounds of the sketched approach SPUIR. Experimental results are shown on a couple of datasets to showcase the improved performance over state-of-the-art batched bandit algorithms such as BEXP3 and BLTS-B. | SP:bf57f75331e1a69733f4573766af8862eca57804
Partial Information as Full: Reward Imputation with Sketching in Bandits | 1 INTRODUCTION . Contextual bandits have been widely used in real-world sequential decision-making problems ( Li et al. , 2010 ; Lan & Baraniuk , 2016 ; Yom-Tov et al. , 2017 ; Yang et al. , 2021 ) , where the agent updates the decision-making policy fully online ( i.e. , at each step ) according to the context and corresponding reward feedback so as to maximize the cumulative reward . In this paper , we consider a more complex setting—contextual batched bandits ( CBB ) , where the decision process is partitioned into N episodes , the agent interacts with the environment for B steps in one episode , collects the reward feedbacks and contexts at the end of the episode , and then updates the policy using the collected data for the next episode . CBB is more practical in some real-world applications , since updating the policy as soon as each reward feedback is received is rather unrealistic due to its high computational cost and decision instability . In bandit settings , it is inevitable that the environment only reveals the rewards of the executed actions to the agent as the feedbacks , while hiding the rewards of non-executed actions . We refer to this category of limited feedback as the partial-information feedback ( also called “ bandit feedback ” ) . Existing batched bandit approaches in the CBB setting discard the information contained in the potential rewards of the non-executed actions , and address the problem of partial-information feedback using an exploitation-exploration tradeoff on the context space and reward space ( Han et al. , 2020 ; Zhang et al. , 2020 ) . But in the CBB setting , the agent usually estimates and maintains reward models for the action-selection policy , and the potential rewards of the non-executed actions are already captured by the policy to some extent .
This additional reward structure information is estimated and available in each episode ; however , it is not utilized by existing batched bandit approaches . In contextual bandit settings where the policy is updated fully online , several bias-correction approaches have been introduced to address the partial-information feedback . Dimakopoulou et al . ( 2019 ) presented linear contextual bandits integrating the balancing approach from causal inference , which reweights the contexts and rewards by the inverse propensity scores . Chou et al . ( 2015 ) designed pseudo-reward algorithms for contextual bandits , which use a direct method to estimate the unobserved rewards for the upper confidence bound ( UCB ) strategy . Kim & Paik ( 2019 ) focused on the feedback bias-correction for the LASSO bandit with high-dimensional contexts , and applied the doubly-robust approach to the reward modification using average contexts . Although these approaches have been demonstrated to be effective in contextual bandit settings , little effort has been spent on addressing the under-utilization of partial-information feedback in the CBB setting . Theoretical and experimental analyses in Section 2 indicate that better performance in CBB is achievable if the rewards of the non-executed actions can be received . Motivated by these observations , we propose a novel reward imputation approach for the non-executed actions , which mimics the reward generation mechanisms of environments . We summarize our contributions as follows .
• To fully utilize feedback information in CBB , we formulate the reward imputation as a problem of imputation regularized ridge regression , where the policy can be updated efficiently using sketching .
• We prove that our reward imputation approach obtains a relative-error bound for the sketching approximation , achieves an instantaneous regret with a controllable bias and a smaller variance than that without reward imputation , admits a lower bound on the sketch size that is independent of the overall number of steps , enjoys a sublinear regret bound against the optimal policy , and reduces the time complexity from O ( Bd^2 ) to O ( cd^2 ) for each action in one episode , where B denotes the batch size , c the sketch size , and d the dimensionality of inputs , satisfying d < c < B .
• We present two practical variants of our reward imputation approach , including the rate-scheduled version , in which the imputation rate is set without tuning , and the version for nonlinear rewards .
• We carried out extensive experiments on synthetic data , a public benchmark , and data collected from a real commercial product to demonstrate our performance , empirically analyzed the influence of different parameters , and verified the correctness of the theoretical results .
Related Work . Recently , batched bandits have become an active research topic in statistics and learning theory , including 2-armed bandits ( Perchet et al. , 2016 ) , multi-armed bandits ( Gao et al. , 2019 ; Zhang et al. , 2020 ; Wang & Cheng , 2020 ) , and contextual bandits ( Han et al. , 2020 ; Ren & Zhou , 2020 ; Gu et al. , 2021 ) . Han et al . ( 2020 ) defined linear contextual bandits , and designed UCB-type algorithms for both stochastic and adversarial contexts , where true rewards of different actions have the same parameters . Zhang et al . ( 2020 ) provided methods for inference on data collected in batches using bandits , and introduced a batched least squares estimator for both multi-arm and contextual bandits . Recently , Esfandiari et al . ( 2021 ) proved refined regret upper bounds of batched bandits in stochastic and adversarial settings .
There are several recent works that consider settings similar to CBB , e.g. , episodic Markov decision processes ( Jin et al. , 2018 ) and LASSO bandits ( Wang & Cheng , 2020 ) . Sketching is another related technology that compresses a large matrix to a much smaller one by multiplying it with a ( usually ) random matrix with certain properties ( Woodruff , 2014 ) , and it has been used in online convex optimization ( Calandriello et al. , 2017 ; Zhang & Liao , 2019 ) . 2 PROBLEM FORMULATION AND ANALYSIS . First , we introduce some notations . Let [ x ] = { 1 , 2 , . . . , x } , let S ⊆ R^d be the context space , let A = { A_j } _{j ∈ [ M ] } be the action space containing M actions , let [ A ; B ] = [ Aᵀ , Bᵀ ] ᵀ , let ‖A‖F , ‖A‖1 , and ‖A‖2 denote the Frobenius norm , 1-norm , and spectral norm of a matrix A , respectively , let ‖a‖1 and ‖a‖2 be the ℓ1-norm and the ℓ2-norm of a vector a , and let σmin ( A ) and σmax ( A ) denote the minimum and maximum of the singular values of A . In this paper , we focus on the setting of Contextual Batched Bandits ( CBB ) ( see Algorithm 1 ) , where the decision process is partitioned into N episodes , and in each episode , CBB consists of two phases : 1 ) the policy updating approximates the optimal policy based on the received contexts and rewards ; 2 ) the online decision selects actions for execution following the updated and fixed policy p for B steps ( i.e. , the batch size is B ) , and stores the context-action pairs and the observed rewards of the executed actions into a data buffer D . The reward R in CBB is a partial-information feedback where rewards are unobserved for the non-executed actions . Different from the existing batched bandit setting ( Han et al. , 2020 ; Esfandiari et al.
, 2021 ) , where the true reward feedbacks for all actions are controlled by the same parameter vector while the contexts received differ by actions at each step , we assume that in the CBB setting , the mechanism of true reward feedback differs by actions and the received context is shared by actions . Formally , for any context s_i ∈ S ⊆ R^d and action A ∈ A , we assume that the expectation of the true reward R^true_{i,A} is determined by an unknown action-specific reward parameter vector θ^*_A ∈ R^d : E [ R^true_{i,A} | s_i ] = 〈θ^*_A , s_i〉 ( the linear reward will be extended to the nonlinear case in Section 5 ) . This setting for reward feedback matches many real-world applications , e.g. , each action corresponds to a different category of candidate coupons in coupon recommendation , and the reward feedback mechanism of each category differs due to the different discount pricing strategies .
Algorithm 1 Contextual Batched Bandit ( CBB )
INPUT : Batch size B , number of episodes N , action space A = { A_j } _{j ∈ [ M ] } , context space S ⊆ R^d
1 : Initialize policy p_0 ← 1/M , sample data buffer D_1 = { ( s_{0,b} , A_{I_{0,b}} , R_{0,b} ) } _{b ∈ [ B ] } using initial policy p_0
2 : for n = 1 to N do
3 :   Update the policy p_n on D_n { Policy Updating }
4 :   for b = 1 to B do
5 :     Observe context s_{n,b}
6 :     Choose A_{I_{n,b}} ∈ A following the updated policy p_n ( s_{n,b} ) { Online Decision }
7 :   end for
8 :   D_{n+1} ← { ( s_{n,b} , A_{I_{n,b}} , R_{n,b} ) } _{b ∈ [ B ] } , where R_{n,b} denotes the reward of action A_{I_{n,b}} on context s_{n,b}
9 : end for
Next , we provide a deeper understanding of the influence of unobserved feedbacks on the performance of policy updating in the CBB setting . We first conducted an empirical comparison by applying the batch UCB policy ( Han et al. , 2020 ) to environments under different proportions of received reward feedbacks .
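The partial-information feedback described above can be made concrete in a few lines: the environment draws one shared context, every action has its own reward parameter, and only the executed action's reward is revealed. This is a toy illustration; the dimensions, noise level, and chosen action are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, M = 3, 4
theta_star = rng.normal(size=(M, d))   # unknown action-specific parameters theta*_A

s = rng.normal(size=d)                 # one context, shared by all actions
executed = 2                           # the action the policy chose to execute
# bandit feedback: the reward E[R|s] = <theta*_A, s> (+ noise) is observed
# only for the executed action; the rest stay hidden (None)
feedback = [theta_star[a] @ s + 0.1 * rng.normal() if a == executed else None
            for a in range(M)]
observed = sum(f is not None for f in feedback)
```

Reward imputation, introduced next, fills in exactly those `None` entries using each action's maintained parameter estimate.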
In particular , the agent under full-information feedback can receive all the rewards of the executed and non-executed actions ; we call this the Full-Information CBB ( FI-CBB ) setting . From Figure 1 , we can observe that partial-information feedback can hurt policy updating , and that batched bandits benefit from more reward feedback , where the performance of 80 % feedback is very close to that of FI-CBB . Then , we characterize the difference in instantaneous regret between the CBB and FI-CBB settings in Theorem 1 . The detailed description and proof of Theorem 1 can be found in Appendix A. Theorem 1 . For any action A ∈ A and context s_i ∈ S , let θ^n_A be the reward parameter vector estimated by the batched UCB policy in the n-th episode . The upper bound of the instantaneous regret ( defined by |〈θ^n_A , s_i〉 − 〈θ^*_A , s_i〉| ) in the FI-CBB setting is tighter than that in the CBB setting ( i.e. , using the partial-information feedback ) . From Theorem 1 , we can conclude that the price paid for having access only to partial-information feedback is a deterioration in the regret bound . Ideally , the policy would be updated using the full-information feedback . In CBB , however , the full-information feedback is inaccessible . Fortunately , in CBB , different reward parameter vectors need to be maintained and estimated separately for each action , so the potential reward structures of the non-executed actions have already been captured to some extent . So why don 't we use these maintained reward parameters to estimate the unknown rewards for the non-executed actions ? Next , we propose an efficient reward imputation approach that uses this additional reward structure information to improve the performance of the bandit policy . 3 EFFICIENT REWARD IMPUTATION FOR POLICY UPDATING . In this section , we present an efficient reward imputation approach tailored for policy updating in the CBB setting . Formulation of Reward Imputation .
Since the true reward parameters differ by actions in the CBB setting , we need to maintain a different estimated reward parameter vector for each action . As shown in Figure 2 , in contrast to CBB that ignores the contexts and rewards of the non-executed steps of A_j , our reward imputation approach completes the missing values using the imputed contexts and rewards , approximating the full-information CBB setting . Specifically , in the ( n+1 ) -th episode , for each action A_j ∈ A , j ∈ [ M ] , we store the context vectors and rewards corresponding to the steps in which the action A_j is executed into a context matrix S^n_{A_j} ∈ R^{N^n_j × d} and a reward vector R^n_{A_j} ∈ R^{N^n_j} , respectively , where N^n_j denotes the number of steps ( in episode n+1 ) in which the action A_j is executed . Additionally , for any A_j ∈ A , j ∈ [ M ] , we store the context vectors corresponding to the non-executed steps of action A_j ( denote the number of non-executed steps by N̂^n_j , i.e. , N̂^n_j = B − N^n_j ) into an imputed context matrix Ŝ^n_{A_j} ∈ R^{N̂^n_j × d} , and compute the imputed reward vector as follows : R̂^n_{A_j} = { r_{n,1} ( A_j ) , r_{n,2} ( A_j ) , . . . , r_{n,N̂^n_j} ( A_j ) } ∈ R^{N̂^n_j} , j ∈ [ M ] , where r_{n,b} ( A_j ) : = 〈θ̄^n_{A_j} , s_{n,b}〉 denotes the imputed reward for each step b ∈ [ N̂^n_j ] , and s_{n,b} is the b-th row of Ŝ^n_{A_j} . Then , we obtain several block matrices by concatenating the context and reward matrices from the previous episodes : L^n_{A_j} = [ S^0_{A_j} ; · · · ; S^n_{A_j} ] ∈ R^{L^n_j × d} , T^n_{A_j} = [ R^0_{A_j} ; · · · ; R^n_{A_j} ] ∈ R^{L^n_j} , L^n_j = ∑_{k=0}^{n} N^k_j , L̂^n_{A_j} = [ Ŝ^0_{A_j} ; · · · ; Ŝ^n_{A_j} ] ∈ R^{L̂^n_j × d} , T̂^n_{A_j} = [ R̂^0_{A_j} ; · · · ; R̂^n_{A_j} ] ∈ R^{L̂^n_j} , L̂^n_j = ∑_{k=0}^{n} N̂^k_j . In the ( n+1 ) -th episode , the estimated parameter vector θ̄^{n+1}_{A_j} of the imputed reward for action A_j can be updated by solving the following imputation regularized ridge regression : θ̄^{n+1}_{A_j} = arg min_{θ ∈ R^d} ‖ L^n_{A_j} θ − T^n_{A_j} ‖^2_2 ( Observed Term ) + γ ‖ L̂^n_{A_j} θ − T̂^n_{A_j} ‖^2_2 ( Imputation Term ) + λ ‖θ‖^2_2 , n = 0 , 1 , . . .
, N − 1 , ( 1 ) where γ ∈ [ 0 , 1 ] is the imputation rate , which controls the degree of reward imputation and trades off bias against variance ( Remarks 1 & 2 ) , and λ > 0 is the regularization parameter . This yields the closed-form least-squares solution θ̄^{n+1}_{A_j} = ( Ψ^{n+1}_{A_j} )^{−1} ( b^{n+1}_{A_j} + γ b̂^{n+1}_{A_j} ) , ( 2 ) where Ψ^{n+1}_{A_j} : = λ I_d + Φ^{n+1}_{A_j} + γ Φ̂^{n+1}_{A_j} , Φ^{n+1}_{A_j} = Φ^n_{A_j} + S^{nᵀ}_{A_j} S^n_{A_j} , b^{n+1}_{A_j} = b^n_{A_j} + S^{nᵀ}_{A_j} R^n_{A_j} , ( 3 ) Φ̂^{n+1}_{A_j} = η Φ̂^n_{A_j} + Ŝ^{nᵀ}_{A_j} Ŝ^n_{A_j} , b̂^{n+1}_{A_j} = η b̂^n_{A_j} + Ŝ^{nᵀ}_{A_j} R̂^n_{A_j} , ( 4 ) and η ∈ ( 0 , 1 ) is the discount parameter , which controls how fast the previous imputed rewards are forgotten and helps guarantee the regret bound in Theorem 2 . Efficient Reward Imputation using Sketching . As shown in the first 4 columns of Table 1 , the overall time complexity of the imputation for each action is O ( Bd^2 ) in each episode , where B represents the batch size and d the dimensionality of the input . Thus , for all the M actions in one episode , reward imputation increases the time complexity from O ( Bd^2 ) of the approach without imputation to O ( MBd^2 ) . To address this issue , we design an efficient reward imputation approach using sketching , which reduces the time complexity of each action in one episode from O ( Bd^2 ) to O ( cd^2 ) , where c denotes the sketch size satisfying d < c < B and cd > B . Specifically , in the ( n+1 ) -th episode , the imputation regularized ridge regression Eq .
( 1 ) can be approximated by a sketched ridge regression as θ̃^{n+1}_{A_j} = arg min_{θ ∈ R^d} ‖ Π^n_{A_j} ( L^n_{A_j} θ − T^n_{A_j} ) ‖^2_2 + γ ‖ Π̂^n_{A_j} ( L̂^n_{A_j} θ − T̂^n_{A_j} ) ‖^2_2 + λ ‖θ‖^2_2 , ( 5 ) where θ̃^{n+1}_A denotes the estimated parameter vector of the imputed reward using sketching for action A ∈ A , C^n_{A_j} ∈ R^{c × N^n_j} and Ĉ^n_{A_j} ∈ R^{c × N̂^n_j} are the sketch submatrices for the observed term and the imputation term , respectively , and the sketch matrices for the two terms can be represented as Π^n_{A_j} = [ C^0_{A_j} , C^1_{A_j} , · · · , C^n_{A_j} ] ∈ R^{c × L^n_j} , Π̂^n_{A_j} = [ Ĉ^0_{A_j} , Ĉ^1_{A_j} , · · · , Ĉ^n_{A_j} ] ∈ R^{c × L̂^n_j} . We denote the sketches of the context matrix and the reward vector by Γ^n_{A_j} : = C^n_{A_j} S^n_{A_j} ∈ R^{c × d} and Λ^n_{A_j} : = C^n_{A_j} R^n_{A_j} ∈ R^c , the sketches of the imputed context matrix and the imputed reward vector by Γ̂^n_{A_j} : = Ĉ^n_{A_j} Ŝ^n_{A_j} ∈ R^{c × d} and Λ̂^n_{A_j} : = Ĉ^n_{A_j} R̂^n_{A_j} ∈ R^c , and obtain the solution of Eq . ( 5 ) : θ̃^{n+1}_{A_j} = ( W^{n+1}_{A_j} )^{−1} ( p^{n+1}_{A_j} + γ p̂^{n+1}_{A_j} ) , ( 6 ) where η ∈ ( 0 , 1 ) denotes the discount parameter , W^{n+1}_{A_j} : = λ I_d + G^{n+1}_{A_j} + γ Ĝ^{n+1}_{A_j} , and G^{n+1}_{A_j} = G^n_{A_j} + Γ^{nᵀ}_{A_j} Γ^n_{A_j} , p^{n+1}_{A_j} = p^n_{A_j} + Γ^{nᵀ}_{A_j} Λ^n_{A_j} , ( 7 ) Ĝ^{n+1}_{A_j} = η Ĝ^n_{A_j} + Γ̂^{nᵀ}_{A_j} Γ̂^n_{A_j} , p̂^{n+1}_{A_j} = η p̂^n_{A_j} + Γ̂^{nᵀ}_{A_j} Λ̂^n_{A_j} . ( 8 ) Using the parameter θ̃^{n+1}_{A_j} , we obtain the sketched version of the imputed reward as r̃_{n,b} ( A_j ) : = 〈θ̃^n_{A_j} , s_{n,b}〉 at step b ∈ [ N̂^n_j ] . Finally , we specify that the sketch submatrices { C^n_A } _{A ∈ A , n ∈ [ N ] } and { Ĉ^n_{A_j} } _{A ∈ A , n ∈ [ N ] } are the block construction of the Sparser Johnson-Lindenstrauss Transform ( SJLT ) ( Kane & Nelson , 2014 ) , where the sketch size c is divisible by the number of blocks D ( see footnote 1 ) . As shown in the last 4 columns of Table 1 , sketching reduces the time complexity from O ( MBd^2 ) to O ( Mcd^2 ) for reward imputation of all M actions in one episode , where c < B . When Mc ≈ B , the overall time complexity of our reward imputation using sketching is even comparable to that without reward imputation , which has an O ( Bd^2 ) time complexity .
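The recursive statistics in Eqs. (7)-(8) and the closed-form solve in Eq. (6) amount to the following per-episode update. This is a sketch with random placeholder data standing in for the sketched matrices Γ, Λ; the γ, η, λ values and all sizes are illustrative assumptions.

```python
import numpy as np

def sketched_update(G, p, G_hat, p_hat, Gamma, Lam, Gamma_hat, Lam_hat,
                    gamma=0.5, eta=0.9, lam=1.0):
    """One episode of Eqs. (7)-(8) plus the Eq. (6) solve for a single action."""
    G = G + Gamma.T @ Gamma                        # observed Gram matrix, Eq. (7)
    p = p + Gamma.T @ Lam                          # observed moment vector, Eq. (7)
    G_hat = eta * G_hat + Gamma_hat.T @ Gamma_hat  # discounted imputed Gram, Eq. (8)
    p_hat = eta * p_hat + Gamma_hat.T @ Lam_hat    # discounted imputed moment, Eq. (8)
    W = lam * np.eye(G.shape[0]) + G + gamma * G_hat
    theta = np.linalg.solve(W, p + gamma * p_hat)  # Eq. (6): theta = W^{-1}(p + gamma p_hat)
    return G, p, G_hat, p_hat, theta

rng = np.random.default_rng(0)
d, c = 4, 8
G = np.zeros((d, d)); G_hat = np.zeros((d, d))
p = np.zeros(d); p_hat = np.zeros(d)
for n in range(3):   # a few episodes with fake sketched data
    Gamma, Lam = rng.normal(size=(c, d)), rng.normal(size=c)
    Gamma_hat, Lam_hat = rng.normal(size=(c, d)), rng.normal(size=c)
    G, p, G_hat, p_hat, theta = sketched_update(G, p, G_hat, p_hat,
                                                Gamma, Lam, Gamma_hat, Lam_hat)
```

Because only c×d and d×d quantities are touched, each action's update costs O(cd^2) rather than O(Bd^2).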
Updated Policy using Imputed Rewards . Inspired by the UCB strategy ( Li et al. , 2010 ) , the updated policy for online decision in the ( n+1 ) -th episode can be formulated using the imputed rewards ( parameterized by θ̄^{n+1}_A in Eq . ( 2 ) ) or the sketched version of the imputed rewards ( parameterized by θ̃^{n+1}_A in Eq . ( 6 ) ) . Specifically , for a new context s , the original policy p̄^{n+1} selects the action following A ← arg max_{A ∈ A} 〈θ̄^{n+1}_A , s〉 + ω [ sᵀ ( Ψ^{n+1}_A )^{−1} s ]^{1/2} , and the sketched policy p̃^{n+1} selects the action following A ← arg max_{A ∈ A} 〈θ̃^{n+1}_A , s〉 + α [ sᵀ ( W^{n+1}_A )^{−1} s ]^{1/2} , where ω ≥ 0 and α ≥ 0 are the regularization parameters of the policies , whose theoretical values are given in Theorem 4 . We summarize the reward imputation using sketching and the sketched policy in Algorithm 2 , called SPUIR . Similarly , we call the updating of the original policy that uses reward imputation without sketching the Policy Updating with Imputed Rewards ( PUIR ) . Footnote 1 : Since we set the number of blocks of SJLT as D < d , we omit D in the complexity analysis .
Algorithm 2 Sketched Policy Updating with Imputed Rewards ( SPUIR ) in the ( n+1 ) -th episode
INPUT : Policy p̃^n , D_{n+1} , A = { A_j } _{j ∈ [ M ] } , α ≥ 0 , η ∈ ( 0 , 1 ) , γ ∈ [ 0 , 1 ] , λ > 0 , W^0_{A_j} = λ I_d , G^0_{A_j} = Ĝ^0_{A_j} = O_d , p^0_{A_j} = p̂^0_{A_j} = 0 , θ̃^0_{A_j} = 0 , j ∈ [ M ] , batch size B , sketch size c , number of blocks D
OUTPUT : Updated policy p̃^{n+1}
1 : For all j ∈ [ M ] , store the context vectors and rewards corresponding to the steps in which the action A_j is executed into Γ^n_{A_j} ∈ R^{N^n_j × d} and Λ^n_{A_j} ∈ R^{N^n_j}
2 : For all j ∈ [ M ] , store the context vectors corresponding to the steps in which the action A_j is not executed into Γ̂^n_{A_j} ∈ R^{N̂^n_j × d} , where N̂^n_j ← B − N^n_j
3 : r̃_{n,b} ( A_j ) ← 〈θ̃^n_{A_j} , s_{n,b}〉 , for all A_j ∈ A and b ∈ [ N̂^n_j ] , where s_{n,b} is the b-th row of Γ̂^n_{A_j}
4 : Compute the imputed reward vector R̂^n_{A_j} ← { r̃_{n,1} ( A_j ) , . . .
, r̃_{n,N̂^n_j} ( A_j ) } ∈ R^{N̂^n_j} for any j ∈ [ M ]
5 : for all actions A_j ∈ A do
6 :   G^{n+1}_{A_j} ← G^n_{A_j} + Γ^{nᵀ}_{A_j} Γ^n_{A_j} , p^{n+1}_{A_j} ← p^n_{A_j} + Γ^{nᵀ}_{A_j} Λ^n_{A_j} { Eq . ( 7 ) }
7 :   Ĝ^{n+1}_{A_j} ← η Ĝ^n_{A_j} + Γ̂^{nᵀ}_{A_j} Γ̂^n_{A_j} , p̂^{n+1}_{A_j} ← η p̂^n_{A_j} + Γ̂^{nᵀ}_{A_j} Λ̂^n_{A_j} { Eq . ( 8 ) }
8 :   W^{n+1}_{A_j} ← λ I_d + G^{n+1}_{A_j} + γ Ĝ^{n+1}_{A_j} , θ̃^{n+1}_{A_j} ← ( W^{n+1}_{A_j} )^{−1} ( p^{n+1}_{A_j} + γ p̂^{n+1}_{A_j} ) { Eq . ( 6 ) }
9 : end for
10 : p̃^{n+1} ( s ) selects action A ← arg max_{A ∈ A} 〈θ̃^{n+1}_A , s〉 + α [ sᵀ ( W^{n+1}_A )^{−1} s ]^{1/2} for a new context s
11 : return { θ̃^{n+1}_A } _{A ∈ A} , { ( W^{n+1}_A )^{−1} } _{A ∈ A} | The paper considers the behavior of a series of algorithms for learning in a batched bandit setting, where parameters are arm/action specific, rather than shared across arms/actions. These algorithms are based on UCB-like techniques with the addition of a regularization term that aims to use information from imputed, unobserved rewards. Such techniques are present in the linear regression and dimensionality reduction literature, where one aims to use either prior information or data distribution information to aid prediction. However, to my knowledge, these formalisms are quite new to online settings like bandits, thus making the presented approach appealing. | SP:bf57f75331e1a69733f4573766af8862eca57804
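The action-selection rule in line 10 of Algorithm 2 above is a standard optimism-in-the-face-of-uncertainty choice: the estimated reward plus an α-weighted exploration bonus. A minimal sketch (the parameters and the identity-scaled W_A^{-1} matrices are made-up stand-ins):

```python
import numpy as np

def select_action(s, thetas, W_invs, alpha=1.0):
    """argmax_A <theta_A, s> + alpha * sqrt(s^T W_A^{-1} s)  (line 10)."""
    scores = [theta @ s + alpha * np.sqrt(s @ W_inv @ s)
              for theta, W_inv in zip(thetas, W_invs)]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
d, M = 3, 4
thetas = [rng.normal(size=d) for _ in range(M)]
# a better-explored action has a "smaller" W_A^{-1}, hence a smaller bonus
W_invs = [np.eye(d) / (1.0 + m) for m in range(M)]
a = select_action(rng.normal(size=d), thetas, W_invs)
```

Here reward imputation enters only through the statistics behind `thetas` and `W_invs`; the selection rule itself is unchanged UCB.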
Hybrid Memoised Wake-Sleep: Approximate Inference at the Discrete-Continuous Interface | 1 INTRODUCTION . We naturally understand the world around us in terms of discrete symbols . When looking at a scene , we automatically parse what is where , and understand the relationships between objects in the scene . We understand that there is a book and a table , and that the book is on the table . Such symbolic representations are often necessary for planning , communication and abstract reasoning . They allow the specification of goals like “ book on shelf ” or preconditions like the fact that “ book in hand ” must come before “ move hand to shelf ” , both of which are necessary for high-level planning . In communication , we ’ re forced to describe such plans , among many other things we say , in discrete words . And further , abstract reasoning requires creating new symbols by composing old ones which allows us to generalize to completely new situations . A “ tower ” structure made out of books is still understood as a “ tower ” when built out of Jenga blocks . How do we represent and learn models that support such symbolic reasoning while supporting efficient inference ? We focus on a particular class of hybrid generative models pθ ( zd , zc , x ) of observed data x with discrete latent variables zd , continuous latent variables zc and learnable parameters θ with a graphical model structure shown in Figure 1 . In particular , the discrete latent variables zd represent an underlying structure present in the data , while the remaining continuous latent variables zc represent non-structural quantities . 
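The hybrid graphical-model structure just described can be illustrated by ancestral sampling from a toy p(z_d, z_c, x): a discrete symbol picks a primitive, a continuous latent perturbs it, and the observation is generated from both. The primitives, dimensions, and noise scale are illustrative assumptions, not the paper's actual scene model.

```python
import numpy as np

rng = np.random.default_rng(0)

# "learnable primitives" indexed by the discrete latent z_d (one row per symbol)
primitives = np.array([[0.0, 0.0],
                       [3.0, 1.0],
                       [-2.0, 2.0]])

z_d = int(rng.integers(len(primitives)))        # discrete structure: which primitive
z_c = rng.normal(size=2)                        # continuous latent, e.g. a pose offset
x = primitives[z_d] + z_c + 0.1 * rng.normal(size=2)   # observed data point
```

Inference in the reverse direction, i.e. recovering (z_d, z_c) from x, is exactly what makes this discrete-continuous interface hard.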
For example , in the context of compositional scene understanding , zd can represent a scene graph comprising the object identities and the relationships between them , like “ a small green pyramid is on top of a yellow cube ; a blue doughnut leans on the yellow cube ; the large pyramid is next to the yellow cube ” , while zc represents the continuous poses of these objects . In this model , we assume that object identities are discrete , symbolic variables indexing into a set of learnable primitives , parameterized by a subset of the generative model parameters θ . The idea is to make these primitives learn to represent concrete objects like “ yellow cube ” or “ large green pyramid ” from data in an unsupervised fashion . Algorithms suitable for learning such models are based on variational inference or wake-sleep . However , these algorithms are either inefficient or inapplicable to general settings . First , stochastic variational inference methods that optimize the evidence lower bound ( ELBO ) using the reparameterization trick ( Kingma & Welling , 2014 ; Rezende & Mohamed , 2015 ) are not applicable to discrete latent variable models . [ Figure ( a ) Trends in time-series : learning a Gaussian process ( GP ) kernel to fit data ( blue ) ; extrapolation ( orange ) shown with the inferred kernel expression , e.g. , WN ( 0.49 ) + SE ( 0.49 , 0.50 ) SE ( 0.50 , 0.51 ) + PerL ( 0.50 , 1.00 , 0.50 ) PerXL ( 0.49 , 1.50 , 0.50 ) + WN ( 0.49 ) ] The REINFORCE gradient estimator ( Williams , 1992 ) , on the other hand , has high variance , and continuous relaxations of discrete variables ( Jang et al. , 2017 ; Maddison et al. , 2016 ) don ’ t naturally apply to stochastic control flow models ( Le et al. , 2019 ) . Second , wake-sleep methods ( Hinton et al. , 1995 ; Dayan et al. , 1995 ) like reweighted wake-sleep ( RWS ) ( Bornschein & Bengio , 2015 ; Le et al.
, 2019 ) require inferring discrete latent variables at every learning iteration , without saving previously performed inferences . Memoised wake-sleep ( MWS ) ( Hewitt et al. , 2020 ) addresses this issue , but is only applicable to purely discrete latent variable models . We propose hybrid memoised wake-sleep ( HMWS ) —a method for learning and amortized inference in probabilistic generative models with hybrid discrete-continuous latent variables . HMWS combines the strengths of MWS in memoising the discrete latent variables and RWS in handling continuous latent variables . The core idea in HMWS is to memoise discrete latent variables zd and learn a separate recognition model which is used for importance-sampling based approximate inference and marginalization of the continuous latent variables zc . We empirically compare HMWS with state-of-the-art ( i ) stochastic variational inference methods that use control variates to reduce the REINFORCE gradient variance , VIMCO ( Mnih & Rezende , 2016 ) , and ( ii ) a wake-sleep extension , RWS . We show that HMWS outperforms these baselines in two domains : structured time series and compositional 3D scene understanding , respectively . 2 BACKGROUND . Our goal is to learn the parameters θ of a generative model pθ ( z , x ) of latent variables z and data x , and parameters φ of a recognition model qφ ( z|x ) which acts as an approximation to the posterior pθ ( z|x ) . This can be achieved by maximizing the evidence lower bound ELBO ( x , pθ ( z , x ) , qφ ( z|x ) ) = log pθ ( x ) − KL ( qφ ( z|x ) ||pθ ( z|x ) ) ( 1 ) which maximizes the evidence log pθ ( x ) while minimizing the Kullback-Leibler ( KL ) divergence , thereby encouraging the recognition model to approximate the posterior . 
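The ELBO in Eq. (1) is usually estimated by Monte Carlo: draw z from the recognition model and evaluate log p(z, x) − log q(z|x). A single-sample sketch on an assumed toy Gaussian model (not the paper's actual models) looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal(v, mean, std):
    # log density of N(v; mean, std^2)
    return -0.5 * np.log(2 * np.pi * std ** 2) - 0.5 * ((v - mean) / std) ** 2

x = 1.0
mu_q, std_q = 0.8, 0.7            # recognition model q(z|x) = N(mu_q, std_q^2)
z = mu_q + std_q * rng.normal()   # reparameterized sample (differentiable w.r.t. mu_q, std_q)
# toy generative model: p(z) = N(0, 1), p(x|z) = N(z, 1)
log_joint = log_normal(z, 0.0, 1.0) + log_normal(x, z, 1.0)
elbo_hat = log_joint - log_normal(z, mu_q, std_q)   # one-sample ELBO estimate
```

Because z here is continuous, the reparameterization trick applies; the discrete latents discussed above are exactly the case where it does not.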
If the latent variables are discrete, a standard way to estimate the gradients of the ELBO with respect to the recognition model parameters is the REINFORCE (or score function) gradient estimator (Williams, 1992; Schulman et al., 2015)
$$\nabla_\phi \mathrm{ELBO}(x, p_\theta(z, x), q_\phi(z|x)) \approx \log \frac{p_\theta(z, x)}{q_\phi(z|x)} \cdot \nabla_\phi \log q_\phi(z|x) + \nabla_\phi \log \frac{p_\theta(z, x)}{q_\phi(z|x)} \quad (2)$$
where z ∼ qφ(z|x). However, the first term often has high variance, which makes learning inefficient. This issue can be addressed by (i) introducing control variates (Mnih & Gregor, 2014; Mnih & Rezende, 2016; Tucker et al., 2017; Grathwohl et al., 2018) which reduce gradient variance, (ii) continuous relaxation of discrete latent variables to allow differentiation (Jang et al., 2017; Maddison et al., 2016), or (iii) introducing a separate "wake-sleep" objective for learning the recognition model, which is trained in different steps and sidesteps the need to differentiate through discrete latent variables (Hinton et al., 1995; Dayan et al., 1995; Bornschein & Bengio, 2015; Le et al., 2019). 2.1 MEMOISED WAKE-SLEEP. Both ELBO-based and wake-sleep-based approaches to learning require re-solving the inference task by sampling from the recognition model qφ(z|x) at every iteration. This repeated sampling can be wasteful, especially when only a few latent configurations explain the data well. Memoised wake-sleep (MWS) (Hewitt et al., 2020) extends wake-sleep by introducing a memory, a set of M unique discrete latent variables $\{z_d^m\}_{m=1}^M$ for each data point x, which induces a variational distribution
$$q_{\mathrm{MEM}}(z_d|x) = \sum_{m=1}^{M} \omega_m \, \delta_{z_d^m}(z_d), \quad (3)$$
consisting of a weighted set of delta masses $\delta_{z_d^m}$ centered on the memory elements (see also Saeedi et al. (2017)). This variational distribution is proven to improve the evidence lower bound ELBO(x, pθ(zd, x), qMEM(zd|x)) (see Sec. 3 of (Hewitt et al.
, 2020)) by a memory-update phase comprising (i) the proposal of N new values $\{z_d'^n\}_{n=1}^N \sim q_\phi(z_d|x)$, (ii) retaining the best M values from the union of the old memory elements and the newly proposed values $\{z_d^m\}_{m=1}^M \cup \{z_d'^n\}_{n=1}^N$, scored by pθ(zd, x), and (iii) setting the weights to $\omega_m = p_\theta(z_d^m, x) / \sum_{i=1}^{M} p_\theta(z_d^i, x)$. MWS, however, only works on models with purely discrete latent variables. If we try to use the same approach for hybrid discrete-continuous latent variable models, all proposed continuous values will be unique and the posterior approximation will collapse onto the MAP estimate. 2.2 IMPORTANCE SAMPLING BASED APPROXIMATE INFERENCE AND MARGINALIZATION. In our proposed method, we rely on importance sampling (IS) to perform approximate inference and marginalization. In general, given an unnormalized density γ(z), its corresponding normalizing constant $Z = \int \gamma(z)\,dz$ and normalized density π(z) = γ(z)/Z, we want to estimate Z and the expectation of an arbitrary function f, Eπ(z)[f(z)]. To do this, we sample K values $\{z_k\}_{k=1}^K$ from a proposal distribution ρ(z), and weight each sample by $w_k = \gamma(z_k)/\rho(z_k)$, leading to the estimators
$$Z \approx \frac{1}{K} \sum_{k=1}^{K} w_k =: \hat{Z}, \qquad \mathbb{E}_{\pi(z)}[f(z)] \approx \sum_{k=1}^{K} \bar{w}_k f(z_k) =: \hat{I}, \quad (4)$$
where $\bar{w}_k = w_k / (K\hat{Z})$ is the normalized weight. The estimator $\hat{Z}$ is often used to estimate marginal distributions, for example p(x), with γ(z) being the joint distribution p(z, x). It is unbiased and its variance decreases as 1/K. The estimator $\hat{I}$ is often used to estimate posterior expectations of gradients, for example the "wake-φ" gradient of RWS (Bornschein & Bengio, 2015; Le et al., 2019), $\mathbb{E}_{p_\theta(z|x)}[-\nabla_\phi \log q_\phi(z|x)]$, with γ(z) = pθ(z, x), π(z) = pθ(z|x) and f(z) = −∇φ log qφ(z|x).
This estimator is asymptotically unbiased and its asymptotic variance decreases as 1/K (Owen, 2013, Eq. 9.8), so increasing K improves the estimator. 3 HYBRID MEMOISED WAKE-SLEEP. We propose hybrid memoised wake-sleep (HMWS), which extends memoised wake-sleep (MWS) to address the issue of memoising continuous latent variables. In HMWS, we learn a generative model pθ(zd, zc, x) of hybrid discrete (zd) and continuous (zc) latent variables, and a recognition model qφ(zd, zc|x) which factorizes into a discrete recognition model qφ(zd|x) and a continuous recognition model qφ(zc|zd, x). Like MWS, HMWS maintains a memory of M discrete latent variables $\{z_d^m\}_{m=1}^M$ per data point x, which is updated in the wake phase of every learning iteration. In the sleep: replay phase, we use the memoised discrete latents to train both the generative model and the recognition model. In the sleep: fantasy phase, we optionally train the recognition model on data generated from the generative model as well. We summarize these learning phases in Fig. 3, give the full algorithm in Alg. 1, and describe each learning phase in detail below. For notational clarity, we present the algorithm for a single training data point x. | The paper extends the previous memoized wake-sleep method to train complex generative models with discrete/structural and continuous latent variables. The method incorporates an assumption on the conditional dependence between the discrete and continuous latent variables and essentially employs a clever importance sampling procedure to capture the dependence. The experiments show applications to spatial and temporal compositional models where the proposed method outperforms VIMCO and reweighted wake-sleep. | SP:2a2f7fd9416563a5e07a511ce695b7931139701f |
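For background, the score-function estimator of Eq. (2) can be sketched for a small categorical recognition model. The logits and test function below are hypothetical, and a closed-form gradient is included only to illustrate that the estimator is unbiased.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def reinforce_grad(logits, f, num_samples=200_000, seed=0):
    """Monte Carlo score-function estimate of d/dlogits E_{z~q}[f(z)] for a
    categorical q = softmax(logits), mirroring the first term of Eq. (2).
    Uses grad_logits log q(z) = onehot(z) - softmax(logits)."""
    rng = np.random.default_rng(seed)
    probs = softmax(logits)
    zs = rng.choice(len(probs), size=num_samples, p=probs)
    counts = np.bincount(zs, minlength=len(probs))
    fvals = np.array([f(z) for z in range(len(probs))])
    # Sample average of f(z) * (onehot(z) - probs).
    return counts * fvals / num_samples - (counts @ fvals / num_samples) * probs

def exact_grad(logits, f):
    # Closed form for comparison: d/dlogit_i E[f] = p_i * (f(i) - E[f]).
    probs = softmax(logits)
    fvals = np.array([f(z) for z in range(len(probs))])
    return probs * (fvals - probs @ fvals)
```

The per-sample terms f(z)(onehot(z) − probs) are exactly the high-variance quantity that control variates and wake-sleep objectives are designed to avoid.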
Hybrid Memoised Wake-Sleep: Approximate Inference at the Discrete-Continuous Interface | 1 INTRODUCTION . We naturally understand the world around us in terms of discrete symbols . When looking at a scene , we automatically parse what is where , and understand the relationships between objects in the scene . We understand that there is a book and a table , and that the book is on the table . Such symbolic representations are often necessary for planning , communication and abstract reasoning . They allow the specification of goals like “ book on shelf ” or preconditions like the fact that “ book in hand ” must come before “ move hand to shelf ” , both of which are necessary for high-level planning . In communication , we ’ re forced to describe such plans , among many other things we say , in discrete words . And further , abstract reasoning requires creating new symbols by composing old ones which allows us to generalize to completely new situations . A “ tower ” structure made out of books is still understood as a “ tower ” when built out of Jenga blocks . How do we represent and learn models that support such symbolic reasoning while supporting efficient inference ? We focus on a particular class of hybrid generative models pθ ( zd , zc , x ) of observed data x with discrete latent variables zd , continuous latent variables zc and learnable parameters θ with a graphical model structure shown in Figure 1 . In particular , the discrete latent variables zd represent an underlying structure present in the data , while the remaining continuous latent variables zc represent non-structural quantities . 
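A minimal sketch of such a hybrid model, with invented primitives and priors standing in for the parameters θ, might sample ancestrally as follows.

```python
import numpy as np

# Sketch of p(z_d, z_c, x): z_d indexes a learnable primitive, z_c is a
# continuous pose, and x is the noisy observation. All numbers here are
# illustrative placeholders, not values from the paper.
primitives = np.array([[0.0, 0.0], [5.0, 5.0], [-3.0, 4.0]])  # stand-ins for θ
prior_zd = np.array([0.5, 0.3, 0.2])                          # p(z_d)

def sample_joint(rng):
    zd = rng.choice(len(prior_zd), p=prior_zd)              # discrete structure
    zc = rng.normal(0.0, 1.0, size=2)                       # continuous pose
    x = primitives[zd] + zc + rng.normal(0.0, 0.1, size=2)  # p(x | z_d, z_c)
    return zd, zc, x
```

In the paper's setting, the rows of `primitives` would be learned from data rather than fixed, so that each discrete code comes to denote a concrete object.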
For example, in the context of compositional scene understanding, zd can represent a scene graph comprising the object identities and the relationships between them, like "a small green pyramid is on top of a yellow cube; a blue doughnut leans on the yellow cube; the large pyramid is next to the yellow cube", while zc represents the continuous poses of these objects. In this model, we assume that object identities are discrete, symbolic variables indexing into a set of learnable primitives, parameterized by a subset of the generative model parameters θ. The idea is to make these primitives learn to represent concrete objects like "yellow cube" or "large green pyramid" from data in an unsupervised fashion. Algorithms suitable for learning such models are based on variational inference or wake-sleep. However, these algorithms are either inefficient or inapplicable to general settings. First, stochastic variational inference methods that optimize the evidence lower bound (ELBO) using the reparameterization trick (Kingma & Welling, 2014; Rezende & Mohamed, 2015) are not applicable to discrete latent variable models. [Figure: (a) Trends in time-series: learning a Gaussian process (GP) kernel to fit data (blue); extrapolation (orange) shown with the inferred kernel expressions below, e.g. WN(0.49) + SE(0.49, 0.50).] The REINFORCE gradient estimator (Williams, 1992), on the other hand, has high variance, and continuous relaxations of discrete variables (Jang et al., 2017; Maddison et al., 2016) do not naturally apply to stochastic control flow models (Le et al., 2019). Second, wake-sleep methods (Hinton et al., 1995; Dayan et al., 1995) like reweighted wake-sleep (RWS) (Bornschein & Bengio, 2015; Le et al.
, 2019) require inferring discrete latent variables at every learning iteration, without saving previously performed inferences. Memoised wake-sleep (MWS) (Hewitt et al., 2020) addresses this issue, but is only applicable to purely discrete latent variable models. We propose hybrid memoised wake-sleep (HMWS), a method for learning and amortized inference in probabilistic generative models with hybrid discrete-continuous latent variables. HMWS combines the strengths of MWS in memoising the discrete latent variables and of RWS in handling continuous latent variables. The core idea in HMWS is to memoise discrete latent variables zd and learn a separate recognition model which is used for importance-sampling-based approximate inference and marginalization of the continuous latent variables zc. We empirically compare HMWS with state-of-the-art baselines: (i) VIMCO (Mnih & Rezende, 2016), a stochastic variational inference method that uses control variates to reduce the REINFORCE gradient variance, and (ii) RWS, a wake-sleep extension. We show that HMWS outperforms these baselines in two domains: structured time series and compositional 3D scene understanding. 2 BACKGROUND. Our goal is to learn the parameters θ of a generative model pθ(z, x) of latent variables z and data x, and the parameters φ of a recognition model qφ(z|x) which acts as an approximation to the posterior pθ(z|x). This can be achieved by maximizing the evidence lower bound
$$\mathrm{ELBO}(x, p_\theta(z, x), q_\phi(z|x)) = \log p_\theta(x) - \mathrm{KL}(q_\phi(z|x) \,\|\, p_\theta(z|x)) \quad (1)$$
which maximizes the evidence log pθ(x) while minimizing the Kullback-Leibler (KL) divergence, thereby encouraging the recognition model to approximate the posterior.
If the latent variables are discrete, a standard way to estimate the gradients of the ELBO with respect to the recognition model parameters is the REINFORCE (or score function) gradient estimator (Williams, 1992; Schulman et al., 2015)
$$\nabla_\phi \mathrm{ELBO}(x, p_\theta(z, x), q_\phi(z|x)) \approx \log \frac{p_\theta(z, x)}{q_\phi(z|x)} \cdot \nabla_\phi \log q_\phi(z|x) + \nabla_\phi \log \frac{p_\theta(z, x)}{q_\phi(z|x)} \quad (2)$$
where z ∼ qφ(z|x). However, the first term often has high variance, which makes learning inefficient. This issue can be addressed by (i) introducing control variates (Mnih & Gregor, 2014; Mnih & Rezende, 2016; Tucker et al., 2017; Grathwohl et al., 2018) which reduce gradient variance, (ii) continuous relaxation of discrete latent variables to allow differentiation (Jang et al., 2017; Maddison et al., 2016), or (iii) introducing a separate "wake-sleep" objective for learning the recognition model, which is trained in different steps and sidesteps the need to differentiate through discrete latent variables (Hinton et al., 1995; Dayan et al., 1995; Bornschein & Bengio, 2015; Le et al., 2019). 2.1 MEMOISED WAKE-SLEEP. Both ELBO-based and wake-sleep-based approaches to learning require re-solving the inference task by sampling from the recognition model qφ(z|x) at every iteration. This repeated sampling can be wasteful, especially when only a few latent configurations explain the data well. Memoised wake-sleep (MWS) (Hewitt et al., 2020) extends wake-sleep by introducing a memory, a set of M unique discrete latent variables $\{z_d^m\}_{m=1}^M$ for each data point x, which induces a variational distribution
$$q_{\mathrm{MEM}}(z_d|x) = \sum_{m=1}^{M} \omega_m \, \delta_{z_d^m}(z_d), \quad (3)$$
consisting of a weighted set of delta masses $\delta_{z_d^m}$ centered on the memory elements (see also Saeedi et al. (2017)). This variational distribution is proven to improve the evidence lower bound ELBO(x, pθ(zd, x), qMEM(zd|x)) (see Sec. 3 of (Hewitt et al.
, 2020)) by a memory-update phase comprising (i) the proposal of N new values $\{z_d'^n\}_{n=1}^N \sim q_\phi(z_d|x)$, (ii) retaining the best M values from the union of the old memory elements and the newly proposed values $\{z_d^m\}_{m=1}^M \cup \{z_d'^n\}_{n=1}^N$, scored by pθ(zd, x), and (iii) setting the weights to $\omega_m = p_\theta(z_d^m, x) / \sum_{i=1}^{M} p_\theta(z_d^i, x)$. MWS, however, only works on models with purely discrete latent variables. If we try to use the same approach for hybrid discrete-continuous latent variable models, all proposed continuous values will be unique and the posterior approximation will collapse onto the MAP estimate. 2.2 IMPORTANCE SAMPLING BASED APPROXIMATE INFERENCE AND MARGINALIZATION. In our proposed method, we rely on importance sampling (IS) to perform approximate inference and marginalization. In general, given an unnormalized density γ(z), its corresponding normalizing constant $Z = \int \gamma(z)\,dz$ and normalized density π(z) = γ(z)/Z, we want to estimate Z and the expectation of an arbitrary function f, Eπ(z)[f(z)]. To do this, we sample K values $\{z_k\}_{k=1}^K$ from a proposal distribution ρ(z), and weight each sample by $w_k = \gamma(z_k)/\rho(z_k)$, leading to the estimators
$$Z \approx \frac{1}{K} \sum_{k=1}^{K} w_k =: \hat{Z}, \qquad \mathbb{E}_{\pi(z)}[f(z)] \approx \sum_{k=1}^{K} \bar{w}_k f(z_k) =: \hat{I}, \quad (4)$$
where $\bar{w}_k = w_k / (K\hat{Z})$ is the normalized weight. The estimator $\hat{Z}$ is often used to estimate marginal distributions, for example p(x), with γ(z) being the joint distribution p(z, x). It is unbiased and its variance decreases as 1/K. The estimator $\hat{I}$ is often used to estimate posterior expectations of gradients, for example the "wake-φ" gradient of RWS (Bornschein & Bengio, 2015; Le et al., 2019), $\mathbb{E}_{p_\theta(z|x)}[-\nabla_\phi \log q_\phi(z|x)]$, with γ(z) = pθ(z, x), π(z) = pθ(z|x) and f(z) = −∇φ log qφ(z|x).
This estimator is asymptotically unbiased and its asymptotic variance decreases as 1/K (Owen, 2013, Eq. 9.8), so increasing K improves the estimator. 3 HYBRID MEMOISED WAKE-SLEEP. We propose hybrid memoised wake-sleep (HMWS), which extends memoised wake-sleep (MWS) to address the issue of memoising continuous latent variables. In HMWS, we learn a generative model pθ(zd, zc, x) of hybrid discrete (zd) and continuous (zc) latent variables, and a recognition model qφ(zd, zc|x) which factorizes into a discrete recognition model qφ(zd|x) and a continuous recognition model qφ(zc|zd, x). Like MWS, HMWS maintains a memory of M discrete latent variables $\{z_d^m\}_{m=1}^M$ per data point x, which is updated in the wake phase of every learning iteration. In the sleep: replay phase, we use the memoised discrete latents to train both the generative model and the recognition model. In the sleep: fantasy phase, we optionally train the recognition model on data generated from the generative model as well. We summarize these learning phases in Fig. 3, give the full algorithm in Alg. 1, and describe each learning phase in detail below. For notational clarity, we present the algorithm for a single training data point x. | The paper proposes an extension of memoised wake-sleep which allows applications to hybrid discrete-continuous graphical models. The approach is directed towards Bayesian program synthesis. Specifically, the memoised wake-sleep algorithm requires computing the joint probability of a discrete latent variable and an observation, which would require integrating out any continuous latents. This paper proposes to solve that problem using importance sampling. | SP:2a2f7fd9416563a5e07a511ce695b7931139701f |
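The estimators Ẑ and Î of Eq. (4) can be sketched in a few lines. The Gaussian target and proposal below are an invented test case with known answers (Z = 2 and E_π[z] = 1), not anything from the paper's experiments.

```python
import numpy as np

def importance_estimates(log_gamma, log_rho, sample_rho, f, K=100_000, seed=0):
    """Self-normalized importance sampling, following Eq. (4): Z_hat estimates
    the normalizer of γ, and I_hat estimates E_π[f(z)] where π(z) = γ(z)/Z."""
    rng = np.random.default_rng(seed)
    z = sample_rho(rng, K)
    w = np.exp(log_gamma(z) - log_rho(z))   # w_k = γ(z_k) / ρ(z_k)
    Z_hat = w.mean()                        # (1/K) Σ_k w_k
    I_hat = (w / w.sum() * f(z)).sum()      # Σ_k w̄_k f(z_k)
    return Z_hat, I_hat

# Invented check: γ(z) = 2·N(z; 1, 1), so Z = 2 and π = N(1, 1);
# the proposal is ρ = N(0, 2²), which covers the target well.
log_gamma = lambda z: np.log(2.0) - 0.5 * (z - 1.0) ** 2 - 0.5 * np.log(2 * np.pi)
log_rho = lambda z: -0.5 * (z / 2.0) ** 2 - 0.5 * np.log(8 * np.pi)
Z_hat, I_hat = importance_estimates(
    log_gamma, log_rho, lambda rng, K: rng.normal(0.0, 2.0, size=K), lambda z: z)
```

In HMWS, γ would be the joint pθ(zd, zc, x) over the continuous latents with zd fixed, and ρ the continuous recognition model qφ(zc|zd, x).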
Hybrid Memoised Wake-Sleep: Approximate Inference at the Discrete-Continuous Interface | 1 INTRODUCTION . We naturally understand the world around us in terms of discrete symbols . When looking at a scene , we automatically parse what is where , and understand the relationships between objects in the scene . We understand that there is a book and a table , and that the book is on the table . Such symbolic representations are often necessary for planning , communication and abstract reasoning . They allow the specification of goals like “ book on shelf ” or preconditions like the fact that “ book in hand ” must come before “ move hand to shelf ” , both of which are necessary for high-level planning . In communication , we ’ re forced to describe such plans , among many other things we say , in discrete words . And further , abstract reasoning requires creating new symbols by composing old ones which allows us to generalize to completely new situations . A “ tower ” structure made out of books is still understood as a “ tower ” when built out of Jenga blocks . How do we represent and learn models that support such symbolic reasoning while supporting efficient inference ? We focus on a particular class of hybrid generative models pθ ( zd , zc , x ) of observed data x with discrete latent variables zd , continuous latent variables zc and learnable parameters θ with a graphical model structure shown in Figure 1 . In particular , the discrete latent variables zd represent an underlying structure present in the data , while the remaining continuous latent variables zc represent non-structural quantities . 
For example, in the context of compositional scene understanding, zd can represent a scene graph comprising the object identities and the relationships between them, like "a small green pyramid is on top of a yellow cube; a blue doughnut leans on the yellow cube; the large pyramid is next to the yellow cube", while zc represents the continuous poses of these objects. In this model, we assume that object identities are discrete, symbolic variables indexing into a set of learnable primitives, parameterized by a subset of the generative model parameters θ. The idea is to make these primitives learn to represent concrete objects like "yellow cube" or "large green pyramid" from data in an unsupervised fashion. Algorithms suitable for learning such models are based on variational inference or wake-sleep. However, these algorithms are either inefficient or inapplicable to general settings. First, stochastic variational inference methods that optimize the evidence lower bound (ELBO) using the reparameterization trick (Kingma & Welling, 2014; Rezende & Mohamed, 2015) are not applicable to discrete latent variable models. [Figure: (a) Trends in time-series: learning a Gaussian process (GP) kernel to fit data (blue); extrapolation (orange) shown with the inferred kernel expressions below, e.g. WN(0.49) + SE(0.49, 0.50).] The REINFORCE gradient estimator (Williams, 1992), on the other hand, has high variance, and continuous relaxations of discrete variables (Jang et al., 2017; Maddison et al., 2016) do not naturally apply to stochastic control flow models (Le et al., 2019). Second, wake-sleep methods (Hinton et al., 1995; Dayan et al., 1995) like reweighted wake-sleep (RWS) (Bornschein & Bengio, 2015; Le et al.
, 2019) require inferring discrete latent variables at every learning iteration, without saving previously performed inferences. Memoised wake-sleep (MWS) (Hewitt et al., 2020) addresses this issue, but is only applicable to purely discrete latent variable models. We propose hybrid memoised wake-sleep (HMWS), a method for learning and amortized inference in probabilistic generative models with hybrid discrete-continuous latent variables. HMWS combines the strengths of MWS in memoising the discrete latent variables and of RWS in handling continuous latent variables. The core idea in HMWS is to memoise discrete latent variables zd and learn a separate recognition model which is used for importance-sampling-based approximate inference and marginalization of the continuous latent variables zc. We empirically compare HMWS with state-of-the-art baselines: (i) VIMCO (Mnih & Rezende, 2016), a stochastic variational inference method that uses control variates to reduce the REINFORCE gradient variance, and (ii) RWS, a wake-sleep extension. We show that HMWS outperforms these baselines in two domains: structured time series and compositional 3D scene understanding. 2 BACKGROUND. Our goal is to learn the parameters θ of a generative model pθ(z, x) of latent variables z and data x, and the parameters φ of a recognition model qφ(z|x) which acts as an approximation to the posterior pθ(z|x). This can be achieved by maximizing the evidence lower bound
$$\mathrm{ELBO}(x, p_\theta(z, x), q_\phi(z|x)) = \log p_\theta(x) - \mathrm{KL}(q_\phi(z|x) \,\|\, p_\theta(z|x)) \quad (1)$$
which maximizes the evidence log pθ(x) while minimizing the Kullback-Leibler (KL) divergence, thereby encouraging the recognition model to approximate the posterior.
If the latent variables are discrete, a standard way to estimate the gradients of the ELBO with respect to the recognition model parameters is the REINFORCE (or score function) gradient estimator (Williams, 1992; Schulman et al., 2015)
$$\nabla_\phi \mathrm{ELBO}(x, p_\theta(z, x), q_\phi(z|x)) \approx \log \frac{p_\theta(z, x)}{q_\phi(z|x)} \cdot \nabla_\phi \log q_\phi(z|x) + \nabla_\phi \log \frac{p_\theta(z, x)}{q_\phi(z|x)} \quad (2)$$
where z ∼ qφ(z|x). However, the first term often has high variance, which makes learning inefficient. This issue can be addressed by (i) introducing control variates (Mnih & Gregor, 2014; Mnih & Rezende, 2016; Tucker et al., 2017; Grathwohl et al., 2018) which reduce gradient variance, (ii) continuous relaxation of discrete latent variables to allow differentiation (Jang et al., 2017; Maddison et al., 2016), or (iii) introducing a separate "wake-sleep" objective for learning the recognition model, which is trained in different steps and sidesteps the need to differentiate through discrete latent variables (Hinton et al., 1995; Dayan et al., 1995; Bornschein & Bengio, 2015; Le et al., 2019). 2.1 MEMOISED WAKE-SLEEP. Both ELBO-based and wake-sleep-based approaches to learning require re-solving the inference task by sampling from the recognition model qφ(z|x) at every iteration. This repeated sampling can be wasteful, especially when only a few latent configurations explain the data well. Memoised wake-sleep (MWS) (Hewitt et al., 2020) extends wake-sleep by introducing a memory, a set of M unique discrete latent variables $\{z_d^m\}_{m=1}^M$ for each data point x, which induces a variational distribution
$$q_{\mathrm{MEM}}(z_d|x) = \sum_{m=1}^{M} \omega_m \, \delta_{z_d^m}(z_d), \quad (3)$$
consisting of a weighted set of delta masses $\delta_{z_d^m}$ centered on the memory elements (see also Saeedi et al. (2017)). This variational distribution is proven to improve the evidence lower bound ELBO(x, pθ(zd, x), qMEM(zd|x)) (see Sec. 3 of (Hewitt et al.
, 2020)) by a memory-update phase comprising (i) the proposal of N new values $\{z_d'^n\}_{n=1}^N \sim q_\phi(z_d|x)$, (ii) retaining the best M values from the union of the old memory elements and the newly proposed values $\{z_d^m\}_{m=1}^M \cup \{z_d'^n\}_{n=1}^N$, scored by pθ(zd, x), and (iii) setting the weights to $\omega_m = p_\theta(z_d^m, x) / \sum_{i=1}^{M} p_\theta(z_d^i, x)$. MWS, however, only works on models with purely discrete latent variables. If we try to use the same approach for hybrid discrete-continuous latent variable models, all proposed continuous values will be unique and the posterior approximation will collapse onto the MAP estimate. 2.2 IMPORTANCE SAMPLING BASED APPROXIMATE INFERENCE AND MARGINALIZATION. In our proposed method, we rely on importance sampling (IS) to perform approximate inference and marginalization. In general, given an unnormalized density γ(z), its corresponding normalizing constant $Z = \int \gamma(z)\,dz$ and normalized density π(z) = γ(z)/Z, we want to estimate Z and the expectation of an arbitrary function f, Eπ(z)[f(z)]. To do this, we sample K values $\{z_k\}_{k=1}^K$ from a proposal distribution ρ(z), and weight each sample by $w_k = \gamma(z_k)/\rho(z_k)$, leading to the estimators
$$Z \approx \frac{1}{K} \sum_{k=1}^{K} w_k =: \hat{Z}, \qquad \mathbb{E}_{\pi(z)}[f(z)] \approx \sum_{k=1}^{K} \bar{w}_k f(z_k) =: \hat{I}, \quad (4)$$
where $\bar{w}_k = w_k / (K\hat{Z})$ is the normalized weight. The estimator $\hat{Z}$ is often used to estimate marginal distributions, for example p(x), with γ(z) being the joint distribution p(z, x). It is unbiased and its variance decreases as 1/K. The estimator $\hat{I}$ is often used to estimate posterior expectations of gradients, for example the "wake-φ" gradient of RWS (Bornschein & Bengio, 2015; Le et al., 2019), $\mathbb{E}_{p_\theta(z|x)}[-\nabla_\phi \log q_\phi(z|x)]$, with γ(z) = pθ(z, x), π(z) = pθ(z|x) and f(z) = −∇φ log qφ(z|x).
This estimator is asymptotically unbiased and its asymptotic variance decreases as 1/K (Owen, 2013, Eq. 9.8), so increasing K improves the estimator. 3 HYBRID MEMOISED WAKE-SLEEP. We propose hybrid memoised wake-sleep (HMWS), which extends memoised wake-sleep (MWS) to address the issue of memoising continuous latent variables. In HMWS, we learn a generative model pθ(zd, zc, x) of hybrid discrete (zd) and continuous (zc) latent variables, and a recognition model qφ(zd, zc|x) which factorizes into a discrete recognition model qφ(zd|x) and a continuous recognition model qφ(zc|zd, x). Like MWS, HMWS maintains a memory of M discrete latent variables $\{z_d^m\}_{m=1}^M$ per data point x, which is updated in the wake phase of every learning iteration. In the sleep: replay phase, we use the memoised discrete latents to train both the generative model and the recognition model. In the sleep: fantasy phase, we optionally train the recognition model on data generated from the generative model as well. We summarize these learning phases in Fig. 3, give the full algorithm in Alg. 1, and describe each learning phase in detail below. For notational clarity, we present the algorithm for a single training data point x. | Memoized wake-sleep is a past method that builds a variational approximate posterior over discrete latent variables by memorising previously drawn samples. However, memorised wake-sleep can currently be used only on models that are purely discrete. This paper extends memoized wake-sleep by providing a mechanism for using VI to integrate over continuous latent variables. | SP:2a2f7fd9416563a5e07a511ce695b7931139701f |
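The Sec. 2.1 memory update can be sketched as follows. The integer latent codes and the scoring function below are hypothetical placeholders for zd and log pθ(zd, x).

```python
import numpy as np

def memory_update(memory, proposals, log_joint):
    """One MWS-style memory update (Sec. 2.1): merge the old memory with new
    proposals, keep the M unique values scoring highest under p(z_d, x), and
    set weights ω_m ∝ p(z_d^m, x). `log_joint` stands in for log p(z_d, x)."""
    M = len(memory)
    candidates = list(dict.fromkeys(list(memory) + list(proposals)))  # unique union
    candidates.sort(key=log_joint, reverse=True)  # score by log p(z_d, x)
    new_memory = candidates[:M]
    scores = np.exp([log_joint(z) for z in new_memory])
    return new_memory, scores / scores.sum()      # memory and weights ω_m

# Illustration with integer-coded discrete latents and a made-up score.
new_memory, weights = memory_update(
    memory=[0, 1, 6], proposals=[3, 4, 1], log_joint=lambda z: -abs(z - 3))
```

In HMWS the scoring step is the part that changes: with continuous latents present, pθ(zd, x) is no longer available in closed form and must itself be estimated by marginalizing zc with importance sampling.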
Sample and Computation Redistribution for Efficient Face Detection | 1 INTRODUCTION . Face detection is a long-standing problem in computer vision with many applications , such as face alignment ( Bulat & Tzimiropoulos , 2017 ) , face reconstruction ( Feng et al. , 2018 ) , face attribute analysis ( Zhang et al. , 2018 ; Pan et al. , 2018 ) , and face recognition ( Schroff et al. , 2015 ; Deng et al. , 2019 ) . Following the pioneering work of ( Viola & Jones , 2004 ) , numerous face detection algorithms have been designed . Among them , the single-shot anchor-based approaches ( Najibi et al. , 2017 ; Zhang et al. , 2017b ; Tang et al. , 2018 ; Li et al. , 2019 ; Ming et al. , 2019 ; Deng et al. , 2020 ; Liu et al. , 2020 ; Zhu et al. , 2020 ) have recently demonstrated very promising performance . In particular , on the most challenging face detection dataset , WIDER FACE ( Yang et al. , 2016 ) , the average precision ( AP ) on its hard validation set has been boosted to 93.4 % by TinaFace ( Zhu et al. , 2020 ) . Even though TinaFace ( Zhu et al. , 2020 ) achieves impressive results on unconstrained face detection , it employs large-scale ( e.g . 1 , 650 pixels ) testing , which consumes huge amounts of computational resources . In addition , TinaFace design is based on a generic object detector ( i.e . RetinaNet ( Lin et al. , 2017b ) ) , directly taking the classification network as the backbone , tiling dense anchors on the multi-scale feature maps ( i.e . P2 to P7 of neck ) , and adopting heavy head designs . Without considering the prior of faces , the network design of TinaFace is thus redundant and sub-optimal . One approach of optimizing such networks ’ performance is computation redistribution . Since directly taking the backbone of the classification network for object detection is sub-optimal , the recent CR-NAS ( Liang et al. 
, 2020 ) reallocates the computation across different resolutions to obtain a more balanced Effective Receptive Field ( ERF ) , leading to higher detection performance . In BFbox ( Liu & Tang , 2020 ) , a face-appropriate search space is designed , based on the observation of scale distribution gap between general object detection and face detection . In ASFD ( Zhang et al. , 2020a ) , a differential architecture search is employed to discover optimized feature enhance modules for efficient multi-scale feature fusion and context enhancement . Even though ( Liu & Tang , 2020 ; Zhang et al. , 2020a ) have realized the limitation of directly applying general backbone , neck and head settings to face detection , CR-NAS ( Liang et al. , 2020 ) only focuses the optimization on backbone , BFbox ( Liu & Tang , 2020 ) neglects the optimization of head , and ASFD ( Zhang et al. , 2020a ) only explores the best design for neck . Another optimization approach , is the sample redistribution across different scales . Due to the extremely large scale variance of faces in real-world scenarios , different scale augmentation strategies are employed to introduce scale adaptation into the face detector . The most widely used scale augmentation approaches include random square crop ( Zhang et al. , 2017b ; Deng et al. , 2020 ; Zhu et al. , 2020 ) and data anchor sampling ( Tang et al. , 2018 ) . Nevertheless , the scale augmentation parameters in these methods are manually designed for all different network structures . Therefore , traditional multi-scale training in face detection is also tedious and sub-optimal . Since VGA resolution ( 640 × 480 ) is widely used for efficient face detection on numerous mobile phones and digital cameras , we focus on efficient face detection from low-resolution images in this paper . In Fig 1 ( a ) , we give the cumulative face scale distribution on the WIDER FACE validation dataset . 
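The random-square-crop augmentation mentioned above can be sketched as follows. The scale set and output size here are assumptions for illustration, not the exact values used by the cited detectors.

```python
import random

def random_square_crop(img_w, img_h, scales=(0.3, 0.45, 0.6, 0.8, 1.0),
                       out_size=640, rng=None):
    """Sketch of random-square-crop scale augmentation: take a square crop
    whose side is a random fraction of the image's short side, then resize
    it to out_size x out_size. Returns the crop box and the implied zoom."""
    rng = rng or random.Random(0)
    side = max(1, int(min(img_w, img_h) * rng.choice(scales)))
    x0 = rng.randint(0, img_w - side)
    y0 = rng.randint(0, img_h - side)
    zoom = out_size / side   # > 1 enlarges faces (zoom in), < 1 shrinks them
    return (x0, y0, x0 + side, y0 + side), zoom
```

A search over such augmentations amounts to choosing the discrete scale set and the probability of picking each scale, which is what the sample-redistribution search described below automates.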
Under the VGA resolution , most of the faces ( 78.93 % ) in WIDER FACE are smaller than 32×32 pixels . Under this specific scale distribution , both network structure and scale augmentation need to be optimized . In this work , we present a meticulously designed methodology of search space optimization , that addresses both the redistribution between the backbone , neck and head , and the sample redistribution between the most needed scales . As the structure of a face detector determines the distribution of computation and is the key in determining its accuracy and efficiency , we first discover principles of computation distribution under different flop regimes . Inspired by ( Radosavovic et al. , 2020 ) , we control the degrees of freedom and reduce the search space . More specifically , we randomly sample model architectures with different configurations on backbone ( stem and four stages ) , neck and head . Based on the statistics of these models , we compute the empirical bootstrap ( Efron & Tibshirani , 1994 ) and estimate the likely range in which the best models fall . To further decrease the complexity of the search space , we divide the computation ratio estimation for backbone and the whole detector into two steps . To handle extreme scale variations in face detection , we also design a search-able zoom-in and zoom-out space , specified by discrete scales and binary probabilities . In experiments , the proposed computation redistribution and sample redistribution yield significant and consistent improvement on various compute regimes , even surpassing a range of state-of-the-art face detectors by using much fewer flops as shown in Fig . 1 ( b ) . To sum up , this paper makes following contributions : • We have proposed a simplified search space , as well as a two-step search strategy for computation redistribution across different components ( backbone , neck and head ) of a face detector . 
The proposed computation redistribution method can easily boost detection performance through random search . • We have designed a searchable zoom-in and zoom-out space for face-specific scale augmentation , which automatically redistributes more training samples for shallow stages , enhancing the detection performance on small faces . • Extensive experiments conducted on WIDER FACE demonstrate the significantly improved accuracy and efficiency trade-off of the proposed SCRFD across a wide range of compute regimes . 2 RELATED WORK . Face Detection . To deal with extreme variations ( e.g . scale , pose , illumination and occlusion ) in face detection ( Yang et al. , 2016 ) , most of the recent single-shot face detectors focus on improving the anchor sampling/matching or feature enhancement . SSH ( Najibi et al. , 2017 ) builds detection modules on different feature maps with a rich receptive field . S3FD ( Zhang et al. , 2017b ) introduces an anchor compensation strategy by offsetting anchors for outer faces . PyramidBox ( Tang et al. , 2018 ) formulates a data-anchor-sampling strategy to increase the proportion of small faces in the training data . DSFD ( Li et al. , 2019 ) introduces small-face supervision signals on the backbone , which implicitly boosts the performance of pyramid features . Group sampling ( Ming et al. , 2019 ) emphasizes the importance of the ratio for matched and unmatched anchors . RetinaFace ( Deng et al. , 2020 ) employs deformable context modules and additional landmark annotations to improve the performance of face detection . HAMBox ( Liu et al. , 2020 ) finds that many unmatched anchors in the training phase also have strong localization ability and proposes an online high-quality anchor mining strategy to assign high-quality anchors for outer faces . BFbox ( Liu & Tang , 2020 ) employs a single-path one-shot search method ( Guo et al. , 2019 ) to jointly optimize the backbone and neck for the face detector . ASFD ( Zhang et al.
, 2020a ) explores a differential architecture search to discover optimized feature enhance modules for efficient multi-scale feature fusion and context enhancement . All these methods are either designed by expert experience or partially optimized on backbone , neck and head . By contrast , we search for computation redistribution across different components ( backbone , neck and head ) of a face detector across a wide range of compute regimes . Neural Architecture Search . Given a fixed search space of possible networks , Neural Architecture Search ( NAS ) automatically finds a good model within the search space . DetNAS ( Chen et al. , 2019b ) adopts an evolutionary algorithm for the backbone search to boost object detection on COCO ( Lin et al. , 2014 ) . By contrast , CR-NAS ( Liang et al. , 2020 ) reallocates the computation across different stages within the backbone to improve object detection . NAS-FPN ( Ghiasi et al. , 2019 ) uses reinforcement learning to search the proper FPN for general object detection . As there is an obvious distribution gap between COCO ( Lin et al. , 2014 ) and WIDER FACE ( Yang et al. , 2016 ) , the experience in the above methods is not directly applicable to face detection but gives us the inspiration that the backbone , neck and head can be optimized to enhance the performance of face detection . Inspired by RegNet ( Radosavovic et al. , 2020 ) , we optimize the computation distribution on backbone , neck and head based on the statistics from a group of randomly sampled models . We successfully reduce the search space and find a stable computation distribution under a particular complexity regime , which significantly improves the model ’ s performance . 3 METHODOLOGY . To efficiently and accurately detect small faces from low-resolution images ( e.g . VGA 640 × 480 ) , we propose two methodologies that , when combined , outperform the state-of-the-art . In Sec .
3.1 , we explore the computation redistribution across different stages of backbone , as well as different components ( i.e . backbone , neck and head ) of the whole detector , given a pre-defined computation budget . Then , in Sec . 3.2 , we investigate the redistribution of positive training samples across different scales of feature maps by searching optimized scale augmentations . 3.1 COMPUTATION REDISTRIBUTION . As illustrated in Fig . 2 , we apply our search method on a network consisting of ( 1 ) RetinaNet ( Lin et al. , 2017a ) , with ResNet ( He et al. , 2016 ) as the backbone , ( 2 ) Path Aggregation Feature Pyramid Network ( PAFPN ) ( Liu et al. , 2018 ) as the neck , and ( 3 ) stacked 3 × 3 convolutional layers for the head . Despite the generally simple structure , the total number of possible network configurations of the search space becomes unwieldy . Therefore , we attempt to simplify the tremendous search space and arrive at a low-dimensional design space , consisting of simple and effective networks . 3.1.1 SEARCH SPACE DESIGN . Inspired by RegNet ( Radosavovic et al. , 2020 ) , we explore the structures of face detectors , assuming fixed standard network blocks ( i.e. , basic residual or bottleneck blocks with a fixed bottleneck ratio of 4 ) . In our case , the structure of a face detector includes : • the backbone stem , three 3× 3 convolutional layers with w1 output channels ( He et al. , 2019a ) . • the backbone body , four stages ( i.e . C2 , C3 , C4 and C5 ) operating at progressively reduced resolution , with each stage consisting of a sequence of identical blocks . For each stage i , the degrees of freedom include the number of blocks di ( i.e . network depth ) and the block width wi ( i.e . number of channels ) . • the neck , a multi-scale feature aggregation module by a top-down path and a bottom-up path with n channels ( Liu et al. , 2018 ) . • the head , with hi channels of m blocks to predict face scores and regress face boxes . 
The search space can be initially designed as follows . As the channel number of the stem is equal to the block width of the first residual block in C2 , the degree of freedom of the stem w1 can be merged into w2 . In addition , we employ a shared head design for the three scales of feature maps and fix the channel number for all 3 × 3 convolutional layers within the heads . Therefore , we reduce the degrees of freedom to three within our neck and head design : ( 1 ) output channel number n for the neck , ( 2 ) output channel number h for the head , and ( 3 ) the number of 3 × 3 convolutional layers m. We perform uniform sampling of n ≤ 256 , h ≤ 256 , and m ≤ 6 ( both n and h are divisible by 8 ) . The backbone search space has 8 degrees of freedom as there are 4 stages and each stage i has 2 parameters : the number of blocks di and block width wi . Following RegNet ( Radosavovic et al. , 2020 ) , we perform uniform sampling of di ≤ 24 and wi ≤ 512 ( wi is divisible by 8 ) . As state-of-the-art backbones have increasing widths ( Radosavovic et al. , 2020 ) , we also constrain the search space , according to the principle of wi+1 ≥ wi . | The author is motivated by two simple but effective methods: 1) Computation Redistribution (CR), which reallocates the computation between the backbone, neck and head; 2) Sample Redistribution (SR), which augments training samples for the most needed stages. The author uses a simplified search space for computation redistribution across different components and designs a searchable zoom-in and zoom-out space for face-specific scale augmentation. The SCRFD-34GF yields state-of-the-art performance on many datasets (e.g. WIDER FACE). | SP:4409ff223cb7efac36bc1daa8b3a86c772416f90
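The sampling rules stated for this search space (di ≤ 24; wi ≤ 512, divisible by 8 and non-decreasing across stages; n, h ≤ 256 and divisible by 8; m ≤ 6) can be sketched as a simple generator. This is an illustrative reimplementation of the stated constraints, not the authors' code:

```python
import random

def sample_config(rng):
    """Draw one detector configuration from the constrained search space."""
    depths, widths = [], []
    prev_w = 8
    for _ in range(4):                      # backbone stages C2..C5
        depths.append(rng.randint(1, 24))   # d_i <= 24
        w = rng.randrange(prev_w, 520, 8)   # w_i <= 512, divisible by 8, w_{i+1} >= w_i
        widths.append(w)
        prev_w = w
    return {
        "depths": depths,
        "widths": widths,
        "neck_channels": rng.randrange(8, 264, 8),   # n <= 256, divisible by 8
        "head_channels": rng.randrange(8, 264, 8),   # h <= 256, divisible by 8
        "head_blocks": rng.randint(1, 6),            # m <= 6
    }

rng = random.Random(0)
for _ in range(100):
    cfg = sample_config(rng)
    assert all(w % 8 == 0 and w <= 512 for w in cfg["widths"])
    assert cfg["widths"] == sorted(cfg["widths"])    # non-decreasing widths
    assert max(cfg["depths"]) <= 24 and cfg["head_blocks"] <= 6
```

Drawing widths from `randrange(prev_w, 520, 8)` enforces the non-decreasing-width principle by construction rather than by rejection sampling.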
Sample and Computation Redistribution for Efficient Face Detection | 1 INTRODUCTION . Face detection is a long-standing problem in computer vision with many applications , such as face alignment ( Bulat & Tzimiropoulos , 2017 ) , face reconstruction ( Feng et al. , 2018 ) , face attribute analysis ( Zhang et al. , 2018 ; Pan et al. , 2018 ) , and face recognition ( Schroff et al. , 2015 ; Deng et al. , 2019 ) . Following the pioneering work of ( Viola & Jones , 2004 ) , numerous face detection algorithms have been designed . Among them , the single-shot anchor-based approaches ( Najibi et al. , 2017 ; Zhang et al. , 2017b ; Tang et al. , 2018 ; Li et al. , 2019 ; Ming et al. , 2019 ; Deng et al. , 2020 ; Liu et al. , 2020 ; Zhu et al. , 2020 ) have recently demonstrated very promising performance . In particular , on the most challenging face detection dataset , WIDER FACE ( Yang et al. , 2016 ) , the average precision ( AP ) on its hard validation set has been boosted to 93.4 % by TinaFace ( Zhu et al. , 2020 ) . Even though TinaFace ( Zhu et al. , 2020 ) achieves impressive results on unconstrained face detection , it employs large-scale ( e.g . 1 , 650 pixels ) testing , which consumes huge amounts of computational resources . In addition , TinaFace design is based on a generic object detector ( i.e . RetinaNet ( Lin et al. , 2017b ) ) , directly taking the classification network as the backbone , tiling dense anchors on the multi-scale feature maps ( i.e . P2 to P7 of neck ) , and adopting heavy head designs . Without considering the prior of faces , the network design of TinaFace is thus redundant and sub-optimal . One approach of optimizing such networks ’ performance is computation redistribution . Since directly taking the backbone of the classification network for object detection is sub-optimal , the recent CR-NAS ( Liang et al. 
, 2020 ) reallocates the computation across different resolutions to obtain a more balanced Effective Receptive Field ( ERF ) , leading to higher detection performance . In BFbox ( Liu & Tang , 2020 ) , a face-appropriate search space is designed , based on the observation of the scale distribution gap between general object detection and face detection . In ASFD ( Zhang et al. , 2020a ) , a differential architecture search is employed to discover optimized feature enhance modules for efficient multi-scale feature fusion and context enhancement . Even though ( Liu & Tang , 2020 ; Zhang et al. , 2020a ) have realized the limitation of directly applying general backbone , neck and head settings to face detection , CR-NAS ( Liang et al. , 2020 ) only focuses the optimization on the backbone , BFbox ( Liu & Tang , 2020 ) neglects the optimization of the head , and ASFD ( Zhang et al. , 2020a ) only explores the best design for the neck . Another optimization approach is sample redistribution across different scales . Due to the extremely large scale variance of faces in real-world scenarios , different scale augmentation strategies are employed to introduce scale adaptation into the face detector . The most widely used scale augmentation approaches include random square crop ( Zhang et al. , 2017b ; Deng et al. , 2020 ; Zhu et al. , 2020 ) and data anchor sampling ( Tang et al. , 2018 ) . Nevertheless , the scale augmentation parameters in these methods are manually designed for all different network structures . Therefore , traditional multi-scale training in face detection is also tedious and sub-optimal . Since VGA resolution ( 640 × 480 ) is widely used for efficient face detection on numerous mobile phones and digital cameras , we focus on efficient face detection from low-resolution images in this paper . In Fig . 1 ( a ) , we give the cumulative face scale distribution on the WIDER FACE validation dataset .
Under the VGA resolution , most of the faces ( 78.93 % ) in WIDER FACE are smaller than 32×32 pixels . Under this specific scale distribution , both network structure and scale augmentation need to be optimized . In this work , we present a meticulously designed methodology of search space optimization that addresses both the redistribution between the backbone , neck and head , and the sample redistribution between the most needed scales . As the structure of a face detector determines the distribution of computation and is the key in determining its accuracy and efficiency , we first discover principles of computation distribution under different flop regimes . Inspired by ( Radosavovic et al. , 2020 ) , we control the degrees of freedom and reduce the search space . More specifically , we randomly sample model architectures with different configurations on backbone ( stem and four stages ) , neck and head . Based on the statistics of these models , we compute the empirical bootstrap ( Efron & Tibshirani , 1994 ) and estimate the likely range in which the best models fall . To further decrease the complexity of the search space , we divide the computation ratio estimation for the backbone and the whole detector into two steps . To handle extreme scale variations in face detection , we also design a searchable zoom-in and zoom-out space , specified by discrete scales and binary probabilities . In experiments , the proposed computation redistribution and sample redistribution yield significant and consistent improvement on various compute regimes , even surpassing a range of state-of-the-art face detectors by using much fewer flops as shown in Fig . 1 ( b ) . To sum up , this paper makes the following contributions : • We have proposed a simplified search space , as well as a two-step search strategy for computation redistribution across different components ( backbone , neck and head ) of a face detector .
The proposed computation redistribution method can easily boost detection performance through random search . • We have designed a searchable zoom-in and zoom-out space for face-specific scale augmentation , which automatically redistributes more training samples for shallow stages , enhancing the detection performance on small faces . • Extensive experiments conducted on WIDER FACE demonstrate the significantly improved accuracy and efficiency trade-off of the proposed SCRFD across a wide range of compute regimes . 2 RELATED WORK . Face Detection . To deal with extreme variations ( e.g . scale , pose , illumination and occlusion ) in face detection ( Yang et al. , 2016 ) , most of the recent single-shot face detectors focus on improving the anchor sampling/matching or feature enhancement . SSH ( Najibi et al. , 2017 ) builds detection modules on different feature maps with a rich receptive field . S3FD ( Zhang et al. , 2017b ) introduces an anchor compensation strategy by offsetting anchors for outer faces . PyramidBox ( Tang et al. , 2018 ) formulates a data-anchor-sampling strategy to increase the proportion of small faces in the training data . DSFD ( Li et al. , 2019 ) introduces small-face supervision signals on the backbone , which implicitly boosts the performance of pyramid features . Group sampling ( Ming et al. , 2019 ) emphasizes the importance of the ratio for matched and unmatched anchors . RetinaFace ( Deng et al. , 2020 ) employs deformable context modules and additional landmark annotations to improve the performance of face detection . HAMBox ( Liu et al. , 2020 ) finds that many unmatched anchors in the training phase also have strong localization ability and proposes an online high-quality anchor mining strategy to assign high-quality anchors for outer faces . BFbox ( Liu & Tang , 2020 ) employs a single-path one-shot search method ( Guo et al. , 2019 ) to jointly optimize the backbone and neck for the face detector . ASFD ( Zhang et al.
, 2020a ) explores a differential architecture search to discover optimized feature enhance modules for efficient multi-scale feature fusion and context enhancement . All these methods are either designed by expert experience or partially optimized on backbone , neck and head . By contrast , we search for computation redistribution across different components ( backbone , neck and head ) of a face detector across a wide range of compute regimes . Neural Architecture Search . Given a fixed search space of possible networks , Neural Architecture Search ( NAS ) automatically finds a good model within the search space . DetNAS ( Chen et al. , 2019b ) adopts an evolutionary algorithm for the backbone search to boost object detection on COCO ( Lin et al. , 2014 ) . By contrast , CR-NAS ( Liang et al. , 2020 ) reallocates the computation across different stages within the backbone to improve object detection . NAS-FPN ( Ghiasi et al. , 2019 ) uses reinforcement learning to search the proper FPN for general object detection . As there is an obvious distribution gap between COCO ( Lin et al. , 2014 ) and WIDER FACE ( Yang et al. , 2016 ) , the experience in the above methods is not directly applicable to face detection but gives us the inspiration that the backbone , neck and head can be optimized to enhance the performance of face detection . Inspired by RegNet ( Radosavovic et al. , 2020 ) , we optimize the computation distribution on backbone , neck and head based on the statistics from a group of randomly sampled models . We successfully reduce the search space and find a stable computation distribution under a particular complexity regime , which significantly improves the model ’ s performance . 3 METHODOLOGY . To efficiently and accurately detect small faces from low-resolution images ( e.g . VGA 640 × 480 ) , we propose two methodologies that , when combined , outperform the state-of-the-art . In Sec .
3.1 , we explore the computation redistribution across different stages of backbone , as well as different components ( i.e . backbone , neck and head ) of the whole detector , given a pre-defined computation budget . Then , in Sec . 3.2 , we investigate the redistribution of positive training samples across different scales of feature maps by searching optimized scale augmentations . 3.1 COMPUTATION REDISTRIBUTION . As illustrated in Fig . 2 , we apply our search method on a network consisting of ( 1 ) RetinaNet ( Lin et al. , 2017a ) , with ResNet ( He et al. , 2016 ) as the backbone , ( 2 ) Path Aggregation Feature Pyramid Network ( PAFPN ) ( Liu et al. , 2018 ) as the neck , and ( 3 ) stacked 3 × 3 convolutional layers for the head . Despite the generally simple structure , the total number of possible network configurations of the search space becomes unwieldy . Therefore , we attempt to simplify the tremendous search space and arrive at a low-dimensional design space , consisting of simple and effective networks . 3.1.1 SEARCH SPACE DESIGN . Inspired by RegNet ( Radosavovic et al. , 2020 ) , we explore the structures of face detectors , assuming fixed standard network blocks ( i.e. , basic residual or bottleneck blocks with a fixed bottleneck ratio of 4 ) . In our case , the structure of a face detector includes : • the backbone stem , three 3× 3 convolutional layers with w1 output channels ( He et al. , 2019a ) . • the backbone body , four stages ( i.e . C2 , C3 , C4 and C5 ) operating at progressively reduced resolution , with each stage consisting of a sequence of identical blocks . For each stage i , the degrees of freedom include the number of blocks di ( i.e . network depth ) and the block width wi ( i.e . number of channels ) . • the neck , a multi-scale feature aggregation module by a top-down path and a bottom-up path with n channels ( Liu et al. , 2018 ) . • the head , with hi channels of m blocks to predict face scores and regress face boxes . 
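The statistics-driven design relies on the empirical bootstrap mentioned in the introduction: resample the population of randomly sampled models and read off where the best ones are likely to fall. A minimal sketch, using synthetic AP scores rather than results from the paper:

```python
import random

def bootstrap_best_range(scores, n_boot=1000, alpha=0.05, seed=0):
    """Empirical bootstrap of the best score among randomly sampled models:
    resample the population with replacement, record the maximum of each
    resample, and read off quantiles of those maxima as the likely range
    in which the best models fall."""
    rng = random.Random(seed)
    maxima = sorted(max(rng.choices(scores, k=len(scores)))
                    for _ in range(n_boot))
    return (maxima[int(n_boot * alpha / 2)],
            maxima[int(n_boot * (1 - alpha / 2)) - 1])

# Synthetic AP scores for a population of 320 sampled architectures
# (placeholder numbers, not measurements from the paper).
rng = random.Random(42)
scores = [round(rng.uniform(0.60, 0.78), 4) for _ in range(320)]
low, high = bootstrap_best_range(scores)
assert 0.60 <= low <= high <= max(scores)
```

Applying this to the computation ratios of the top-scoring models, rather than to the scores themselves, gives the likely range of good ratios used to narrow the search space.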
The search space can be initially designed as follows . As the channel number of the stem is equal to the block width of the first residual block in C2 , the degree of freedom of the stem w1 can be merged into w2 . In addition , we employ a shared head design for the three scales of feature maps and fix the channel number for all 3 × 3 convolutional layers within the heads . Therefore , we reduce the degrees of freedom to three within our neck and head design : ( 1 ) output channel number n for the neck , ( 2 ) output channel number h for the head , and ( 3 ) the number of 3 × 3 convolutional layers m. We perform uniform sampling of n ≤ 256 , h ≤ 256 , and m ≤ 6 ( both n and h are divisible by 8 ) . The backbone search space has 8 degrees of freedom as there are 4 stages and each stage i has 2 parameters : the number of blocks di and block width wi . Following RegNet ( Radosavovic et al. , 2020 ) , we perform uniform sampling of di ≤ 24 and wi ≤ 512 ( wi is divisible by 8 ) . As state-of-the-art backbones have increasing widths ( Radosavovic et al. , 2020 ) , we also constrain the search space , according to the principle of wi+1 ≥ wi . | The authors proposed a face detection algorithm based on the optimized network architecture and data sampling strategy. Their novelties are two-fold: one is Computation Redistribution (CR) which optimally reallocates the computation between the backbone, neck and head of the model, and the other is Sample Redistribution (SR) which automatically redistributes more training samples for shallow stages. The ablation study showed that both of the proposed methods are effective and the comparative study showed that their whole pipeline achieved the highest accuracy among state-of-the-art methods on the public WIDER FACE dataset. | SP:4409ff223cb7efac36bc1daa8b3a86c772416f90
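The quantity the two-step search reasons about is the computation distribution, i.e. per-component flop counts normalized into ratios: step one estimates the distribution across backbone stages, step two across backbone, neck and head of the whole detector. A small sketch with hypothetical flop counts (the values are placeholders, not measurements from the paper):

```python
def flop_ratios(flops):
    """Normalize per-component flop counts into a computation distribution."""
    total = sum(flops.values())
    return {name: f / total for name, f in flops.items()}

# Hypothetical flop counts (GFLOPs) for one sampled detector.
backbone_stage_flops = {"C2": 0.9, "C3": 0.6, "C4": 0.5, "C5": 0.3}
detector_flops = {"backbone": 2.3, "neck": 0.4, "head": 0.8}

stage_dist = flop_ratios(backbone_stage_flops)   # step 1: within the backbone
component_dist = flop_ratios(detector_flops)     # step 2: whole detector
assert abs(sum(stage_dist.values()) - 1.0) < 1e-9
assert abs(sum(component_dist.values()) - 1.0) < 1e-9
```

Collecting such distributions over many sampled models is what allows the bootstrap analysis to estimate the range of ratios shared by the best performers.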
Sample and Computation Redistribution for Efficient Face Detection | 1 INTRODUCTION . Face detection is a long-standing problem in computer vision with many applications , such as face alignment ( Bulat & Tzimiropoulos , 2017 ) , face reconstruction ( Feng et al. , 2018 ) , face attribute analysis ( Zhang et al. , 2018 ; Pan et al. , 2018 ) , and face recognition ( Schroff et al. , 2015 ; Deng et al. , 2019 ) . Following the pioneering work of ( Viola & Jones , 2004 ) , numerous face detection algorithms have been designed . Among them , the single-shot anchor-based approaches ( Najibi et al. , 2017 ; Zhang et al. , 2017b ; Tang et al. , 2018 ; Li et al. , 2019 ; Ming et al. , 2019 ; Deng et al. , 2020 ; Liu et al. , 2020 ; Zhu et al. , 2020 ) have recently demonstrated very promising performance . In particular , on the most challenging face detection dataset , WIDER FACE ( Yang et al. , 2016 ) , the average precision ( AP ) on its hard validation set has been boosted to 93.4 % by TinaFace ( Zhu et al. , 2020 ) . Even though TinaFace ( Zhu et al. , 2020 ) achieves impressive results on unconstrained face detection , it employs large-scale ( e.g . 1 , 650 pixels ) testing , which consumes huge amounts of computational resources . In addition , TinaFace design is based on a generic object detector ( i.e . RetinaNet ( Lin et al. , 2017b ) ) , directly taking the classification network as the backbone , tiling dense anchors on the multi-scale feature maps ( i.e . P2 to P7 of neck ) , and adopting heavy head designs . Without considering the prior of faces , the network design of TinaFace is thus redundant and sub-optimal . One approach of optimizing such networks ’ performance is computation redistribution . Since directly taking the backbone of the classification network for object detection is sub-optimal , the recent CR-NAS ( Liang et al. 
, 2020 ) reallocates the computation across different resolutions to obtain a more balanced Effective Receptive Field ( ERF ) , leading to higher detection performance . In BFbox ( Liu & Tang , 2020 ) , a face-appropriate search space is designed , based on the observation of the scale distribution gap between general object detection and face detection . In ASFD ( Zhang et al. , 2020a ) , a differential architecture search is employed to discover optimized feature enhance modules for efficient multi-scale feature fusion and context enhancement . Even though ( Liu & Tang , 2020 ; Zhang et al. , 2020a ) have realized the limitation of directly applying general backbone , neck and head settings to face detection , CR-NAS ( Liang et al. , 2020 ) only focuses the optimization on the backbone , BFbox ( Liu & Tang , 2020 ) neglects the optimization of the head , and ASFD ( Zhang et al. , 2020a ) only explores the best design for the neck . Another optimization approach is sample redistribution across different scales . Due to the extremely large scale variance of faces in real-world scenarios , different scale augmentation strategies are employed to introduce scale adaptation into the face detector . The most widely used scale augmentation approaches include random square crop ( Zhang et al. , 2017b ; Deng et al. , 2020 ; Zhu et al. , 2020 ) and data anchor sampling ( Tang et al. , 2018 ) . Nevertheless , the scale augmentation parameters in these methods are manually designed for all different network structures . Therefore , traditional multi-scale training in face detection is also tedious and sub-optimal . Since VGA resolution ( 640 × 480 ) is widely used for efficient face detection on numerous mobile phones and digital cameras , we focus on efficient face detection from low-resolution images in this paper . In Fig . 1 ( a ) , we give the cumulative face scale distribution on the WIDER FACE validation dataset .
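A searchable variant of the scale-augmentation idea discussed above can be expressed as a set of discrete zoom factors gated by binary on/off indicators, so that the search optimizes which zooms are used instead of fixing them by hand. The concrete scale values below are illustrative assumptions, not the ones searched in the paper:

```python
import random

def sample_zoom_scale(scales, enabled, rng):
    """Draw one zoom factor from a searchable augmentation space: `scales`
    are discrete zoom factors (< 1 zooms out, > 1 zooms in) and `enabled`
    is the binary vector a search would optimize per network structure."""
    active = [s for s, on in zip(scales, enabled) if on]
    return rng.choice(active)

scales = [0.3, 0.5, 0.7, 1.0, 1.5, 2.0]   # hypothetical discrete zoom factors
enabled = [1, 1, 0, 1, 0, 1]              # one point in the binary search space
rng = random.Random(1)
draws = {sample_zoom_scale(scales, enabled, rng) for _ in range(200)}
assert draws <= {0.3, 0.5, 1.0, 2.0}      # disabled scales are never drawn
```

Each sampled zoom factor would then drive a square crop and resize of the training image, changing which feature-map stride the resulting faces are matched to.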
Under the VGA resolution , most of the faces ( 78.93 % ) in WIDER FACE are smaller than 32×32 pixels . Under this specific scale distribution , both network structure and scale augmentation need to be optimized . In this work , we present a meticulously designed methodology of search space optimization that addresses both the redistribution between the backbone , neck and head , and the sample redistribution between the most needed scales . As the structure of a face detector determines the distribution of computation and is the key in determining its accuracy and efficiency , we first discover principles of computation distribution under different flop regimes . Inspired by ( Radosavovic et al. , 2020 ) , we control the degrees of freedom and reduce the search space . More specifically , we randomly sample model architectures with different configurations on backbone ( stem and four stages ) , neck and head . Based on the statistics of these models , we compute the empirical bootstrap ( Efron & Tibshirani , 1994 ) and estimate the likely range in which the best models fall . To further decrease the complexity of the search space , we divide the computation ratio estimation for the backbone and the whole detector into two steps . To handle extreme scale variations in face detection , we also design a searchable zoom-in and zoom-out space , specified by discrete scales and binary probabilities . In experiments , the proposed computation redistribution and sample redistribution yield significant and consistent improvement on various compute regimes , even surpassing a range of state-of-the-art face detectors by using much fewer flops as shown in Fig . 1 ( b ) . To sum up , this paper makes the following contributions : • We have proposed a simplified search space , as well as a two-step search strategy for computation redistribution across different components ( backbone , neck and head ) of a face detector .
The proposed computation redistribution method can easily boost detection performance through random search . • We have designed a searchable zoom-in and zoom-out space for face-specific scale augmentation , which automatically redistributes more training samples for shallow stages , enhancing the detection performance on small faces . • Extensive experiments conducted on WIDER FACE demonstrate the significantly improved accuracy and efficiency trade-off of the proposed SCRFD across a wide range of compute regimes . 2 RELATED WORK . Face Detection . To deal with extreme variations ( e.g . scale , pose , illumination and occlusion ) in face detection ( Yang et al. , 2016 ) , most of the recent single-shot face detectors focus on improving the anchor sampling/matching or feature enhancement . SSH ( Najibi et al. , 2017 ) builds detection modules on different feature maps with a rich receptive field . S3FD ( Zhang et al. , 2017b ) introduces an anchor compensation strategy by offsetting anchors for outer faces . PyramidBox ( Tang et al. , 2018 ) formulates a data-anchor-sampling strategy to increase the proportion of small faces in the training data . DSFD ( Li et al. , 2019 ) introduces small-face supervision signals on the backbone , which implicitly boosts the performance of pyramid features . Group sampling ( Ming et al. , 2019 ) emphasizes the importance of the ratio for matched and unmatched anchors . RetinaFace ( Deng et al. , 2020 ) employs deformable context modules and additional landmark annotations to improve the performance of face detection . HAMBox ( Liu et al. , 2020 ) finds that many unmatched anchors in the training phase also have strong localization ability and proposes an online high-quality anchor mining strategy to assign high-quality anchors for outer faces . BFbox ( Liu & Tang , 2020 ) employs a single-path one-shot search method ( Guo et al. , 2019 ) to jointly optimize the backbone and neck for the face detector . ASFD ( Zhang et al.
, 2020a ) explores a differential architecture search to discover optimized feature enhance modules for efficient multi-scale feature fusion and context enhancement . All these methods are either designed by expert experience or partially optimized on backbone , neck and head . By contrast , we search for computation redistribution across different components ( backbone , neck and head ) of a face detector across a wide range of compute regimes . Neural Architecture Search . Given a fixed search space of possible networks , Neural Architecture Search ( NAS ) automatically finds a good model within the search space . DetNAS ( Chen et al. , 2019b ) adopts an evolutionary algorithm for the backbone search to boost object detection on COCO ( Lin et al. , 2014 ) . By contrast , CR-NAS ( Liang et al. , 2020 ) reallocates the computation across different stages within the backbone to improve object detection . NAS-FPN ( Ghiasi et al. , 2019 ) uses reinforcement learning to search the proper FPN for general object detection . As there is an obvious distribution gap between COCO ( Lin et al. , 2014 ) and WIDER FACE ( Yang et al. , 2016 ) , the experience in the above methods is not directly applicable to face detection but gives us the inspiration that the backbone , neck and head can be optimized to enhance the performance of face detection . Inspired by RegNet ( Radosavovic et al. , 2020 ) , we optimize the computation distribution on backbone , neck and head based on the statistics from a group of randomly sampled models . We successfully reduce the search space and find a stable computation distribution under a particular complexity regime , which significantly improves the model ’ s performance . 3 METHODOLOGY . To efficiently and accurately detect small faces from low-resolution images ( e.g . VGA 640 × 480 ) , we propose two methodologies that , when combined , outperform the state-of-the-art . In Sec .
3.1 , we explore the computation redistribution across different stages of backbone , as well as different components ( i.e . backbone , neck and head ) of the whole detector , given a pre-defined computation budget . Then , in Sec . 3.2 , we investigate the redistribution of positive training samples across different scales of feature maps by searching optimized scale augmentations . 3.1 COMPUTATION REDISTRIBUTION . As illustrated in Fig . 2 , we apply our search method on a network consisting of ( 1 ) RetinaNet ( Lin et al. , 2017a ) , with ResNet ( He et al. , 2016 ) as the backbone , ( 2 ) Path Aggregation Feature Pyramid Network ( PAFPN ) ( Liu et al. , 2018 ) as the neck , and ( 3 ) stacked 3 × 3 convolutional layers for the head . Despite the generally simple structure , the total number of possible network configurations of the search space becomes unwieldy . Therefore , we attempt to simplify the tremendous search space and arrive at a low-dimensional design space , consisting of simple and effective networks . 3.1.1 SEARCH SPACE DESIGN . Inspired by RegNet ( Radosavovic et al. , 2020 ) , we explore the structures of face detectors , assuming fixed standard network blocks ( i.e. , basic residual or bottleneck blocks with a fixed bottleneck ratio of 4 ) . In our case , the structure of a face detector includes : • the backbone stem , three 3× 3 convolutional layers with w1 output channels ( He et al. , 2019a ) . • the backbone body , four stages ( i.e . C2 , C3 , C4 and C5 ) operating at progressively reduced resolution , with each stage consisting of a sequence of identical blocks . For each stage i , the degrees of freedom include the number of blocks di ( i.e . network depth ) and the block width wi ( i.e . number of channels ) . • the neck , a multi-scale feature aggregation module by a top-down path and a bottom-up path with n channels ( Liu et al. , 2018 ) . • the head , with hi channels of m blocks to predict face scores and regress face boxes . 
The search space can be initially designed as follows. As the channel number of the stem is equal to the block width of the first residual block in C2, the degree of freedom of the stem w1 can be merged into w2. In addition, we employ a shared head design for the three scales of feature maps and fix the channel number for all 3 × 3 convolutional layers within the heads. Therefore, we reduce the degrees of freedom to three within our neck and head design: (1) the output channel number n of the neck, (2) the output channel number h of the head, and (3) the number m of 3 × 3 convolutional layers. We perform uniform sampling of n ≤ 256, h ≤ 256, and m ≤ 6 (both n and h are divisible by 8). The backbone search space has 8 degrees of freedom, as there are 4 stages and each stage i has 2 parameters: the number of blocks di and the block width wi. Following RegNet (Radosavovic et al., 2020), we perform uniform sampling of di ≤ 24 and wi ≤ 512 (wi is divisible by 8). As state-of-the-art backbones have increasing widths (Radosavovic et al., 2020), we also constrain the search space according to the principle wi+1 ≥ wi. | This paper presents a face detection method that aims to deal with two challenges in unconstrained face detection: high computation cost and detecting small faces. Specifically, the authors adopted a network structure search method along with a two-step searching strategy for computation redistribution between the backbone, neck and head of the network. Sample redistribution between the scales is achieved using a searchable zoom-in and zoom-out space for face scale augmentation. Experiments were conducted on a benchmark dataset: WIDER FACE. | SP:4409ff223cb7efac36bc1daa8b3a86c772416f90 |
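The constrained random sampling described in this row can be sketched in code. This is not the authors' released implementation; the function names (`sample_backbone`, `sample_detector`) and the dictionary layout are illustrative assumptions, while the numeric constraints (di ≤ 24, wi ≤ 512 and non-decreasing, n ≤ 256, h ≤ 256, m ≤ 6, widths divisible by 8) come from the text above:

```python
import random

def sample_backbone(max_depth=24, max_width=512):
    """Sample one backbone configuration: per-stage depths d_i and widths w_i.
    Widths are divisible by 8 and sorted so that w_{i+1} >= w_i."""
    depths = [random.randint(1, max_depth) for _ in range(4)]
    widths = sorted(random.randrange(8, max_width + 1, 8) for _ in range(4))
    return depths, widths

def sample_detector():
    """Sample a full detector configuration (backbone + neck + head),
    following the paper's degrees of freedom n, h and m."""
    depths, widths = sample_backbone()
    return {
        "d": depths,                               # backbone depths d_1..d_4
        "w": widths,                               # backbone widths w_1..w_4
        "n": random.randrange(8, 257, 8),          # neck channels, n <= 256
        "h": random.randrange(8, 257, 8),          # head channels, h <= 256
        "m": random.randint(1, 6),                 # head conv layers, m <= 6
    }
```

Statistics over many such samples (e.g. FLOPs vs. accuracy of the sampled models) would then drive the computation redistribution, as in RegNet-style design-space analysis.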
Implicit Bias of Adversarial Training for Deep Neural Networks | 1 INTRODUCTION . Deep neural networks (DNNs) have achieved great success in many fields, such as computer vision (Krizhevsky et al., 2012; He et al., 2015) and natural language processing (Collobert & Weston, 2008), among other applications. These breakthroughs have also raised the importance of research on the robustness and security of DNNs, among which the study of adversarial examples is especially prevalent. Adversarial examples are obtained by adding carefully crafted imperceptible perturbations to the original examples, which can sharply change the predictions of DNNs with high confidence (Szegedy et al., 2014; Nguyen et al., 2015). Such vulnerability of DNNs raises security concerns about deploying them in security-critical systems, including vision for autonomous cars and face recognition. It is therefore crucial to develop defense mechanisms against these adversarial examples for deep learning. To this end, many defense strategies have been proposed to make DNNs resistant to adversarial examples, such as adding a randomization layer before the input to the classifier (Xie et al., 2018), input transformations (Guo et al., 2018), adversarial training (Madry et al., 2018), etc. However, Athalye et al. (2018) pointed out that most of these defense techniques are ineffective and give a false sense of security due to obfuscated gradients, with the exception of adversarial training—a method which has not been comprehensively broken yet. Given a C-class training dataset {(x_i, y_i)}_{i=1}^n with natural examples x_i ∈ R^d and corresponding labels y_i ∈ {1, ...
, C}, adversarial training is formulated as solving the following minimax optimization problem: min_W L̃(W) = min_W (1/n) Σ_{i=1}^n max_{δ_i ∈ B(0, ε)} ℓ(x_i + δ_i(W), y_i; W), (1) where f is the DNN function, ℓ is the loss function, and δ_i(W) is the adversarial perturbation generated by some adversary A(x_i, y_i; W), typically depending on the model parameters W, within the set B(0, ε) = {δ : ‖δ‖_p ≤ ε} around the natural example x_i. Commonly, adversarial training conducts a two-player game at each iteration: the inner maximization attacks the model and finds the perturbation of the original example that maximizes the classification loss ℓ; the outer minimization, on the other hand, updates the model parameters W with gradient descent such that the loss ℓ is minimized on the adversarial examples generated by the inner maximization (see Algorithm 1 for details). To have a better theoretical understanding of the fact that adversarial training empirically improves the robustness of DNNs against adversarial examples, Zhang et al. (2019) decomposed the prediction error for adversarial examples and identified a trade-off between robustness and accuracy, while Li et al. (2020) studied the inductive bias of gradient-descent-based adversarial training for logistic regression on linearly separable data. Faghri et al. (2021) studied the adversarial robustness of linear neural networks by exploring the optimization bias of different methods. Yu et al. (2021) studied adversarial training through the bias-variance decomposition and showed that its generalization error on clean examples mainly comes from the bias. However, even for the simplest DNN—the deep linear network—we notice that there exists no work that theoretically understands the robustness achieved by adversarial training by exploring its implicit bias.
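Eq. (1) can be made concrete for a linear classifier under an ℓ∞ ball, where the inner maximization has a closed form. The following is a minimal sketch (not the paper's code) assuming Python/NumPy, logistic loss, and the standard practice of treating δ as fixed within each outer gradient step:

```python
import numpy as np

def fgsm_perturbation(w, x, y, eps):
    """Closed-form inner maximization for a linear model under an l_inf ball:
    the delta maximizing the logistic loss l(y * w.(x + delta)) is -eps*y*sign(w)."""
    return -eps * y * np.sign(w)

def adversarial_train(X, y, eps=0.2, lr=0.5, steps=500):
    """Outer minimization of Eq. (1): gradient descent on the perturbed examples
    generated at the current parameters (one inner max, one outer step per iter)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        X_adv = X + np.stack([fgsm_perturbation(w, xi, yi, eps)
                              for xi, yi in zip(X, y)])
        z = y * (X_adv @ w)                              # margins on adversarial examples
        # d/dw of mean logistic loss, with l'(z) = -1 / (1 + exp(z))
        grad = ((-1.0 / (1.0 + np.exp(z))) * y) @ X_adv / len(y)
        w -= lr * grad
    return w
```

With ε smaller than the data's robust margin, the trained w separates both the clean and the ℓ∞-perturbed points, matching the minimax objective above.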
On the other hand, for standard training, recent works have extensively explored the implicit bias imposed by gradient descent or its variants for DNNs in different settings. For the simple setting of linear logistic regression on linearly separable data, Soudry et al. (2018); Ji & Telgarsky (2018); Nacson et al. (2019b) derived that the direction of the model parameter converges to that of the max-ℓ2-margin solution while its norm diverges. Shamir (2021) concluded that gradient methods never overfit when training linear predictors over a separable dataset. Viewing the above model as a single-layer network, a natural but more complicated extension is the deep linear network on linearly separable data: Ji & Telgarsky (2019) proved that gradient descent aligns the weight matrices across layers and that the product of the weight matrices also converges to the direction of the max-ℓ2-margin solution; Gunasekar et al. (2018) showed the implicit bias of gradient descent on linear convolutional networks. Nacson et al. (2019a); Lyu & Li (2020) further extended the study of the implicit bias of gradient descent for standard training to the case of more general homogeneous non-linear neural networks and proved that the limit point of the optimization path is along the direction of a KKT point of the max-margin problem. Wei et al. (2019) studied the regularization path for homogeneous DNNs and also proved convergence to the max-margin direction in this setting. Banburski et al. (2019) showed that gradient descent induces a dynamics of normalized weights converging to an equilibrium which corresponds to a minimal-norm solution. It is therefore our goal in this paper to theoretically understand the resistance to adversarial examples of adversarially trained DNNs, both linear and non-linear ones, through the lens of the implicit bias imposed by adversarial training.
Due to the inner maximization, adversarial training differs substantially from standard training, and one should be careful when analyzing the perturbed training dynamics. For the adversarial training objective Eq. (1), various approaches have been proposed to solve the inner maximization, such as the fast gradient sign method (FGSM) (Goodfellow et al., 2015) and its stronger version, projected gradient descent (PGD) (Madry et al., 2018). Widely used adversarial training adopts PGD to attack the model, while recent work (Wong et al., 2020) suggested that a weaker adversary can also, surprisingly, yield a model with satisfying robustness. Thus, to conduct a comprehensive study of the implicit bias of adversarial training for DNNs, we will use the ℓ2 fast gradient method (FGM) (Miyato et al., 2016), ℓ∞ FGSM, ℓ2-PGD and ℓ∞-PGD to solve the inner maximization of the adversarial training objective. 1.1 OUR CONTRIBUTION . In this paper, we set out to answer two questions. First, is there any implicit bias imposed by adversarial training for DNNs without explicit regularization? Second, if there exists such an implicit bias, what are the convergence properties of the model parameters along the adversarial training trajectory? To this end, we first investigate adversarial training with ℓ2 adversarial perturbations for deep linear networks on linearly separable data, where the allowed Euclidean distances ‖x′_i − x_i‖_2 from the adversarial examples to their corresponding original examples are less than the max-ℓ2-margin of the original dataset. Despite the simplicity of this setting, the problem is meaningful due to its non-convexity and the introduction, during training, of adversarial examples that heavily depend on the model parameters.
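The two single-step attacks mentioned above differ only in the norm used to shape the gradient step: FGSM takes the coordinate-wise sign (ℓ∞), while ℓ2-FGM normalizes the gradient (ℓ2). A minimal sketch for a linear model with logistic loss, using a numerical input gradient so the attack code is model-agnostic (function names are illustrative):

```python
import numpy as np

def loss(w, x, y):
    """Logistic loss l(y * w.x) = log(1 + exp(-y * w.x))."""
    return np.log1p(np.exp(-y * (w @ x)))

def input_grad(w, x, y, h=1e-6):
    """Central-difference gradient of the loss w.r.t. the input x."""
    g = np.zeros_like(x)
    for j in range(len(x)):
        e = np.zeros_like(x)
        e[j] = h
        g[j] = (loss(w, x + e, y) - loss(w, x - e, y)) / (2 * h)
    return g

def fgsm(w, x, y, eps):
    """l_inf attack: one signed gradient step of radius eps."""
    return eps * np.sign(input_grad(w, x, y))

def fgm_l2(w, x, y, eps):
    """l_2 attack: one normalized gradient step of radius eps."""
    g = input_grad(w, x, y)
    return eps * g / (np.linalg.norm(g) + 1e-12)
```

Both perturbations saturate their respective norm ball and, by construction, move the input in a loss-increasing direction.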
We prove that gradient descent for adversarial training implicitly aligns the weight matrices across layers, and that the direction of the product of the weight matrices also, surprisingly, converges to that of the max-ℓ2-margin solution of the original dataset—similar to standard training for deep linear networks (Ji & Telgarsky, 2019). Our results significantly generalize those in Li et al. (2020) for adversarial training of logistic regression. This simple yet insightful case positively answers our first question but only partially answers the second, because it remains unclear why such a convergence property can improve robustness, considering its similarity to that of standard training. In fact, adversarial training differs from standard training in how it imposes this convergence property on the parameters: the former maximizes the margin of the adversarial examples while the latter maximizes that of the original dataset, and these two optimization problems happen to possess solutions along the same direction. We then move forward to explore a more general situation: adversarial training for homogeneous non-linear DNNs without assuming linear separability of the dataset. We study the limit point of the normalized model parameters along the adversarial training trajectory and show that Theorem 1 (Informal). When the deep neural network is adversarially trained with one of the ℓ2-FGM, FGSM, ℓ2-PGD and ℓ∞-PGD perturbations, the limit point of the normalized model parameters is along the direction of a KKT point of a constrained optimization problem which aims to maximize the margin of the adversarial examples. This indicates that adversarial training implicitly maximizes the margin of the adversarial examples rather than that of the original dataset. Thus Theorem 1 provides another view of the high bias error on clean examples of adversarial training in Yu et al .
(2021), since the distributions of adversarial and clean examples are different. To the best of our knowledge, these results are the first attempt to analyze the implicit bias of adversarial training for DNNs. We believe our results provide a theoretical understanding of the effectiveness of adversarial training for improving robustness against adversarial examples. They could potentially shed light on how to enhance the robustness of adversarially trained models, or even inspire more effective defense mechanisms. Organization. This paper is organized as follows. Section 2 covers notations and settings. Section 3 presents our main results on the implicit bias of adversarial training for DNNs. Section 4 provides numerical experiments to support our claims. We conclude this work in Section 5 and discuss future directions. Some technical proofs are deferred to the supplementary materials.
Algorithm 1 Adversarial Training
Input: training set S = {(x_i, y_i)}_{i=1}^n, adversary A to solve the inner maximization, learning rate η, initialization W_k for k ∈ {1, …, L}
for t = 0 to T − 1 do
    S′(t) = ∅
    for i = 1 to n do
        x′_i(t) = A(x_i, y_i, W(t))
        S′(t) = S′(t) ∪ {(x′_i(t), y_i)}
    end for
    for k = 1 to L do
        W_k(t+1) = W_k(t) − η(t) ∂L̃(S′(t); W) / ∂W_k
    end for
end for
2 PRELIMINARIES . Notations. For any matrix A ∈ R^{m×n}, we denote its i-th row, j-th column entry by A_ij. Let A^T denote the transpose of A. ‖·‖_F represents the Frobenius norm and ‖·‖_p is the ℓ_p norm. The training set is {(x_i, y_i)}_{i=1}^n where x_i ∈ R^d, ‖x_i‖_2 ≤ 1 and y_i ∈ {1, −1}. For a scalar function f : R^d → R, we denote its gradient by ∇f. Furthermore, tr(A) = Σ_i A_ii denotes the trace of the matrix A.
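The margin-maximization view above — adversarial training maximizes the margin of the adversarial examples — can be illustrated in the linear setting of Section 1.1. The following is a minimal sketch (not the paper's experiments): a linear classifier trained with the closed-form ℓ2-FGM perturbation δ_i = −ε y_i w/‖w‖_2 on a separable toy dataset whose max-ℓ2-margin direction is known; the function name and the toy data are illustrative assumptions:

```python
import numpy as np

def adv_train_l2(X, y, eps, lr=0.5, steps=4000):
    """Adversarial training of a linear classifier with the closed-form
    l2-FGM perturbation for logistic loss: delta_i = -eps * y_i * w / ||w||_2,
    so the adversarial margin is y_i * w.x_i - eps * ||w||_2."""
    w = np.zeros(X.shape[1])
    w[0] = 1e-3                                   # break the w = 0 symmetry
    for _ in range(steps):
        nw = np.linalg.norm(w)
        z = y * (X @ w) - eps * nw                # margins of adversarial examples
        s = -1.0 / (1.0 + np.exp(z))              # l'(z) for logistic loss
        # gradient of mean_i l(y_i w.x_i - eps ||w||): dz/dw = y_i x_i - eps w/||w||
        grad = (s * y) @ X / len(y) - s.mean() * eps * w / nw
        w -= lr * grad
    return w
```

On data whose max-ℓ2-margin direction is (1, 1)/√2, training with ε below that margin drives all adversarial margins positive, and the normalized weights drift toward the max-margin direction.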
We study adversarial training for the L-layer positively homogeneous deep neural network f(x; W) = W_L φ_L(W_{L−1} ⋯ φ_3(W_2 φ_2(W_1 x)) ⋯), (2) where W_k is the k-th-layer weight matrix and φ_k is the activation function of the k-th layer. The multi-c-homogeneity of the network is defined by f(x; a_1 W_1, ⋯, a_L W_L) = (∏_{k=1}^L a_k^c) f(x; W) (3) for any positive constants a_k and c ≥ 1, where W = (W_1, ⋯, W_L) is the collection of the parameters of the network. For example, deep ReLU networks are multi-1-homogeneous. For convenience, we also adopt the following notations for a multi-c_k-homogeneous DNN: ρ_k = ‖W_k‖_F, ρ = ρ_1^{c_1} ⋯ ρ_L^{c_L} and f(x; W) = ρ f(x; Ŵ), (4) where Ŵ_k = W_k/‖W_k‖_F with ‖Ŵ_k‖_F = 1 for k ∈ {1, …, L}. We use δ_i(W) to represent the adversarial perturbation of the original example x_i within the perturbation set B(0, ε) = {δ : ‖δ‖_p ≤ ε} around x_i for f(x; W). Furthermore, in this paper we use scale-invariant adversarial perturbations, defined as follows, for adversarial training. Definition 1 (Scale-invariant adversarial perturbation). An adversarial perturbation is said to be a scale-invariant adversarial perturbation for f(x_i; W_1, …, W_L) and loss function ℓ if it satisfies δ_i(a_1 W_1, …, a_L W_L) = δ_i(W_1, …, W_L) (5) for any positive constants a_k. We will show in Section 3.2 that the FGSM, ℓ2-FGM, ℓ2-PGD and ℓ∞-PGD perturbations for homogeneous DNNs are all scale-invariant perturbations, which is important for analyzing the different types of perturbation in a unified manner. The empirical adversarial training loss with the perturbation δ_i(W) is given by L̃(W) = (1/n) Σ_{i=1}^n ℓ̃(y_i, x_i; W) = (1/n) Σ_{i=1}^n ℓ(y_i f(x_i + δ_i(W); W)). (6) For ease of notation, we denote ℓ̃_i(W) = ℓ̃(y_i, x_i; W). The loss function ℓ is continuously differentiable and satisfies Assumption 1 (Loss function).
ℓ > 0, ℓ′ < 0, lim_{x→∞} ℓ(x) = 0 and lim_{x→−∞} ℓ(x) = ∞. Many widely used loss functions satisfy the above assumption, such as ℓ(x) = e^{−x} and the logistic loss ℓ(x) = ln(1 + e^{−x}). Furthermore, we make the following common assumptions about the smoothness of f(x; W) and the adversarial perturbation δ(W): Assumption 2 (Smoothness). With respect to W, y f(x; W) is locally Lipschitz for any fixed x; the y_i f(x_i; W) further have locally Lipschitz gradients, and the δ_i(W) are locally Lipschitz for all training examples x_i. Remark. Our results can also be generalized to non-smooth homogeneous neural networks straightforwardly (Appendix B.4). We assume Lipschitzness of the perturbations because we focus on popular perturbations such as the ℓ2-FGM and PGD perturbations, which have explicit forms and depend on gradients of the network, for which Lipschitzness assumptions are quite common. | *Implicit Bias of Adversarial Training for Deep Neural Networks* explores how minimizing the exponential loss (i.e., $l(x) = e^{-x}$) of a homogeneous neural network (i.e., a neural network such that $$ f = a_L W_L \circ \sigma_L \circ \cdots \circ \sigma_2 \circ a_1 W_1 = \prod^L_{k=1} a_k^c (W_L \circ \sigma_L \circ \cdots \circ \sigma_2 \circ W_1) $$ for activation functions $\sigma_L, \ldots, \sigma_2$, weights $W_L, \ldots, W_1$, $a_L, \ldots, a_1 > 0$, and $c \geq 1$) on samples with loss-maximizing perturbations influences the optimized neural network's weights.
Specifically, this paper proves that, for an exponential loss and a multi-c-homogeneous neural network, the limit point for $\frac{W}{\lVert W \rVert}$ with respect to the gradient flow $$ \frac{dW}{dt} = - \left( \frac{\partial\tilde{\mathcal{L}}}{\partial W} \right)^T $$ of the adversarial training objective $$ \tilde{\mathcal{L}} = \frac{1}{n}\sum^n_{i=1}\ell(x_i + \delta_i(W), y_i) $$ under $\ell_2$-FGM, FGSM, $\ell_2$-PGD, and $\ell_\infty$-PGD is along the direction of a Karush-Kuhn-Tucker (KKT) point of the constrained minimization problem $$ \min_{W_1, \ldots, W_L} \frac{1}{2} \lVert W \rVert^2_{\ell_2} \text{ s.t. } \tilde{\gamma}_i \geq 1 $$ where $\tilde{\gamma}_i = y_i f(x_i + \delta_i(W))$ and $W = (W_1, \ldots, W_L)$. This theorem demonstrates that---for a class of neural networks and adversarial perturbations---adversarial training has an implicit bias that can be expressed in closed form. This result provides an important contribution to understanding how adversarial training improves adversarial robustness. | SP:688b96e1fced6cd3dbddb4f2a4921717fdd93108 |
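The multi-c-homogeneity (Eq. (3)) and the scale invariance of the FGSM perturbation direction (Definition 1) discussed above can be checked numerically for a small two-layer ReLU network. This is an illustrative sketch, not the paper's code; the function names are assumptions:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def f(x, W1, W2):
    """Two-layer ReLU network with scalar output; multi-1-homogeneous,
    i.e. f(x; a1 W1, a2 W2) = a1 * a2 * f(x; W1, W2) for a1, a2 > 0."""
    return W2 @ relu(W1 @ x)

def fgsm_direction(x, y, W1, W2):
    """Sign of grad_x l(y f(x)) for logistic loss (l'(z) = -1/(1+exp(z)));
    the FGSM perturbation is eps times this sign vector."""
    grad_f = W1.T @ ((W1 @ x > 0).astype(float) * W2)   # grad_x f, scalar output
    lprime = -1.0 / (1.0 + np.exp(y * f(x, W1, W2)))
    return np.sign(lprime * y * grad_f)
```

Rescaling the layers by positive constants multiplies both f and its input gradient by a positive factor and keeps ℓ′ negative, so the sign vector (and hence the FGSM perturbation) is unchanged, matching Eq. (5).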
Implicit Bias of Adversarial Training for Deep Neural Networks | 1 INTRODUCTION . Deep neural networks (DNNs) have achieved great success in many fields, such as computer vision (Krizhevsky et al., 2012; He et al., 2015) and natural language processing (Collobert & Weston, 2008), among other applications. These breakthroughs have also raised the importance of research on the robustness and security of DNNs, among which the study of adversarial examples is especially prevalent. Adversarial examples are obtained by adding carefully crafted imperceptible perturbations to the original examples, which can sharply change the predictions of DNNs with high confidence (Szegedy et al., 2014; Nguyen et al., 2015). Such vulnerability of DNNs raises security concerns about deploying them in security-critical systems, including vision for autonomous cars and face recognition. It is therefore crucial to develop defense mechanisms against these adversarial examples for deep learning. To this end, many defense strategies have been proposed to make DNNs resistant to adversarial examples, such as adding a randomization layer before the input to the classifier (Xie et al., 2018), input transformations (Guo et al., 2018), adversarial training (Madry et al., 2018), etc. However, Athalye et al. (2018) pointed out that most of these defense techniques are ineffective and give a false sense of security due to obfuscated gradients, with the exception of adversarial training—a method which has not been comprehensively broken yet. Given a C-class training dataset {(x_i, y_i)}_{i=1}^n with natural examples x_i ∈ R^d and corresponding labels y_i ∈ {1, ...
, C}, adversarial training is formulated as solving the following minimax optimization problem: min_W L̃(W) = min_W (1/n) Σ_{i=1}^n max_{δ_i ∈ B(0, ε)} ℓ(x_i + δ_i(W), y_i; W), (1) where f is the DNN function, ℓ is the loss function, and δ_i(W) is the adversarial perturbation generated by some adversary A(x_i, y_i; W), typically depending on the model parameters W, within the set B(0, ε) = {δ : ‖δ‖_p ≤ ε} around the natural example x_i. Commonly, adversarial training conducts a two-player game at each iteration: the inner maximization attacks the model and finds the perturbation of the original example that maximizes the classification loss ℓ; the outer minimization, on the other hand, updates the model parameters W with gradient descent such that the loss ℓ is minimized on the adversarial examples generated by the inner maximization (see Algorithm 1 for details). To have a better theoretical understanding of the fact that adversarial training empirically improves the robustness of DNNs against adversarial examples, Zhang et al. (2019) decomposed the prediction error for adversarial examples and identified a trade-off between robustness and accuracy, while Li et al. (2020) studied the inductive bias of gradient-descent-based adversarial training for logistic regression on linearly separable data. Faghri et al. (2021) studied the adversarial robustness of linear neural networks by exploring the optimization bias of different methods. Yu et al. (2021) studied adversarial training through the bias-variance decomposition and showed that its generalization error on clean examples mainly comes from the bias. However, even for the simplest DNN—the deep linear network—we notice that there exists no work that theoretically understands the robustness achieved by adversarial training by exploring its implicit bias.
On the other hand, for standard training, recent works have extensively explored the implicit bias imposed by gradient descent or its variants for DNNs in different settings. For the simple setting of linear logistic regression on linearly separable data, Soudry et al. (2018); Ji & Telgarsky (2018); Nacson et al. (2019b) derived that the direction of the model parameter converges to that of the max-ℓ2-margin solution while its norm diverges. Shamir (2021) concluded that gradient methods never overfit when training linear predictors over a separable dataset. Viewing the above model as a single-layer network, a natural but more complicated extension is the deep linear network on linearly separable data: Ji & Telgarsky (2019) proved that gradient descent aligns the weight matrices across layers and that the product of the weight matrices also converges to the direction of the max-ℓ2-margin solution; Gunasekar et al. (2018) showed the implicit bias of gradient descent on linear convolutional networks. Nacson et al. (2019a); Lyu & Li (2020) further extended the study of the implicit bias of gradient descent for standard training to the case of more general homogeneous non-linear neural networks and proved that the limit point of the optimization path is along the direction of a KKT point of the max-margin problem. Wei et al. (2019) studied the regularization path for homogeneous DNNs and also proved convergence to the max-margin direction in this setting. Banburski et al. (2019) showed that gradient descent induces a dynamics of normalized weights converging to an equilibrium which corresponds to a minimal-norm solution. It is therefore our goal in this paper to theoretically understand the resistance to adversarial examples of adversarially trained DNNs, both linear and non-linear ones, through the lens of the implicit bias imposed by adversarial training.
Due to the inner maximization, adversarial training differs substantially from standard training, and one should be careful when analyzing the perturbed training dynamics. For the adversarial training objective Eq. (1), various approaches have been proposed to solve the inner maximization, such as the fast gradient sign method (FGSM) (Goodfellow et al., 2015) and its stronger version, projected gradient descent (PGD) (Madry et al., 2018). Widely used adversarial training adopts PGD to attack the model, while recent work (Wong et al., 2020) suggested that a weaker adversary can also, surprisingly, yield a model with satisfying robustness. Thus, to conduct a comprehensive study of the implicit bias of adversarial training for DNNs, we will use the ℓ2 fast gradient method (FGM) (Miyato et al., 2016), ℓ∞ FGSM, ℓ2-PGD and ℓ∞-PGD to solve the inner maximization of the adversarial training objective. 1.1 OUR CONTRIBUTION . In this paper, we set out to answer two questions. First, is there any implicit bias imposed by adversarial training for DNNs without explicit regularization? Second, if there exists such an implicit bias, what are the convergence properties of the model parameters along the adversarial training trajectory? To this end, we first investigate adversarial training with ℓ2 adversarial perturbations for deep linear networks on linearly separable data, where the allowed Euclidean distances ‖x′_i − x_i‖_2 from the adversarial examples to their corresponding original examples are less than the max-ℓ2-margin of the original dataset. Despite the simplicity of this setting, the problem is meaningful due to its non-convexity and the introduction, during training, of adversarial examples that heavily depend on the model parameters.
We prove that gradient descent for adversarial training implicitly aligns the weight matrices across layers, and that the direction of the product of the weight matrices also, surprisingly, converges to that of the max-ℓ2-margin solution of the original dataset—similar to standard training for deep linear networks (Ji & Telgarsky, 2019). Our results significantly generalize those in Li et al. (2020) for adversarial training of logistic regression. This simple yet insightful case positively answers our first question but only partially answers the second, because it remains unclear why such a convergence property can improve robustness, considering its similarity to that of standard training. In fact, adversarial training differs from standard training in how it imposes this convergence property on the parameters: the former maximizes the margin of the adversarial examples while the latter maximizes that of the original dataset, and these two optimization problems happen to possess solutions along the same direction. We then move forward to explore a more general situation: adversarial training for homogeneous non-linear DNNs without assuming linear separability of the dataset. We study the limit point of the normalized model parameters along the adversarial training trajectory and show that Theorem 1 (Informal). When the deep neural network is adversarially trained with one of the ℓ2-FGM, FGSM, ℓ2-PGD and ℓ∞-PGD perturbations, the limit point of the normalized model parameters is along the direction of a KKT point of a constrained optimization problem which aims to maximize the margin of the adversarial examples. This indicates that adversarial training implicitly maximizes the margin of the adversarial examples rather than that of the original dataset. Thus Theorem 1 provides another view of the high bias error on clean examples of adversarial training in Yu et al .
(2021), since the distributions of adversarial and clean examples are different. To the best of our knowledge, these results are the first attempt to analyze the implicit bias of adversarial training for DNNs. We believe our results provide a theoretical understanding of the effectiveness of adversarial training for improving robustness against adversarial examples. They could potentially shed light on how to enhance the robustness of adversarially trained models, or even inspire more effective defense mechanisms. Organization. This paper is organized as follows. Section 2 covers notations and settings. Section 3 presents our main results on the implicit bias of adversarial training for DNNs. Section 4 provides numerical experiments to support our claims. We conclude this work in Section 5 and discuss future directions. Some technical proofs are deferred to the supplementary materials.
Algorithm 1 Adversarial Training
Input: training set S = {(x_i, y_i)}_{i=1}^n, adversary A to solve the inner maximization, learning rate η, initialization W_k for k ∈ {1, …, L}
for t = 0 to T − 1 do
    S′(t) = ∅
    for i = 1 to n do
        x′_i(t) = A(x_i, y_i, W(t))
        S′(t) = S′(t) ∪ {(x′_i(t), y_i)}
    end for
    for k = 1 to L do
        W_k(t+1) = W_k(t) − η(t) ∂L̃(S′(t); W) / ∂W_k
    end for
end for
2 PRELIMINARIES . Notations. For any matrix A ∈ R^{m×n}, we denote its i-th row, j-th column entry by A_ij. Let A^T denote the transpose of A. ‖·‖_F represents the Frobenius norm and ‖·‖_p is the ℓ_p norm. The training set is {(x_i, y_i)}_{i=1}^n where x_i ∈ R^d, ‖x_i‖_2 ≤ 1 and y_i ∈ {1, −1}. For a scalar function f : R^d → R, we denote its gradient by ∇f. Furthermore, tr(A) = Σ_i A_ii denotes the trace of the matrix A.
We study adversarial training for the L-layer positively homogeneous deep neural network f(x; W) = W_L φ_L(W_{L−1} ⋯ φ_3(W_2 φ_2(W_1 x)) ⋯), (2) where W_k is the k-th-layer weight matrix and φ_k is the activation function of the k-th layer. The multi-c-homogeneity of the network is defined by f(x; a_1 W_1, ⋯, a_L W_L) = (∏_{k=1}^L a_k^c) f(x; W) (3) for any positive constants a_k and c ≥ 1, where W = (W_1, ⋯, W_L) is the collection of the parameters of the network. For example, deep ReLU networks are multi-1-homogeneous. For convenience, we also adopt the following notations for a multi-c_k-homogeneous DNN: ρ_k = ‖W_k‖_F, ρ = ρ_1^{c_1} ⋯ ρ_L^{c_L} and f(x; W) = ρ f(x; Ŵ), (4) where Ŵ_k = W_k/‖W_k‖_F with ‖Ŵ_k‖_F = 1 for k ∈ {1, …, L}. We use δ_i(W) to represent the adversarial perturbation of the original example x_i within the perturbation set B(0, ε) = {δ : ‖δ‖_p ≤ ε} around x_i for f(x; W). Furthermore, in this paper we use scale-invariant adversarial perturbations, defined as follows, for adversarial training. Definition 1 (Scale-invariant adversarial perturbation). An adversarial perturbation is said to be a scale-invariant adversarial perturbation for f(x_i; W_1, …, W_L) and loss function ℓ if it satisfies δ_i(a_1 W_1, …, a_L W_L) = δ_i(W_1, …, W_L) (5) for any positive constants a_k. We will show in Section 3.2 that the FGSM, ℓ2-FGM, ℓ2-PGD and ℓ∞-PGD perturbations for homogeneous DNNs are all scale-invariant perturbations, which is important for analyzing the different types of perturbation in a unified manner. The empirical adversarial training loss with the perturbation δ_i(W) is given by L̃(W) = (1/n) Σ_{i=1}^n ℓ̃(y_i, x_i; W) = (1/n) Σ_{i=1}^n ℓ(y_i f(x_i + δ_i(W); W)). (6) For ease of notation, we denote ℓ̃_i(W) = ℓ̃(y_i, x_i; W). The loss function ℓ is continuously differentiable and satisfies Assumption 1 (Loss function).
ℓ > 0, ℓ′ < 0, lim_{x→∞} ℓ(x) = 0 and lim_{x→−∞} ℓ(x) = ∞. Many widely used loss functions satisfy the above assumption, such as ℓ(x) = e^{−x} and the logistic loss ℓ(x) = ln(1 + e^{−x}). Furthermore, we make the following common assumptions about the smoothness of f(x; W) and the adversarial perturbation δ(W): Assumption 2 (Smoothness). With respect to W, y f(x; W) is locally Lipschitz for any fixed x; the y_i f(x_i; W) further have locally Lipschitz gradients, and the δ_i(W) are locally Lipschitz for all training examples x_i. Remark. Our results can also be generalized to non-smooth homogeneous neural networks straightforwardly (Appendix B.4). We assume Lipschitzness of the perturbations because we focus on popular perturbations such as the ℓ2-FGM and PGD perturbations, which have explicit forms and depend on gradients of the network, for which Lipschitzness assumptions are quite common. | The paper studies the adversarial training problem under deep linear network classifiers and standard L_2 and L_\infty perturbations. The paper's main result suggests that in the linearly separable case the adversarially trained model via gradient descent will asymptotically converge to the max-margin solution. Some extensions of this result to homogeneous neural networks with exponential loss functions have been provided. The paper also performs some preliminary numerical experiments to support the theoretical results. | SP:688b96e1fced6cd3dbddb4f2a4921717fdd93108 |
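Assumption 1 above can be sanity-checked numerically for the two losses the text mentions (exponential and logistic). A minimal sketch, with finite probe points standing in for the limits (the helper name and thresholds are illustrative assumptions):

```python
import numpy as np

# The two example losses from the text and their derivatives.
losses = {
    "exponential": (lambda x: np.exp(-x),           lambda x: -np.exp(-x)),
    "logistic":    (lambda x: np.log1p(np.exp(-x)), lambda x: -1.0 / (1.0 + np.exp(x))),
}

def satisfies_assumption_1(l, lp, grid=np.linspace(-30.0, 30.0, 601)):
    """Check l > 0 and l' < 0 on a grid, l(x) -> 0 as x -> +inf, and l(x)
    growing without bound as x -> -inf (probed at a few large |x| values)."""
    return (np.all(l(grid) > 0)
            and np.all(lp(grid) < 0)
            and l(100.0) < 1e-8                     # l -> 0 at +inf
            and l(-200.0) > l(-100.0) > 50.0)       # l grows toward -inf
```

Note that `np.log1p` keeps the logistic loss positive even for large positive inputs, where a naive `np.log(1 + np.exp(-x))` would underflow to exactly zero.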
Implicit Bias of Adversarial Training for Deep Neural Networks | 1 INTRODUCTION . Deep neural networks (DNNs) have achieved great success in many fields, such as computer vision (Krizhevsky et al., 2012; He et al., 2015) and natural language processing (Collobert & Weston, 2008), among other applications. These breakthroughs have also raised the importance of research on the robustness and security of DNNs, among which the study of adversarial examples is especially prevalent. Adversarial examples are obtained by adding carefully crafted imperceptible perturbations to the original examples, which can sharply change the predictions of DNNs with high confidence (Szegedy et al., 2014; Nguyen et al., 2015). Such vulnerability of DNNs raises security concerns about deploying them in security-critical systems, including vision for autonomous cars and face recognition. It is therefore crucial to develop defense mechanisms against these adversarial examples for deep learning. To this end, many defense strategies have been proposed to make DNNs resistant to adversarial examples, such as adding a randomization layer before the input to the classifier (Xie et al., 2018), input transformations (Guo et al., 2018), adversarial training (Madry et al., 2018), etc. However, Athalye et al. (2018) pointed out that most of these defense techniques are ineffective and give a false sense of security due to obfuscated gradients, with the exception of adversarial training—a method which has not been comprehensively broken yet. Given a C-class training dataset {(x_i, y_i)}_{i=1}^n with natural examples x_i ∈ R^d and corresponding labels y_i ∈ {1, ...
, C}, adversarial training is formulated as solving the following minimax optimization problem min_W L̃(W) = min_W (1/n) ∑_{i=1}^n max_{δi ∈ B(0, ε)} ℓ(xi + δi(W), yi; W), (1) where f is the DNN function, ℓ is the loss function and δi(W) is the adversarial perturbation generated by some adversary A(xi, yi; W), typically depending on the model parameters W, within the set B(0, ε) = {δ : ‖δ‖p ≤ ε} around the natural example xi. Commonly, adversarial training conducts a two-player game at each iteration: the inner maximization is to attack the model and find the corresponding perturbation of the original example that maximizes the classification loss ℓ; the outer minimization, on the other hand, is to update the model parameters W with the gradient descent method such that the loss ℓ is minimized on adversarial examples generated by the inner maximization (see Algorithm 1 for details). To have a better theoretical understanding of the fact that adversarial training empirically improves the robustness of DNNs against adversarial examples, Zhang et al. (2019) decomposed the prediction error for adversarial examples and identified a trade-off between robustness and accuracy, while Li et al. (2020) studied the inductive bias of gradient descent based adversarial training for logistic regression on linearly separable data. Faghri et al. (2021) studied adversarial robustness of linear neural networks by exploring the optimization bias of different methods. Yu et al. (2021) studied adversarial training through the bias-variance decomposition and showed that its generalization error on clean examples mainly comes from the bias. However, even for the simplest DNN, the deep linear network, we notice that there exists no work that theoretically understands the robustness achieved by adversarial training through exploring its implicit bias.
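To make the inner maximization in Eq. (1) concrete, consider the special case of a linear model f(x; w) = w·x with a monotonically decreasing loss: there the worst-case ℓ2 perturbation has a closed form. The following numpy sketch is our own illustration (not code from the paper) and checks that closed form numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)                       # linear model f(x; w) = w @ x
x, y, eps = rng.normal(size=5), 1.0, 0.1

# For a decreasing loss, maximizing l(y * w @ (x + delta)) over ||delta||_2 <= eps
# is the same as minimizing the margin y * w @ (x + delta), solved in closed form by:
delta_star = -eps * y * w / np.linalg.norm(w)

margin = lambda d: y * w @ (x + d)
# Sanity check: no random feasible perturbation achieves a smaller margin.
for _ in range(1000):
    d = rng.normal(size=5)
    d = eps * d / np.linalg.norm(d)          # a point on the eps-sphere
    assert margin(delta_star) <= margin(d) + 1e-12
```

For a deep network no such closed form exists in general, which is why the gradient-based adversaries discussed below are used instead.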
On the other hand, for standard training, recent works have extensively explored the implicit bias imposed by gradient descent or its variants for DNNs in different settings. For the simple setting of linear logistic regression on linearly separable data, Soudry et al. (2018); Ji & Telgarsky (2018); Nacson et al. (2019b) showed that the direction of the model parameter converges to that of the max-ℓ2-margin solution with divergent norm. Shamir (2021) concluded that gradient methods never overfit when training linear predictors over separable datasets. Viewing the above model as a single-layer network, a natural but more complicated extension is the deep linear network on linearly separable data: Ji & Telgarsky (2019) proved that gradient descent aligns the weight matrices across layers and that the product of weight matrices also converges to the direction of the max-ℓ2-margin solution; Gunasekar et al. (2018) showed the implicit bias of gradient descent on linear convolutional networks. Nacson et al. (2019a); Lyu & Li (2020) further extended the study of the implicit bias of gradient descent for standard training to the case of more general homogeneous non-linear neural networks and proved that the limit point of the optimization path is along the direction of a KKT point of the max-margin problem. Wei et al. (2019) studied the regularization path for homogeneous DNNs and also proved convergence to the max-margin direction in this setting. Banburski et al. (2019) showed that gradient descent induces a dynamics of normalized weights converging to an equilibrium which corresponds to a minimal norm solution. It is therefore our goal in this paper to theoretically understand the resistance to adversarial examples of adversarially trained DNNs, linear and non-linear ones, through the lens of the implicit bias imposed by adversarial training.
Due to the inner maximization, adversarial training differs a lot from standard training, and one should be careful when analyzing the perturbed training dynamics. For the adversarial training objective Eq. (1), various approaches have been proposed to solve the inner maximization, such as the fast gradient sign method (FGSM (Goodfellow et al., 2015)) and its stronger version projected gradient descent (PGD (Madry et al., 2018)). Widely used adversarial training adopts PGD to attack the model, while recent work (Wong et al., 2020) suggested that even a weaker adversary can surprisingly yield a model with satisfactory robustness. Thus, to conduct a comprehensive study of the implicit bias of adversarial training for DNNs, we use the ℓ2 fast gradient method (FGM (Miyato et al., 2016)), ℓ∞ FGSM, ℓ2-PGD and ℓ∞-PGD to solve the inner maximization of the adversarial training objective. 1.1 OUR CONTRIBUTION. In this paper, we are devoted to answering two questions. First, is there any implicit bias imposed by adversarial training for DNNs without explicit regularization? Second, if there exists such an implicit bias, what are the convergence properties of the model parameters along the adversarial training trajectory? To this end, we first investigate adversarial training with ℓ2 adversarial perturbations for deep linear networks on linearly separable data, where the allowed Euclidean distances from the adversarial examples to their corresponding original examples ‖x′i − xi‖2 are less than the max-ℓ2-margin of the original dataset. Despite the simplicity of this setting, the problem is meaningful due to its non-convexity and the introduction, during the training process, of adversarial examples that heavily depend on the model parameters.
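The one-step adversaries just listed share a simple structure. For a linear model with logistic loss the input gradient is available in closed form, so ℓ2-FGM and ℓ∞-FGSM can be sketched as below (our illustration; the helper names are ours, and PGD would iterate such steps with a projection back onto the ball):

```python
import numpy as np

def loss_grad_x(w, x, y):
    """Gradient of the logistic loss ln(1 + exp(-y * w @ x)) with respect to x."""
    m = y * (w @ x)                          # margin
    return -y * w / (1.0 + np.exp(m))

def fgm_l2(w, x, y, eps):
    g = loss_grad_x(w, x, y)
    return eps * g / np.linalg.norm(g)       # one step along the normalized gradient

def fgsm_linf(w, x, y, eps):
    g = loss_grad_x(w, x, y)
    return eps * np.sign(g)                  # one step along the gradient sign

rng = np.random.default_rng(1)
w, x, y, eps = rng.normal(size=4), rng.normal(size=4), -1.0, 0.05
d2, dinf = fgm_l2(w, x, y, eps), fgsm_linf(w, x, y, eps)
# Both perturbations shrink the margin y * w @ x, i.e. they increase the loss.
```

For deep networks the same formulas apply with the input gradient computed by backpropagation.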
We prove that gradient descent for adversarial training implicitly aligns the weight matrices across layers, and the direction of the product of weight matrices also, surprisingly, converges to that of the max-ℓ2-margin solution of the original dataset, similar to standard training for deep linear networks (Ji & Telgarsky, 2019). Our results significantly generalize those in Li et al. (2020) for adversarial training of logistic regression. This simple yet insightful case positively answers our first question but only partially answers the second one, because it remains unclear why such a convergence property can improve robustness given its similarity to that of standard training. In fact, adversarial training differs from standard training in the way it imposes this convergence property on the parameters: the former maximizes the margin of the adversarial examples while the latter maximizes that of the original dataset, and these two optimization problems happen to possess solutions along the same direction. We then move forward to explore a more general situation: adversarial training for homogeneous non-linear DNNs without linear separability of the dataset. We study the limit point of the normalized model parameters along the adversarial training trajectory and show that Theorem 1 (Informal). When the deep neural network is adversarially trained with one of the ℓ2-FGM, FGSM, ℓ2-PGD and ℓ∞-PGD perturbations, the limit point of the normalized model parameters is along the direction of a KKT point of a constrained optimization problem which aims to maximize the margin of the adversarial examples. This indicates that adversarial training implicitly maximizes the margin of the adversarial examples rather than that of the original dataset. Thus Theorem 1 provides another view of the high bias error on clean examples of adversarial training in Yu et al.
(2021), since the distributions of adversarial and clean examples are different. To the best of our knowledge, these results are the first attempt to analyze the implicit bias of adversarial training for DNNs. We believe our results provide a theoretical understanding of the effectiveness of adversarial training for improving robustness against adversarial examples. They could potentially shed light on how to enhance the robustness of adversarially trained models or even inspire more effective defense mechanisms. Organization. This paper is organized as follows. Section 2 is about notations and settings. Section 3 presents our main results on the implicit bias of adversarial training for DNNs. Section 4 provides numerical experiments to support our claims. We conclude this work in Section 5 and discuss future directions. Some technical proofs are deferred to the supplementary materials. Algorithm 1 Adversarial Training. Input: training set S = {(xi, yi)}_{i=1}^n, adversary A to solve the inner maximization, learning rate η, initialization Wk for k ∈ {1, ..., L}. For t = 0 to T−1: set S′(t) = ∅; for i = 1 to n: x′i(t) = A(xi, yi, W(t)) and S′(t) = S′(t) ∪ {(x′i(t), yi)}; then for k = 1 to L: Wk(t+1) = Wk(t) − η(t) ∂L̃(S′(t); W)/∂Wk. 2 PRELIMINARIES. Notations. For any matrix A ∈ R^{m×n}, we denote its i-th row, j-th column entry by Aij. Let A^T denote the transpose of A. ‖·‖F represents the Frobenius norm and ‖·‖p is the ℓp norm. The training set is {(xi, yi)}_{i=1}^n where xi ∈ R^d, ‖xi‖2 ≤ 1 and yi ∈ {1, −1}. For a scalar function f : R^d → R, we denote its gradient by ∇f. Furthermore, tr(A) = ∑_i Aii denotes the trace of the matrix A.
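Algorithm 1 can be sketched concretely for logistic regression with an ℓ2 adversary. The toy numpy implementation below is our own (the data, step size, and iteration count are arbitrary choices, not the paper's); it alternates the attack step and the descent step and ends with positive margins on the adversarial examples:

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy linearly separable data: labels follow the first coordinate, margin >= 1.
X = rng.normal(size=(40, 3))
Y = np.sign(X[:, 0])
X[:, 0] += Y                                   # push the two classes apart
eps, eta, w = 0.1, 0.5, np.zeros(3)

def adv_example(w, x, y, eps):
    """l2 adversary for a linear model: worst-case direction is -y * w / ||w||."""
    n = np.linalg.norm(w)
    return x if n == 0 else x - eps * y * w / n

for t in range(500):                           # Algorithm 1: attack, then descend
    Xadv = np.array([adv_example(w, xi, yi, eps) for xi, yi in zip(X, Y)])
    m = Y * (Xadv @ w)                         # margins on adversarial examples
    coef = -Y / (1.0 + np.exp(np.minimum(m, 50.0)))  # logistic loss derivative
    w -= eta * (coef @ Xadv) / len(Y)

adv_margins = Y * (np.array([adv_example(w, xi, yi, eps)
                             for xi, yi in zip(X, Y)]) @ w)
```

Because the data is separable with margin exceeding eps, the adversarial loss can be driven to zero and all adversarial margins become positive, matching the setting studied in Section 3.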
We study adversarial training for the L-layer positively homogeneous deep neural network f(x; W) = WL ϕL(WL−1 ··· ϕ3(W2 ϕ2(W1 x)) ···), (2) where Wk is the k-th layer weight matrix and ϕk is the activation function of the k-th layer. The multi-c-homogeneity of the network is defined by f(x; a1W1, ···, aLWL) = (∏_{k=1}^L a_k^c) f(x; W) (3) for any positive constants ak's and c ≥ 1, where W = (W1, ···, WL) is the collection of the parameters of the network. For example, deep ReLU networks are multi-1-homogeneous. For convenience, we also adopt the following notations for a multi-c_k-homogeneous DNN: ρk = ‖Wk‖F, ρ = ρ_1^{c_1} ··· ρ_L^{c_L} and f(x; W) = ρ f(x; Ŵ), (4) where Ŵk = Wk/‖Wk‖F with ‖Ŵk‖F = 1 for k ∈ {1, ..., L}. We use δi(W) to represent the adversarial perturbation of the original example xi within the perturbation set B(0, ε) = {δ : ‖δ‖p ≤ ε} around the original example xi for f(x; W). Furthermore, we use the scale invariant adversarial perturbations defined as follows for adversarial training in this paper. Definition 1 (Scale invariant adversarial perturbation). An adversarial perturbation is said to be a scale invariant adversarial perturbation for f(xi; W1, ..., WL) and loss function ℓ if it satisfies δi(a1W1, ..., aLWL) = δi(W1, ..., WL) (5) for any positive constants ak's. We will show in Section 3.2 that the FGSM, ℓ2-FGM, ℓ2-PGD and ℓ∞-PGD perturbations for homogeneous DNNs are all scale invariant perturbations, which is important for analyzing the different types of perturbation in a unified manner. The empirical adversarial training loss with the perturbation δi(W) is given by L̃(W) = (1/n) ∑_{i=1}^n ℓ̃(yi, xi; W) = (1/n) ∑_{i=1}^n ℓ(yi f(xi + δi(W); W)). (6) For ease of notation, we denote ℓ̃i(W) = ℓ̃(yi, xi; W). The loss function ℓ is continuously differentiable and satisfies Assumption 1 (Loss function).
ℓ > 0, ℓ′ < 0, lim_{x→∞} ℓ(x) = 0 and lim_{x→−∞} ℓ(x) = ∞. Many widely used loss functions satisfy the above assumption, such as the exponential loss ℓ(x) = e^{−x} and the logistic loss ℓ(x) = ln(1 + e^{−x}). Furthermore, we make the following common assumptions about the smoothness of f(x; W) and the adversarial perturbation δ(W): Assumption 2 (Smoothness). With respect to W, yf(x; W) is locally Lipschitz for any fixed x; the yif(xi; W) further have locally Lipschitz gradients, and the δi(W) are locally Lipschitz for all training examples xi. Remark. Our results can also be generalized straightforwardly to non-smooth homogeneous neural networks (Appendix B.4). We assume Lipschitzness of the perturbations because we focus on popular perturbations such as ℓ2-FGM and PGD, which have explicit forms and depend on gradients of the network, for which Lipschitzness assumptions are quite common. | This paper aims to understand the training results of adversarial training, and proves that under certain conditions, adversarial training results maximize the margin for the adversarial training samples. Similar results have been observed in the clean training of DNNs; this paper's contribution is to extend them to the adversarial training setting. The paper seems technically sound (although I don't have the time to go over all the appendix). The results, to be honest, are not surprising, given previous works on standard training. But I believe the rigorous justification presented in the paper is of importance. | SP:688b96e1fced6cd3dbddb4f2a4921717fdd93108
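The scale invariance of Definition 1 above is easy to check numerically in the single-layer (hence 1-homogeneous) case, where positive weight rescaling only rescales the input gradient by a positive factor and so cannot change the FGSM sign pattern. A small sketch under that assumption (our illustration, not code from the paper):

```python
import numpy as np

def fgsm(w, x, y, eps):
    """FGSM for a linear model: eps times the sign of the input-gradient of the loss."""
    m = y * (w @ x)
    g = -y * w / (1.0 + np.exp(m))           # gradient of the logistic loss w.r.t. x
    return eps * np.sign(g)

rng = np.random.default_rng(3)
w, x, y, eps = rng.normal(size=6), rng.normal(size=6), 1.0, 0.03

# Definition 1: rescaling the weights by any a > 0 rescales the gradient by a
# positive factor, so the sign pattern, and hence the perturbation, is unchanged.
for a in (0.1, 1.0, 7.5):
    assert np.allclose(fgsm(a * w, x, y, eps), fgsm(w, x, y, eps))
```

The multi-layer statement in Section 3.2 is the analogous fact with one scaling factor per layer.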
Distribution Compression in Near-Linear Time | √n points with Õ(1/√n) distributional discrepancy to P. Unfortunately, these same algorithms suffer from quadratic or super-quadratic runtime in the sample size n. To address this deficiency, we introduce a simple meta-procedure, Compress++, for speeding up any input thinning algorithm while suffering at most a factor of four in error. When combined with the quadratic-time kernel halving and kernel thinning algorithms of Dwivedi and Mackey (2021), Compress++ delivers √n points with O(√(log n)/n) integration error and better-than-Monte-Carlo maximum mean discrepancy in O(n log² n) time and O(√n log²(n)) space. Moreover, Compress++ enjoys the same near-linear runtime given any quadratic-time input and reduces the runtime of super-quadratic algorithms by a square-root factor. In our benchmarks with high-dimensional Monte Carlo samples and long-running Markov chains targeting challenging differential equation posteriors, Compress++ matches or nearly matches the accuracy of its input algorithm in orders of magnitude less time. 1 INTRODUCTION. Distribution compression, constructing a concise summary of a probability distribution, is at the heart of many learning and inference tasks. For example, in Monte Carlo integration and Bayesian inference, n representative points are sampled i.i.d. or from a Markov chain to approximate expectations and quantify uncertainty under an intractable (posterior) distribution (Robert & Casella, 1999). However, these standard sampling strategies represent a bottleneck in computationally demanding settings due to their slow root-n Monte Carlo error rate. For instance, the Monte Carlo estimate Pnf ≜ (1/n) ∑_{i=1}^n f(xi) of an unknown expectation Pf ≜ E_{X∼P}[f(X)] based on n i.i.d. points has Θ(n^{−1/2}) integration error |Pnf − Pf|, requiring n = 10000 points for 1% relative error and n = 10^6 points for 0.1% error.
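The root-n rate just described can be observed directly. In this small numpy experiment (our illustration; the trial counts are arbitrary), quadrupling the sample size roughly halves the mean absolute integration error:

```python
import numpy as np

rng = np.random.default_rng(4)

def mc_error(n, trials=200):
    """Mean absolute error of the Monte Carlo estimate of E[X] for X ~ N(0, 1)."""
    return np.mean([abs(rng.normal(size=n).mean()) for _ in range(trials)])

e100, e400 = mc_error(100), mc_error(400)
ratio = e100 / e400    # the Theta(n^{-1/2}) rate predicts a ratio near 2
```

This slow decay is exactly why halving the error requires four times as many expensive function evaluations downstream.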
Such bloated sample representations preclude downstream applications with critically expensive function evaluations, like computational cardiology, where a 1000-CPU-hour tissue or organ simulation is required for each sample point (Niederer et al., 2011; Augustin et al., 2016; Strocchi et al., 2020), or expert data annotation, which can be monetarily expensive (Beaugnon et al., 2017) and is sometimes called the human bottleneck of machine learning. To restore the feasibility of such critically expensive tasks, it is common to thin down the initial sequence of points to produce a much smaller coreset. The standard thinning approach, select every t-th sample point (Owen, 2017), is simple to implement but often leads to a substantial increase in error: e.g., standard thinning n points from a fast-mixing Markov chain yields Ω(n^{−1/4}) error when n^{1/2} points are returned. Recently, Dwivedi & Mackey (2021b;a) introduced a more effective alternative, kernel thinning, that provides near-optimal Õ(n^{−1/2}) error when compressing n points down to size n^{1/2}. While practical for moderate sample sizes, the runtime of this algorithm scales quadratically with the input size n, making its execution prohibitive for very large input sizes. Our goal is to significantly improve the runtime of such compression algorithms while providing comparable error guarantees. Problem setup. Given a sequence Sin of n input points summarizing a target distribution P, our aim is to identify a high-quality coreset Sout of size √n in time nearly linear in n. We measure coreset quality via its integration error |Pf − Poutf| ≜ |Pf − (1/|Sout|) ∑_{x∈Sout} f(x)| for functions f in the reproducing kernel Hilbert space (RKHS) H induced by a given kernel k (Berlinet & Thomas-Agnan, 2011). We consider both single function error and kernel maximum mean discrepancy (MMD, Gretton et al.
, 2012), the worst-case integration error over the unit RKHS norm ball: MMD_k(P, Pout) ≜ sup_{‖f‖k ≤ 1} |Pf − Poutf| = ‖Pk − Poutk‖k. Our contributions. We introduce a new simple meta-procedure, COMPRESS++, that significantly speeds up a generic thinning algorithm while simultaneously inheriting the error guarantees of its input up to a factor of four. A direct application of COMPRESS++ to kernel thinning improves its quadratic O(n²) runtime to near-linear O(n log² n) time while maintaining the error guarantees up to a factor of four. Since the Õ(n^{−1/2}) KT MMD guarantees of Dwivedi & Mackey (2021b) match the Ω(n^{−1/2}) minimax lower bounds of Tolstikhin et al. (2017); Phillips & Tai (2020) up to factors of √(log n) and constants depending on d, KT-COMPRESS++ also provides near-optimal MMD compression for a wide range of kernels and distributions P. Moreover, the practical gains from applying COMPRESS++ are substantial: KT thins 65,000 points in 10 dimensions in 20m, while KT-COMPRESS++ needs only 1.5m; KT takes more than a day to thin 250,000 points in 100 dimensions, while KT-COMPRESS++ takes less than an hour (a 32× speed-up). COMPRESS++ can also be directly combined with any thinning algorithm, even those that have suboptimal guarantees but often perform well in practice, like kernel herding (Chen et al., 2010), support points (Mak & Joseph, 2018), Stein points (MCMC) (Chen et al., 2018; 2019), and Stein thinning (Riabiz et al., 2020a), all of which run in Ω(n²) time. As a demonstration, we combine COMPRESS++ with the popular kernel herding algorithm and observe 10-60× speed-ups. In all of our experiments, COMPRESS++ leads to minimal loss in accuracy and, surprisingly, even improves upon herding accuracy for lower-dimensional problems.
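The MMD quality measure above can be sketched with a plug-in Gaussian-kernel estimate. In this toy example (ours; the bandwidth and sizes are arbitrary choices), a standard-thinned coreset of a sample stays close to it in MMD while a sample from a shifted distribution does not:

```python
import numpy as np

def mmd(X, Z, bw=1.0):
    """Plug-in Gaussian-kernel MMD between the empirical distributions of X and Z."""
    k = lambda A, B: np.exp(-np.sum((A[:, None, :] - B[None, :, :]) ** 2, -1)
                            / (2 * bw ** 2))
    v = k(X, X).mean() + k(Z, Z).mean() - 2 * k(X, Z).mean()
    return np.sqrt(max(v, 0.0))              # clip tiny negative rounding error

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 2))                # n points summarizing P = N(0, I)
core = X[::20]                               # standard thinning: every 20th point
shifted = rng.normal(size=(400, 2)) + 3.0    # points from a different distribution

good, bad = mmd(X, core), mmd(X, shifted)    # expect good << bad
```

The pairwise-kernel sums make this naive evaluation itself quadratic in the larger sample size, mirroring the runtime bottleneck of the quadratic-time thinning algorithms discussed above.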
Overview of COMPRESS++. To define COMPRESS++, we first introduce COMPRESS: a simple and elegant recursive strategy that takes in a halving algorithm and provides an intermediate thinned coreset in significantly faster runtime. COMPRESS divides the coreset into four parts, recursively applies COMPRESS to each, combines the outputs, and halves the resulting coreset to obtain the final output. COMPRESS++ with a thinning algorithm ALG is then defined as follows: in stage one, use COMPRESS with ALG instantiated for 2-thinning to obtain an intermediate coreset of size 2^g √n for a suitable parameter g, and in stage two apply ALG directly to further thin it down to √n points. In this manner, ALG is only ever applied to coresets of size 2^g √n or smaller, thereby improving the runtime. The parameter g is chosen based on the runtime and known guarantees of the underlying thinning algorithm, but a default choice of 4 can be shown to be optimal in several theoretical settings, and it provided competitive performance across all our experiments. Overall, given n input points and a thinning algorithm with runtime n^α, COMPRESS++ outputs √n points in time Õ(n^{α/2}) with errors similar to those provided by the input thinning algorithm. Notation. We write SALG for the coreset output by an algorithm ALG and extend our MMD definition to point sequences (S1, S2) with empirical distributions (Q1, Q2) via MMD_k(S1, S2) ≜ MMD_k(Q1, Q2) and MMD_k(P, S1) ≜ MMD_k(P, Q1). We use a ≲ b and a ≳ b to mean a = O(b) and a = Ω(b), and use ≲_k to denote that the underlying constants depend on k. Basic definitions. We use H to denote an inner-product space endowed with the inner product 〈·, ·〉H. We start with the definition of sub-gamma random variables (Boucheron et al., 2013).
Definition 1 (f-Sub-Gamma). A random variable X is said to be sub-gamma on the right with parameters (σ², c), denoted by Γ+(σ², c), if for all 0 < λ < 1/c we have E[exp(λX)] ≤ exp(λ²σ² / (2(1 − cλ))). For any f ∈ H, we say that a random variable G on H is f-sub-gamma on the right, denoted by Γ^f_+(σ², c), if the random variable 〈f, G〉H ∈ Γ+(σ², c). As a reminder, we note that c = 0 yields a sub-Gaussian tail on the right. For X ∈ Γ+(σ², c), Boucheron et al. (2013, Section 2.4) shows that for t ≥ 0 and δ ∈ (0, 1], we have P[X > √(2σ²t) + ct] ≤ e^{−t}, or equivalently P[X ≤ √(2σ² log(1/δ)) + c log(1/δ)] ≥ 1 − δ. (1) A property that we frequently use is that the sub-gamma property is suitably closed under multiplication and addition of sub-gamma random variables. A discussion is deferred to App. A. Definition 2 (Thinning and halving algorithms). Consider an algorithm ALG that takes as input a point sequence Sin of size n and returns a (possibly random) subsequence SALG ⊂ Sin of size nout. We say that ALG is αn-thinning if nout = n/αn whenever n/αn ∈ N. When αn = 2, we say that ALG is a halving algorithm, and when αn = √n, we say ALG is root-thinning. For a thinning algorithm ALG, we associate a kernel discrepancy embedding that measures the approximation quality for the n input points Sin provided by the nout points SALG output by ALG: ψALG(Sin) ≜ ∑_{x∈Sin} k(x, ·) − (n/nout) ∑_{x∈SALG} k(x, ·). (2) Letting PS denote the empirical distribution of S, the reproducing property of k yields that PSin f − PSALG f = (1/n)〈f, ψALG〉k for any f ∈ H, and MMD(Sin, SALG) = (1/n)‖ψALG‖k. (3) Next, we define the notion of a sub-gamma thinning algorithm via the object ψALG (2).
Definition 3 (Sub-Gamma Thinning Algorithm). Given functions f ∈ H, σ : N → R+ and c : N → R+, an αn-thinning algorithm ALG is called f-sub-gamma on the right with parameters (σ², c) if ψALG(Sin) ∈ Γ^f_+(σ²(n), c(n)) conditioned on Sin of size n. When the algorithm is both Γ^f_+ and Γ^{−f}_+ sub-gamma, (1) and (3) immediately imply a high-probability tail bound on the integration error |PSin f − PSALG f|. 2 COMPRESS. We now introduce our first meta-procedure, COMPRESS (which is also a building block for COMPRESS++). COMPRESS takes as input a halving algorithm HALVE and an oversampling factor g, and then, given any input of size n, it outputs a thinned coreset of size 2^g √n. The algorithm is extremely simple to implement: first, divide the input sequence into four subsequences of size n/4 (in any manner the user chooses); second, recursively call COMPRESS on each subsequence to produce four coresets of size 2^{g−1} √n; finally, call HALVE on the concatenation of those coresets to produce the final output of size 2^g √n. We denote this algorithm as COMPRESS(HALVE, g) and describe it formally in Alg. 1. COMPRESS can also be implemented in a streaming fashion, but we defer the discussion to App. G. We remind the reader that COMPRESS with g = 0 is a root-thinning algorithm. Algorithm 1: COMPRESS. Input: halving algorithm HALVE, oversampling factor g, point sequence Sin of size n. If n = 4^g, return Sin; else: partition Sin into four subsets {Si}_{i=1}^4, each of size n/4; for i = 1, 2, 3, 4, set S̃i ← COMPRESS(Si, HALVE, g) (coresets of size 2^g · √(n/4)); set S̃ ← CONCATENATE(S̃1, S̃2, S̃3, S̃4) (a coreset of size 2 · 2^g · √n); set S ← HALVE(k, S̃) (a coreset of size 2^g √n); return S. | This paper gives a meta algorithm for speeding up coreset-constructing algorithms for the distribution compression problem. The benefit of this meta algorithm is that its running time is faster by a square-root factor (e.g.
quadratic to linear) while keeping the error rate roughly the same: only a factor of 4 worse. The method is very simple: split the input into four pieces of size n/4 each, recursively solve each of them, then combine all four answers into a set and take a coreset of this set. The speed-up in running time is immediate (just because of how recursive formulas work) and the error bounds are also rather clear. | SP:f6181b50c4f626e63db70d4adbb0f838f732fc0a |
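The COMPRESS recursion and the two-stage COMPRESS++ wrapper described in the paper text above can be sketched as follows. The halving step here is a uniformly random stand-in for the paper's kernel halving, and the final thinner is a naive truncation, so only the recursion structure and output sizes are faithful (our illustration; it assumes n = 4^j input points):

```python
import numpy as np

rng = np.random.default_rng(7)

def halve(points):
    """Stand-in halving routine: keep a uniformly random half, preserving order.
    (The paper's kernel halving would go here; this only mimics the interface.)"""
    idx = rng.choice(len(points), size=len(points) // 2, replace=False)
    return points[np.sort(idx)]

def compress(points, g):
    """Alg. 1: map n = 4^j input points to a coreset of size 2^g * sqrt(n)."""
    if len(points) == 4 ** g:
        return points
    parts = np.split(points, 4)                       # four subsequences of size n/4
    merged = np.concatenate([compress(p, g) for p in parts])
    return halve(merged)                              # 2 * 2^g * sqrt(n) -> 2^g * sqrt(n)

def compresspp(points, g, thin):
    """Compress++: COMPRESS down to 2^g * sqrt(n) points, then thin to sqrt(n)."""
    return thin(compress(points, g), int(round(np.sqrt(len(points)))))

x = rng.normal(size=4 ** 6)                           # n = 4096, sqrt(n) = 64
out = compress(x, g=2)                                # coreset of size 2^2 * 64 = 256
final = compresspp(x, g=2, thin=lambda s, m: s[:m])   # naive stand-in thinner
```

Because HALVE is only ever invoked on coresets of size O(2^g √n), a quadratic-time halving routine yields the near-linear overall runtime claimed in the paper.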
Distribution Compression in Near-Linear Time | √ n points with Õ ( 1/ √ n ) distributional discrepancy to P. Unfortu- nately , these same algorithms suffer from quadratic or super-quadratic runtime in the sample size n. To address this deficiency , we introduce a simple metaprocedure—Compress++—for speeding up any input thinning algorithm while suffering at most a factor of four in error . When combined with the quadratictime kernel halving and kernel thinning algorithms of Dwivedi and Mackey ( 2021 ) , Compress++ delivers √ n points with O ( √ log n/n ) integration error and better-than-Monte-Carlo maximum mean discrepancy in O ( n log2 n ) time and O ( √ n log2 ( n ) ) space . Moreover , Compress++ enjoys the same near-linear runtime given any quadratic-time input and reduces the runtime of super-quadratic algorithms by a square-root factor . In our benchmarks with high-dimensional Monte Carlo samples and long-running Markov chains targeting challenging differential equation posteriors , Compress++ matches or nearly matches the accuracy of its input algorithm in orders of magnitude less time . 1 INTRODUCTION . Distribution compression—constructing a concise summary of a probability distribution—is at the heart of many learning and inference tasks . For example , in Monte Carlo integration and Bayesian inference , n representative points are sampled i.i.d . or from a Markov chain to approximate expectations and quantify uncertainty under an intractable ( posterior ) distribution ( Robert & Casella , 1999 ) . However , these standard sampling strategies represent a bottleneck in computationally-demanding settings due to their slow root-n Monte Carlo error rate . For instance , the Monte Carlo estimate Pnf , 1n ∑n i=1 f ( xi ) of an unknown expectation Pf , EX∼P [ f ( X ) ] based on n i.i.d . points has Θ ( n− 1 2 ) integration error |Pnf − Pf | , requiring n = 10000 points for 1 % relative error and n = 106 points for 0.1 % error . 
Such bloated sample representations preclude downstream applications with critically expensive function evaluations like computational cardiology , where a 1000-CPU-hour tissue or organ simulation is required for each sample point ( Niederer et al. , 2011 ; Augustin et al. , 2016 ; Strocchi et al. , 2020 ) , or expert data annotation which can be monetarily expensive Beaugnon et al . ( 2017 ) , and is sometimes coined as the human bottleneck of machine learning . To restore the feasibility of such critically expensive tasks , it is common to thin down the initial sequence of points to a produce a much smaller coreset . The standard thinning approach , select every t-th sample point ( Owen , 2017 ) , is simple to implement but often leads to an substantial increase in error : e.g. , standard thinning n points from a fast-mixing Markov chain yields Ω ( n− 1 4 ) error when n 1 2 points are returned . Recently , Dwivedi & Mackey ( 2021b ; a ) introduced a more effective alternative , kernel thinning , that provides near optimal Õ ( n− 12 ) error when compressing n points down to size n 1 2 . While practical for moderate sample sizes , the runtime of this algorithm scales quadratically with the input size n , making its execution prohibitive for very large input sizes . Our goal is to significantly improve the runtime of such compression algorithms while providing comparable error guarantees . Problem setup Given a sequence Sin of n input points summarizing a target distribution P , our aim is to identify a high quality coreset Sout of size √ n in time nearly linear in n. We measure coreset quality via its integration error |Pf −Poutf | , |Pf − 1|Sout| ∑ x∈Sout f ( x ) | for functions f in the reproducing kernel Hilbert space ( RKHS ) H induced by a given kernel k ( Berlinet & Thomas-Agnan , 2011 ) . We consider both single function error and kernel maximum mean discrepancy ( MMD , Gretton et al. 
, 2012 ) , the worst-case integration error over the unit RKHS norm ball : MMDk ( P , Pout ) , sup‖f‖k≤1|Pf − Poutf | = ‖Pk− Poutk‖k . Our contributions We introduce a new simple meta procedure—COMPRESS++—that significantly speeds up a generic thinning algorithm while simultaneously inheriting the error guarantees of its input up to a factor of four . A direct application of COMPRESS++ to kernel thinning improves its quadratic O ( n2 ) runtime to near linear O ( n log2 n ) time while maintaining the error guarantees up to a factor four . Since the Õ ( n− 12 ) KT MMD guarantees of Dwivedi & Mackey ( 2021b ) match the Ω ( n− 1 2 ) minimax lower bounds of Tolstikhin et al . ( 2017 ) ; Phillips & Tai ( 2020 ) up to factors of √ log ( n ) and constants depending on d , KT-COMPRESS++ also provides near-optimal MMD compression for a wide range of kernels and distributions P. Moreover , the practical gains from applying COMPRESS++ are substantial : KT thins 65 , 000 points in 10 dimensions in 20m , while KTCOMPRESS++ needs only 1.5m ; KT takes more than a day to thin 250 , 000 points in 100 dimensions , while KT-COMPRESS++ takes less than an hour ( a 32× speed-up ) . COMPRESS++ can also be directly combined with any thinning algorithm , even those that have suboptimal guarantees but often perform well in practice , like kernel herding ( Chen et al. , 2010 ) , support points ( Mak & Joseph , 2018 ) , Stein points ( MCMC ) ( Chen et al. , 2018 ; 2019 ) , and Stein thinning ( Riabiz et al. , 2020a ) , all of which run in Ω ( n2 ) time . As a demonstration , we combine COMPRESS++ with the popular kernel herding algorithm and observe 10-60× speed-ups . In all of our experiments , COMPRESS++ leads to minimal loss in accuracy and , surprisingly , even improves upon herding accuracy for lower-dimensional problems . 
Overview of COMPRESS++ To define COMPRESS++ , we first introduce COMPRESS : A simple and elegant recursive strategy takes in a halving algorithm and provides an intermediate thinned coreset in significantly faster runtime . COMPRESS divides the coreset into four parts , recursively applies COMPRESS to each , combines the output of each and halves the resulting coreset to get the output . Then COMPRESS++ with a thinning algorithm ALG is defined as follows : In stage one , use COMPRESS with ALG instantiated for 2-thinning , to obtain an intermediate coreset of size 2g √ n for a suitable parameter g , and then in stage two apply ALG directly to further thin it down to √ n points . In this manner , ALG is ever applied to coresets of size 2g √ n or smaller thereby improving the runtime . The parameter g is chosen based on the run-time and known guarantees of the underlying thinning algorithm , but a default choice of 4 can be shown to be optimal in several theoretical settings , and it provided competitive performance across all our experiments . Overall , with n input points and a thinning algorithm with runtime nα , COMPRESS++ uses a outputs √ n points in time Õ ( nα/2 ) with errors similar to those provided by the input thinning algorithm . Notation We write SALG for the coreset outputted by an algorithm ALG and extend our MMD definition for point sequences ( S1 , S2 ) with empirical distributions ( Q1 , Q2 ) via MMDk ( S1 , S2 ) , MMDk ( Q1 , Q2 ) and MMDk ( P , S1 ) , MMDk ( P , Q1 ) . We use a - b and a % b to mean a = O ( b ) and a = Ω ( b ) and use -k to denote that the underlying constants depend on k. Basic definitions We use H to denote an inner-product space endowed with the inner-product 〈· , ·〉H . We start with the definition of sub-gamma random variables ( Boucheron et al. , 2013 ) . 
Definition 1 (f-Sub-Gamma) A random variable X is said to be sub-gamma on the right with parameters (σ^2, c), denoted by Γ_+(σ^2, c), if for all 0 < λ < 1/c we have E[exp(λX)] ≤ exp(λ^2 σ^2 / (2(1 − cλ))). For any f ∈ H, we say that a random variable G on H is f-sub-gamma on the right, denoted by Γ_+^f(σ^2, c), if the random variable 〈f, G〉_H ∈ Γ_+(σ^2, c). As a reminder, we note that c = 0 yields sub-Gaussian tails on the right. For X ∈ Γ_+(σ^2, c), Boucheron et al. (2013, Section 2.4) show that for t ≥ 0 and δ ∈ (0, 1], we have P[X > √(2σ^2 t) + ct] ≤ e^{−t}, or equivalently P[X ≤ √(2σ^2 log(1/δ)) + c log(1/δ)] ≥ 1 − δ. (1) A property that we use frequently is that the sub-gamma property is suitably closed under multiplication and addition of sub-gamma random variables; a discussion is deferred to App. A. Definition 2 (Thinning and halving algorithms) Consider an algorithm ALG that takes as input a point sequence S_in of size n and returns a (possibly random) subsequence S_ALG ⊂ S_in of size n_out. We say that ALG is α_n-thinning if n_out = n/α_n whenever n/α_n ∈ N. When α_n = 2, we say that ALG is a halving algorithm, and when α_n = √n, we say ALG is root-thinning. For a thinning algorithm ALG, we associate a kernel discrepancy embedding that measures the approximation quality for the n input points S_in provided by the n_out points S_ALG output by ALG: ψ_ALG(S_in) ≜ Σ_{x∈S_in} k(x, ·) − (n/n_out) Σ_{x∈S_ALG} k(x, ·). (2) Letting P_S denote the empirical distribution of S, the reproducing property of k yields that P_{S_in}f − P_{S_ALG}f = (1/n)〈f, ψ_ALG〉_k for any f ∈ H, and MMD(S_in, S_ALG) = (1/n)‖ψ_ALG‖_k. (3) Next, we define the notion of a sub-gamma thinning algorithm via the object ψ_ALG (2).
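Identity (3) ties the RKHS norm of the discrepancy embedding ψ_ALG to the MMD between the input and output empirical distributions, since both expand into pairwise kernel sums. A small numerical check of this identity (the Gaussian kernel and the every-other-point 2-thinning are illustrative assumptions):

```python
import numpy as np

def k(x, y, bw=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * bw ** 2))

rng = np.random.default_rng(1)
S_in = rng.normal(size=(8, 2))
S_alg = S_in[::2]                  # a toy 2-thinning: keep every other point
n, n_out = len(S_in), len(S_alg)

gram = lambda A, B: np.array([[k(a, b) for b in B] for a in A])

# ||psi_ALG||_k^2 expanded from equation (2) via pairwise kernel sums.
psi_sq = (gram(S_in, S_in).sum()
          - 2 * (n / n_out) * gram(S_in, S_alg).sum()
          + (n / n_out) ** 2 * gram(S_alg, S_alg).sum())

# Squared MMD between the two empirical distributions.
mmd_sq = (gram(S_in, S_in).mean()
          - 2 * gram(S_in, S_alg).mean()
          + gram(S_alg, S_alg).mean())

# Identity (3): MMD(S_in, S_ALG) = (1/n) * ||psi_ALG||_k
assert np.isclose(mmd_sq, psi_sq / n ** 2)
```

The identity holds exactly (up to floating point) for any subsequence, which is what lets tail bounds on 〈f, ψ_ALG〉_k translate into integration-error and MMD guarantees.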
Definition 3 (Sub-Gamma Thinning Algorithm) Given the functions f ∈ H, σ : N → R_+ and c : N → R_+, an α_n-thinning algorithm ALG is called f-sub-gamma on the right with parameters (σ^2, c) if ψ_ALG(S_in) ∈ Γ_+^f(σ^2(n), c(n)) conditioned on S_in of size n. When the algorithm is both Γ_+^f- and Γ_+^{−f}-sub-gamma, (1) and (3) immediately imply a high-probability tail bound on the integration error |P_{S_in}f − P_{S_ALG}f|. 2 COMPRESS . We now introduce our first meta-procedure, COMPRESS (which is also a building block for COMPRESS++). COMPRESS takes as input a halving algorithm HALVE and an oversampling factor g, and then, given any input of size n, it outputs a thinned coreset of size 2^g √n. The algorithm is extremely simple to implement: first, divide the input sequence into four subsequences of size n/4 (in any manner the user chooses); second, recursively call COMPRESS on each subsequence to produce four coresets of size 2^{g−1} √n; finally, call HALVE on the concatenation of those coresets to produce the final output of size 2^g √n. We denote this algorithm by COMPRESS(HALVE, g) and describe it formally in Alg. 1. COMPRESS can also be implemented in a streaming fashion, but we defer the discussion to App. G. Recall that COMPRESS with g = 0 is a root-thinning algorithm.
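The recursion described above can be sketched directly. This is a hedged toy implementation: random_halve is a placeholder for a real halving routine such as kernel halving, and the input size is assumed to be a power of four:

```python
import math
import random

def random_halve(seq):
    """Placeholder halving algorithm: keep a uniformly random half.
    A real instantiation would use a kernel-halving routine."""
    return random.sample(seq, len(seq) // 2)

def compress(seq, halve, g):
    """COMPRESS(HALVE, g): thin n points (n a power of 4) down to 2^g * sqrt(n)."""
    n = len(seq)
    if n == 4 ** g:                                               # base case
        return list(seq)
    parts = [seq[i * n // 4:(i + 1) * n // 4] for i in range(4)]  # four subsequences of size n/4
    tilde = [x for p in parts for x in compress(p, halve, g)]     # size 2 * 2^g * sqrt(n)
    return halve(tilde)                                           # size 2^g * sqrt(n)

random.seed(0)
points = list(range(4 ** 5))               # n = 1024
out = compress(points, random_halve, g=1)  # expected size: 2^1 * sqrt(1024) = 64
assert len(out) == 2 * int(math.isqrt(len(points)))
```

Each recursive call returns 2^g √(n/4) = 2^{g−1} √n points, so the concatenation has 2^{g+1} √n points and one HALVE call restores size 2^g √n, matching the sizes annotated in Alg. 1.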
Algorithm 1: COMPRESS
Input: halving algorithm HALVE, oversampling factor g, point sequence S_in of size n
if n = 4^g then
    return S_in
else
    Partition S_in into four subsequences {S_i}_{i=1}^{4}, each of size n/4
    for i = 1, 2, 3, 4 do
        S̃_i ← COMPRESS(S_i, HALVE, g)   // coresets of size 2^g · √(n/4)
    end
    S̃ ← CONCATENATE(S̃_1, S̃_2, S̃_3, S̃_4)   // coreset of size 2 · 2^g · √n
    S ← HALVE(k, S̃)   // coreset of size 2^g · √n
    return S
end | The topic of the study is distribution-compression/thinning algorithms for Monte-Carlo estimation of functions in an RKHS, i.e., given a set of $n$ points such that the uniform distribution on these n points approximates an underlying distribution (with respect to integration over functions in an RKHS) with a certain error, output a set of $m$ points $(m \ll n)$ that has a similar error. The paper proposes three reductions/meta-algorithms to speed up existing distribution-compression algorithms while maintaining the error guarantee of the original algorithms. The main result is that the proposed reduction improves the runtime of existing algorithms (with quadratic runtime or more) by a quadratic factor without increasing the corresponding error by more than a polylog factor. | SP:f6181b50c4f626e63db70d4adbb0f838f732fc0a |
Distribution Compression in Near-Linear Time | √n points with Õ(1/√n) distributional discrepancy to P. Unfortunately, these same algorithms suffer from quadratic or super-quadratic runtime in the sample size n. To address this deficiency, we introduce a simple meta-procedure—Compress++—for speeding up any input thinning algorithm while suffering at most a factor of four in error. When combined with the quadratic-time kernel halving and kernel thinning algorithms of Dwivedi and Mackey (2021), Compress++ delivers √n points with O(√(log n / n)) integration error and better-than-Monte-Carlo maximum mean discrepancy in O(n log^2 n) time and O(√n log^2 n) space. Moreover, Compress++ enjoys the same near-linear runtime given any quadratic-time input and reduces the runtime of super-quadratic algorithms by a square-root factor. In our benchmarks with high-dimensional Monte Carlo samples and long-running Markov chains targeting challenging differential equation posteriors, Compress++ matches or nearly matches the accuracy of its input algorithm in orders of magnitude less time. 1 INTRODUCTION . Distribution compression—constructing a concise summary of a probability distribution—is at the heart of many learning and inference tasks. For example, in Monte Carlo integration and Bayesian inference, n representative points are sampled i.i.d. or from a Markov chain to approximate expectations and quantify uncertainty under an intractable (posterior) distribution (Robert & Casella, 1999). However, these standard sampling strategies represent a bottleneck in computationally-demanding settings due to their slow root-n Monte Carlo error rate. For instance, the Monte Carlo estimate P_n f ≜ (1/n) Σ_{i=1}^{n} f(x_i) of an unknown expectation Pf ≜ E_{X∼P}[f(X)] based on n i.i.d. points has Θ(n^{-1/2}) integration error |P_n f − Pf|, requiring n = 10^4 points for 1% relative error and n = 10^6 points for 0.1% error.
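The root-n error rate quoted above is easy to observe empirically. A small illustration (the integrand f(x) = x and the uniform target are arbitrary choices for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(f, sample, n):
    """Monte Carlo estimate P_n f = (1/n) * sum_i f(x_i)."""
    return f(sample(n)).mean()

# Pf = E[X] = 0.5 for X ~ Uniform(0, 1); the error decays at the root-n rate,
# so roughly 10^4 points are needed for ~1% relative error.
f = lambda x: x
errors = {n: abs(mc_estimate(f, lambda m: rng.uniform(0, 1, m), n) - 0.5)
          for n in (10 ** 2, 10 ** 4, 10 ** 6)}
```

Quadrupling the error budget's precision requires sixteen times as many points, which is exactly the bloat that thinning aims to remove.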
Such bloated sample representations preclude downstream applications with critically expensive function evaluations like computational cardiology, where a 1000-CPU-hour tissue or organ simulation is required for each sample point (Niederer et al., 2011; Augustin et al., 2016; Strocchi et al., 2020), or expert data annotation, which can be monetarily expensive (Beaugnon et al., 2017) and is sometimes called the human bottleneck of machine learning. To restore the feasibility of such critically expensive tasks, it is common to thin down the initial sequence of points to produce a much smaller coreset. The standard thinning approach, selecting every t-th sample point (Owen, 2017), is simple to implement but often leads to a substantial increase in error: e.g., standard thinning of n points from a fast-mixing Markov chain yields Ω(n^{-1/4}) error when n^{1/2} points are returned. Recently, Dwivedi & Mackey (2021b;a) introduced a more effective alternative, kernel thinning, that provides near-optimal Õ(n^{-1/2}) error when compressing n points down to size n^{1/2}. While practical for moderate sample sizes, the runtime of this algorithm scales quadratically with the input size n, making its execution prohibitive for very large input sizes. Our goal is to significantly improve the runtime of such compression algorithms while providing comparable error guarantees. Problem setup Given a sequence S_in of n input points summarizing a target distribution P, our aim is to identify a high-quality coreset S_out of size √n in time nearly linear in n. We measure coreset quality via its integration error |Pf − P_out f| ≜ |Pf − (1/|S_out|) Σ_{x∈S_out} f(x)| for functions f in the reproducing kernel Hilbert space (RKHS) H induced by a given kernel k (Berlinet & Thomas-Agnan, 2011). We consider both single-function error and kernel maximum mean discrepancy (MMD, Gretton et al.
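The standard thinning baseline mentioned above is a one-liner; a sketch (the indexing convention—keeping the t-th, 2t-th, … points—is an assumption of the sketch):

```python
def standard_thin(points, t):
    """Standard thinning: keep every t-th point of the input sequence."""
    return points[t - 1::t]

seq = list(range(100))
coreset = standard_thin(seq, 10)  # thin n = 100 points down to n/t = 10
assert len(coreset) == 10
```

With t = √n this returns √n points, but, as noted above, it simply discards information and its error degrades to the Monte Carlo rate of the smaller sample.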
, 2012), the worst-case integration error over the unit RKHS norm ball: MMD_k(P, P_out) ≜ sup_{‖f‖_k ≤ 1} |Pf − P_out f| = ‖Pk − P_out k‖_k. Our contributions We introduce a new simple meta-procedure—COMPRESS++—that significantly speeds up a generic thinning algorithm while simultaneously inheriting the error guarantees of its input up to a factor of four. A direct application of COMPRESS++ to kernel thinning improves its quadratic O(n^2) runtime to near-linear O(n log^2 n) time while maintaining the error guarantees up to a factor of four. Since the Õ(n^{-1/2}) KT MMD guarantees of Dwivedi & Mackey (2021b) match the Ω(n^{-1/2}) minimax lower bounds of Tolstikhin et al. (2017); Phillips & Tai (2020) up to factors of √log(n) and constants depending on d, KT-COMPRESS++ also provides near-optimal MMD compression for a wide range of kernels and distributions P. Moreover, the practical gains from applying COMPRESS++ are substantial: KT thins 65,000 points in 10 dimensions in 20 minutes, while KT-COMPRESS++ needs only 1.5 minutes; KT takes more than a day to thin 250,000 points in 100 dimensions, while KT-COMPRESS++ takes less than an hour (a 32× speed-up). COMPRESS++ can also be directly combined with any thinning algorithm, even those that have suboptimal guarantees but often perform well in practice, like kernel herding (Chen et al., 2010), support points (Mak & Joseph, 2018), Stein points (MCMC) (Chen et al., 2018; 2019), and Stein thinning (Riabiz et al., 2020a), all of which run in Ω(n^2) time. As a demonstration, we combine COMPRESS++ with the popular kernel herding algorithm and observe 10–60× speed-ups. In all of our experiments, COMPRESS++ leads to minimal loss in accuracy and, surprisingly, even improves upon herding accuracy for lower-dimensional problems.
Overview of COMPRESS++ To define COMPRESS++, we first introduce COMPRESS: a simple and elegant recursive strategy that takes in a halving algorithm and provides an intermediate thinned coreset in significantly faster runtime. COMPRESS divides the coreset into four parts, recursively applies COMPRESS to each, combines the outputs, and halves the resulting coreset to obtain the output. COMPRESS++ with a thinning algorithm ALG is then defined as follows: in stage one, use COMPRESS with ALG instantiated for 2-thinning to obtain an intermediate coreset of size 2^g √n for a suitable parameter g, and in stage two apply ALG directly to thin it further down to √n points. In this manner, ALG is only ever applied to coresets of size 2^g √n or smaller, thereby improving the runtime. The parameter g is chosen based on the runtime and known guarantees of the underlying thinning algorithm, but a default choice of 4 can be shown to be optimal in several theoretical settings and provided competitive performance across all our experiments. Overall, given n input points and a thinning algorithm with runtime O(n^α), COMPRESS++ outputs √n points in Õ(n^{α/2}) time with errors similar to those provided by the input thinning algorithm. Notation We write S_ALG for the coreset output by an algorithm ALG and extend our MMD definition to point sequences (S_1, S_2) with empirical distributions (Q_1, Q_2) via MMD_k(S_1, S_2) ≜ MMD_k(Q_1, Q_2) and MMD_k(P, S_1) ≜ MMD_k(P, Q_1). We use a ≲ b and a ≳ b to mean a = O(b) and a = Ω(b), and use ≲_k to denote that the underlying constants depend on k. Basic definitions We use H to denote an inner-product space endowed with the inner product 〈·, ·〉_H. We start with the definition of sub-gamma random variables (Boucheron et al., 2013).
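One way to see the claimed Õ(n^{α/2}) runtime is via the recursion COMPRESS satisfies. The following is a sketch under the assumption that the halving step costs C m^α on m points with α ≥ 2 (constants and log factors elided):

```latex
% COMPRESS on n points makes four recursive calls on n/4 points plus one
% HALVE call on 2^{g+1}\sqrt{n} points:
t(n) = 4\, t(n/4) + C \left(2^{g+1}\sqrt{n}\right)^{\alpha},
      \qquad t(4^g) = O(1).
% Level j of the recursion has 4^j subproblems of size n/4^j, contributing
4^j \cdot C \left(2^{g+1}\sqrt{n/4^j}\right)^{\alpha}
  = C\, 2^{(g+1)\alpha}\, n^{\alpha/2}\, 4^{\,j(1-\alpha/2)} .
% For \alpha > 2 the top level dominates, so t(n) = O(n^{\alpha/2});
% for \alpha = 2 all O(\log n) levels contribute equally, giving t(n) = O(n \log n).
% Stage two applies ALG once to 2^g\sqrt{n} points at cost
% O\big((2^g\sqrt{n})^{\alpha}\big) = O(n^{\alpha/2}),
% matching the stated \widetilde{O}(n^{\alpha/2}) total.
```

In particular, a quadratic-time input (α = 2) yields the near-linear O(n log n)-type runtime stated for KT-COMPRESS++.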
Definition 1 (f-Sub-Gamma) A random variable X is said to be sub-gamma on the right with parameters (σ^2, c), denoted by Γ_+(σ^2, c), if for all 0 < λ < 1/c we have E[exp(λX)] ≤ exp(λ^2 σ^2 / (2(1 − cλ))). For any f ∈ H, we say that a random variable G on H is f-sub-gamma on the right, denoted by Γ_+^f(σ^2, c), if the random variable 〈f, G〉_H ∈ Γ_+(σ^2, c). As a reminder, we note that c = 0 yields sub-Gaussian tails on the right. For X ∈ Γ_+(σ^2, c), Boucheron et al. (2013, Section 2.4) show that for t ≥ 0 and δ ∈ (0, 1], we have P[X > √(2σ^2 t) + ct] ≤ e^{−t}, or equivalently P[X ≤ √(2σ^2 log(1/δ)) + c log(1/δ)] ≥ 1 − δ. (1) A property that we use frequently is that the sub-gamma property is suitably closed under multiplication and addition of sub-gamma random variables; a discussion is deferred to App. A. Definition 2 (Thinning and halving algorithms) Consider an algorithm ALG that takes as input a point sequence S_in of size n and returns a (possibly random) subsequence S_ALG ⊂ S_in of size n_out. We say that ALG is α_n-thinning if n_out = n/α_n whenever n/α_n ∈ N. When α_n = 2, we say that ALG is a halving algorithm, and when α_n = √n, we say ALG is root-thinning. For a thinning algorithm ALG, we associate a kernel discrepancy embedding that measures the approximation quality for the n input points S_in provided by the n_out points S_ALG output by ALG: ψ_ALG(S_in) ≜ Σ_{x∈S_in} k(x, ·) − (n/n_out) Σ_{x∈S_ALG} k(x, ·). (2) Letting P_S denote the empirical distribution of S, the reproducing property of k yields that P_{S_in}f − P_{S_ALG}f = (1/n)〈f, ψ_ALG〉_k for any f ∈ H, and MMD(S_in, S_ALG) = (1/n)‖ψ_ALG‖_k. (3) Next, we define the notion of a sub-gamma thinning algorithm via the object ψ_ALG (2).
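The tail bound (1) follows from the moment-generating-function bound in Definition 1 by a standard Chernoff argument; a sketch of the computation:

```latex
% Chernoff: for X \in \Gamma_{+}(\sigma^2, c) and any 0 < \lambda < 1/c,
\Pr[X > u] \le e^{-\lambda u}\, \mathbb{E}[e^{\lambda X}]
           \le \exp\!\Big(-\lambda u + \tfrac{\lambda^2 \sigma^2}{2(1 - c\lambda)}\Big).
% Take u = \sqrt{2\sigma^2 t} + ct and set s = \sqrt{2t}/\sigma,
% \lambda = s/(1+cs) < 1/c, so that 1 - c\lambda = 1/(1+cs).  Then
-\lambda u + \frac{\lambda^2 \sigma^2}{2(1 - c\lambda)}
  = -\frac{t\,(2+cs)}{1+cs} + \frac{t}{1+cs} = -t,
% recovering \Pr[X > \sqrt{2\sigma^2 t} + ct] \le e^{-t};
% substituting t = \log(1/\delta) yields (1).
```

Applied to 〈f, ψ_ALG〉_k with the identity (3), this converts a sub-gamma certificate for a thinning algorithm directly into a high-probability integration-error bound.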
Definition 3 (Sub-Gamma Thinning Algorithm) Given the functions f ∈ H, σ : N → R_+ and c : N → R_+, an α_n-thinning algorithm ALG is called f-sub-gamma on the right with parameters (σ^2, c) if ψ_ALG(S_in) ∈ Γ_+^f(σ^2(n), c(n)) conditioned on S_in of size n. When the algorithm is both Γ_+^f- and Γ_+^{−f}-sub-gamma, (1) and (3) immediately imply a high-probability tail bound on the integration error |P_{S_in}f − P_{S_ALG}f|. 2 COMPRESS . We now introduce our first meta-procedure, COMPRESS (which is also a building block for COMPRESS++). COMPRESS takes as input a halving algorithm HALVE and an oversampling factor g, and then, given any input of size n, it outputs a thinned coreset of size 2^g √n. The algorithm is extremely simple to implement: first, divide the input sequence into four subsequences of size n/4 (in any manner the user chooses); second, recursively call COMPRESS on each subsequence to produce four coresets of size 2^{g−1} √n; finally, call HALVE on the concatenation of those coresets to produce the final output of size 2^g √n. We denote this algorithm by COMPRESS(HALVE, g) and describe it formally in Alg. 1. COMPRESS can also be implemented in a streaming fashion, but we defer the discussion to App. G. Recall that COMPRESS with g = 0 is a root-thinning algorithm.
Algorithm 1: COMPRESS
Input: halving algorithm HALVE, oversampling factor g, point sequence S_in of size n
if n = 4^g then
    return S_in
else
    Partition S_in into four subsequences {S_i}_{i=1}^{4}, each of size n/4
    for i = 1, 2, 3, 4 do
        S̃_i ← COMPRESS(S_i, HALVE, g)   // coresets of size 2^g · √(n/4)
    end
    S̃ ← CONCATENATE(S̃_1, S̃_2, S̃_3, S̃_4)   // coreset of size 2 · 2^g · √n
    S ← HALVE(k, S̃)   // coreset of size 2^g · √n
    return S
end | The paper introduces meta-algorithms "Compress" and "Compress++" that take an existing thinning algorithm as a subroutine and improve on its runtime while incurring marginally more error.
The algorithm "Compress" runs in a recursive fashion: it divides the input set into four parts, runs Compress on each part independently, combines the resulting sets, and halves the combined set using the "HALVE" algorithm. As an example, the authors demonstrate that one can use KT-SPLIT as the HALVE algorithm and get faster runtimes for compression, albeit suffering extra error (up to log factors). Compress ensures that KT-SPLIT runs on sets of much smaller size. They build on Compress to obtain better error rates while sacrificing on runtimes (up to log factors). For this, they use the thinning algorithm on the much smaller set obtained after running Compress. The runtimes are quite easy to derive based on the recursion of the algorithms. They use the sub-gamma property to prove the error guarantees of the algorithms. They demonstrate the faster runtimes of their algorithms using high-dimensional Monte Carlo samples. | SP:f6181b50c4f626e63db70d4adbb0f838f732fc0a |
Geometric Random Walk Graph Neural Networks via Implicit Layers | 1 INTRODUCTION . Recent years have witnessed an enormous growth in the amount of data represented as graphs. Indeed, graphs emerge naturally in several domains, including social networks, bioinformatics, and neuroscience, to name just a few. Besides the increase in the amount of graph-structured data, there is also a growing interest in applying machine learning techniques to data modeled as graphs. Among others, the graph classification and graph regression tasks have attracted a great deal of attention in the past years. These tasks have served as a fundamental building block within applications that deal with problems ranging from drug design (Kearnes et al., 2016) to session-based recommendation (Wu et al., 2019). Graph Neural Networks (GNNs) provide a powerful tool for machine learning on graphs. So far, the field of GNNs has been largely dominated by message passing architectures. Indeed, most of them share the same basic idea and can be reformulated into a single common framework, the so-called message passing neural networks (MPNNs) (Gilmer et al., 2017). These models employ a message passing procedure to aggregate local information of vertices. For graph-related tasks, MPNNs usually apply some permutation-invariant readout function to the vertex representations to produce a representation for the entire graph. The family of MPNNs has been heavily studied in the past few years, and there are now very expressive models available which have achieved state-of-the-art results in several tasks (Xu et al., 2019; Morris et al., 2019). Although the family of MPNNs is perhaps the most successful story in the field of graph representation learning, there exist models that follow different design paradigms and do not fall into this family.
An example of such a model is the recently proposed Random Walk Graph Neural Network (RWNN) (Nikolentzos & Vazirgiannis, 2020). This model contains a number of trainable “hidden graphs”, and it compares the input graphs against these graphs using a random walk kernel which counts the number of common walks in two graphs. The emerging kernel values are fed into a fully-connected neural network which acts as the classifier or regressor. The employed random walk kernel is differentiable, and thus RWNN is end-to-end trainable. However, this kernel considers only random walks of a small length. Such local patterns may fail to capture the overall large-scale shape of the graphs, while several interesting properties of graphs depend on the graph’s global structure. Furthermore, increasing the length of the walks has a direct impact on the model’s computational complexity. In this paper, we propose a novel approach to tackle these challenges. Specifically, we propose a new architecture, called Geometric Random Walk Graph Neural Network (GRWNN), that generalizes the RWNN model such that it can count common walks of infinite length in two graphs. The model contains a number of trainable “hidden graphs”, and it compares the input graphs against these graphs using the geometric random walk kernel. Thus, instead of walks of small length, the proposed model considers walks of infinite length. To compute the kernel, GRWNN uses a fixed-point iteration approach. The kernel values are then passed on to a fully-connected neural network which produces the output. The proposed neural network is end-to-end trainable since we can directly differentiate through the fixed-point equations via implicit differentiation, which leads to a very efficient implementation in terms of memory requirements. Hence, we can still update the “hidden graphs” during training with backpropagation.
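The geometric random walk kernel and its fixed-point computation can be sketched as follows. This is a hedged illustration for unlabeled graphs with uniform start/stop weights, materializing the direct-product adjacency as a Kronecker product; a full implementation would typically avoid forming A× explicitly (e.g., via vec/Kronecker identities) and handle vertex attributes:

```python
import numpy as np

def geometric_rw_kernel(A1, A2, lam, tol=1e-10, max_iter=1000):
    """Geometric random walk kernel k(G, G') = 1^T (I - lam * A_x)^{-1} 1,
    computed by the fixed-point iteration x <- 1 + lam * A_x @ x.
    A_x = A1 (x) A2 is the direct-product adjacency; the iteration
    converges when lam times the spectral radius of A_x is below 1."""
    Ax = np.kron(A1, A2)
    ones = np.ones(Ax.shape[0])
    x_new = x = ones.copy()
    for _ in range(max_iter):
        x_new = ones + lam * Ax @ x
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return ones @ x_new

# Two small unlabeled graphs: a triangle and a path on 3 vertices.
A_tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
lam = 0.1
k_fp = geometric_rw_kernel(A_tri, A_path, lam)

# Sanity check against the direct linear solve.
Ax = np.kron(A_tri, A_path)
k_direct = np.ones(9) @ np.linalg.solve(np.eye(9) - lam * Ax, np.ones(9))
assert np.isclose(k_fp, k_direct)
```

The geometric series Σ_ℓ λ^ℓ A×^ℓ makes "walks of infinite length" concrete: each length-ℓ common walk is counted with weight λ^ℓ, and the fixed point sums the whole series without ever expanding it.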
We compare the performance of the proposed model to state-of-the-art graph kernels and recently-proposed neural architectures on several graph classification datasets. Results show that in most cases the GRWNN model matches or outperforms competing methods. Our main contributions are summarized as follows:
• We propose a novel neural network model, the Geometric Random Walk Graph Neural Network, which employs the geometric random walk kernel to produce graph representations. The model counts common walks of infinite length between the input graph and a set of randomly initialized “hidden graphs”.
• We employ an efficient scheme to compute the random walk graph kernel using fixed-point iterations. We show that we can directly differentiate through the fixed-point equations via implicit differentiation, which leads to an efficient implementation.
• We evaluate the model’s performance on several standard graph classification datasets and show that it achieves results similar to, and in some cases superior to, those obtained by recent GNNs and graph kernels.
2 RELATED WORK . Graph kernels have a long history in the field of graph representation learning (Kriege et al., 2020). A graph kernel is a kernel function between graphs, i.e., a symmetric positive semidefinite function defined on the space of graphs. These methods generate implicitly (or explicitly) graph representations and enable the application of kernel methods such as the SVM classifier to graphs. Most graph kernels are instances of the R-convolution framework (Haussler, 1999), and they compare substructures extracted from the graphs to each other. Such substructures include shortest paths (Borgwardt & Kriegel, 2005), random walks (Gärtner et al., 2003; Kashima et al., 2003), small subgraphs (Shervashidze et al., 2009), and others. Our work is related to random walk kernels, i.e., kernels that compare random walks to each other.
The first such kernels were proposed by Gärtner et al. (2003) and by Kashima et al. (2003). The work of Kashima et al. was later refined by Mahé et al. (2004). Vishwanathan et al. (2010) and Kang et al. (2012) proposed new algorithms for efficiently computing random walk kernels. These algorithms improve the time complexity of kernel computation. Sugiyama and Borgwardt studied the problem of halting (i.e., longer walks are down-weighted so much that the kernel value is completely dominated by the comparison of walks of length 1) that occurs in random walk kernels, and showed that its extent depends on properties of the graphs being compared (Sugiyama & Borgwardt, 2015). Zhang et al. defined a different kernel which does not compare random walks to each other but instead compares the return probabilities of random walks (Zhang et al., 2018b). Finally, Kalofolias et al. proposed a variant of the random walk kernel where structurally dissimilar vertices are not just down-weighted but are not allowed to be visited during the simultaneous walk (Kalofolias et al., 2021). Although the first GNNs were proposed several years ago (Sperduti & Starita, 1997; Scarselli et al., 2009; Micheli, 2009), until recently these models had attracted limited attention. In recent years, with the rise of deep learning, a lot of models started to emerge (Bruna et al., 2014; Li et al., 2015; Duvenaud et al., 2015; Atwood & Towsley, 2016; Defferrard et al., 2016; Lei et al., 2017). Most models update the representation of each vertex by aggregating the feature vectors of its neighbors. This update procedure can be viewed as a form of message passing algorithm, and thus these models are known as message passing neural networks (MPNNs) (Gilmer et al., 2017).
To compute a feature vector for the entire graph, MPNNs apply some permutation-invariant readout function to all the vertices of the graph. The family of MPNNs has been heavily studied in the past few years, and several sophisticated models are now available which can produce expressive graph representations (Xu et al., 2019; Morris et al., 2019; Dehmamy et al., 2019; Morris et al., 2020). Despite the general recent focus on MPNNs, some works have proposed architectures that are not variants of this family of models (Niepert et al., 2016; Maron et al., 2019b;a; Nikolentzos & Vazirgiannis, 2020). The work closest to ours is the one reported in Nikolentzos & Vazirgiannis (2020), which presents the Random Walk Graph Neural Network (RWNN) model. In fact, in this paper, we generalize the RWNN model to compare random walks of infinite length in two graphs. Recently, another method was proposed that uses random walks to extract features which are then processed by a standard convolutional neural network (Toenshoff et al., 2021). However, that approach decouples data representation from learning since random walks are sampled in a preprocessing stage. Our work is also related to implicit models, which have been applied successfully to many problems (de Avila Belbute-Peres et al., 2018; Chen et al., 2018; Amos et al., 2018; Bai et al., 2019). The outputs of these models are determined implicitly by a solution of some underlying sub-problem. Implicit models have also been defined in the context of graph representation learning. For instance, Gu et al. proposed IGNN, a model that seeks the fixed point of some equation, which is equivalent to running an infinite number of message passing iterations (Gu et al., 2020). Thus, the final representation potentially contains information from all neighbors in the graph, capturing long-range dependencies.
Gallicchio and Micheli proposed a similar model which generates graph representations based on the fixed point of a recursive/dynamical system, but which is actually only partially trained (Gallicchio & Micheli, 2020). In contrast to these approaches, whose objective is to apply a large (or infinite) number of message passing layers implicitly, in our setting we employ a fixed-point iteration approach to compute the random walk kernel and then directly differentiate through the fixed-point equations via implicit differentiation. 3 PRELIMINARIES . 3.1 NOTATION . Let [n] = {1, . . . , n} ⊂ N for n ≥ 1. Let G = (V, E) be an undirected graph, where V is the vertex set and E is the edge set. We will denote by n the number of vertices and by m the number of edges. The adjacency matrix A ∈ R^{n×n} of a graph G is a symmetric (typically sparse) matrix used to encode edge information in the graph. The element in the ith row and jth column is equal to the weight of the edge between vertices v_i and v_j if such an edge exists, and 0 otherwise. The degree d(v) of a vertex v is equal to the sum of the weights of the edges that are adjacent to the vertex. For vertex-attributed graphs, every vertex in the graph is associated with a feature vector. We use X ∈ R^{n×d} to denote the vertex features, where d is the feature dimensionality. The feature of a given vertex v_i corresponds to the ith row of X. The direct (tensor) product G× = (V×, E×) of two graphs G = (V, E) and G′ = (V′, E′) is defined as follows: V× = {(v, v′) : v ∈ V, v′ ∈ V′} and E× = {((v, v′), (u, u′)) ∈ V× × V× : (v, u) ∈ E and (v′, u′) ∈ E′}. We denote by A× the adjacency matrix of G×, and by ∆× and d̄× the maximum and average of the vertex degrees of G×, respectively; thus d̄× = (1/|V×|) Σ_{v∈V×} d(v). A walk in a graph is a sequence of vertices such that consecutive vertices are linked by an edge.
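For unlabeled graphs, the direct product construction above corresponds to a Kronecker product of adjacency matrices, and the vec operator defined next enables matrix-vector products with A× without ever forming it. A small numerical check (the vertex-pair ordering of A× is an assumption of the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Direct-product adjacency: for unlabeled graphs, A_x = A (x) A'.
A = np.array([[0, 1], [1, 0]], float)                     # single edge on 2 vertices
Ap = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)   # star on 3 vertices
Ax = np.kron(A, Ap)                                       # adjacency of G_x on 6 vertex pairs

# ((v, v'), (u, u')) is an edge of G_x iff (v, u) in E and (v', u') in E'.
assert Ax[0 * 3 + 1, 1 * 3 + 0] == 1  # edges in both graphs -> edge in G_x
assert Ax[0 * 3 + 1, 0 * 3 + 2] == 0  # first coordinates not adjacent in G

# vec / Kronecker identity used to avoid materializing A_x:
# with column-stacking vec, (B^T (x) A) vec(X) = vec(A X B).
X = rng.normal(size=(2, 3))
B = rng.normal(size=(3, 4))
vec = lambda M: M.reshape(-1, order="F")  # column-stacking vectorization
lhs = np.kron(B.T, A) @ vec(X)
rhs = vec(A @ X @ B)
assert np.allclose(lhs, rhs)
```

Applied with A× = A ⊗ A′, the identity replaces one product with the |V×| × |V×| matrix A× by two small matrix multiplications, which is the usual trick behind efficient random walk kernel computations.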
Performing a random walk on the direct product G× of two graphs G and G′ is equivalent to performing a simultaneous random walk on the two graphs G and G′. We use ⊗ to represent the Kronecker product, and ⊙ to represent elementwise multiplication between two matrices or vectors of the same dimension. For a p × q matrix V, vec(V) ∈ R^{pq} represents the vectorized form of V, obtained by stacking its columns. Let also vec^{−1} denote the inverse vectorization operator which transforms a vector into a matrix, i.e., for a vector v of length pq, V = vec^{−1}(v) where V ∈ R^{p×q} (see the appendix for the exact definition of the vec and vec^{−1} operators). | The paper discusses random walk graph kernels for graph neural network feature generation, as used in other works. They work by generating features for a graph by comparing the random walks present in it against those of a set of trainable hidden graphs. It is stated in this paper that an advantage of random walk kernels is that they are end-to-end differentiable, but current implementations have the weakness of only being efficient for walks of very short length. The authors introduce a way to efficiently extend walks to ‘infinite length’, theoretically making features generated by the kernel more informative. They compare their results with top GNNs and with other graph kernel approaches, such as the mentioned random walk kernels, on graph classification datasets. Out of 10 datasets they achieve the best performance on 2 (1 of the 2 is tied for best), second best on 3, and third best on 1. It should be noted that many placements are well within the margin of error. | SP:2d57311afe29aca850105eedd61b705e17bbab9f |
Geometric Random Walk Graph Neural Networks via Implicit Layers | 1 INTRODUCTION . Recent years have witnessed an enormous growth in the amount of data represented as graphs. Indeed, graphs emerge naturally in several domains, including social networks, bioinformatics, and neuroscience, to name just a few. Besides the increase in the amount of graph-structured data, there is also a growing interest in applying machine learning techniques to data modeled as graphs. Among others, the graph classification and graph regression tasks have attracted a great deal of attention in the past years. These tasks have served as a fundamental building block within applications that deal with problems ranging from drug design (Kearnes et al., 2016) to session-based recommendation (Wu et al., 2019). Graph Neural Networks (GNNs) provide a powerful tool for machine learning on graphs. So far, the field of GNNs has been largely dominated by message passing architectures. Indeed, most of them share the same basic idea and can be reformulated into a single common framework, the so-called message passing neural networks (MPNNs) (Gilmer et al., 2017). These models employ a message passing procedure to aggregate local information of vertices. For graph-related tasks, MPNNs usually apply some permutation-invariant readout function to the vertex representations to produce a representation for the entire graph. The family of MPNNs has been heavily studied in the past few years, and there are now very expressive models available which have achieved state-of-the-art results in several tasks (Xu et al., 2019; Morris et al., 2019). Although the family of MPNNs is perhaps the most successful story in the field of graph representation learning, there exist models that follow different design paradigms and do not fall into this family.
An example of such a model is the recently proposed Random Walk Graph Neural Network (RWNN) (Nikolentzos & Vazirgiannis, 2020). This model contains a number of trainable “hidden graphs”, and it compares the input graphs against these graphs using a random walk kernel which counts the number of common walks in two graphs. The emerging kernel values are fed into a fully-connected neural network which acts as the classifier or regressor. The employed random walk kernel is differentiable, and thus RWNN is end-to-end trainable. However, this kernel considers only random walks of a small length. Such local patterns may fail to capture the overall large-scale shape of the graphs, while several interesting properties of graphs depend on the graph’s global structure. Furthermore, increasing the length of the walks has a direct impact on the model’s computational complexity. In this paper, we propose a novel approach to tackle these challenges. Specifically, we propose a new architecture, called Geometric Random Walk Graph Neural Network (GRWNN), that generalizes the RWNN model such that it can count common walks of infinite length in two graphs. The model contains a number of trainable “hidden graphs”, and it compares the input graphs against these graphs using the geometric random walk kernel. Thus, instead of walks of small length, the proposed model considers walks of infinite length. To compute the kernel, GRWNN uses a fixed-point iteration approach. The kernel values are then passed on to a fully-connected neural network which produces the output. The proposed neural network is end-to-end trainable since we can directly differentiate through the fixed-point equations via implicit differentiation, which leads to a very efficient implementation in terms of memory requirements. Hence, we can still update the “hidden graphs” during training with backpropagation.
We compare the performance of the proposed model to state-of-the-art graph kernels and recently proposed neural architectures on several graph classification datasets. Results show that in most cases, the GRWNN model matches or outperforms competing methods. Our main contributions are summarized as follows: • We propose a novel neural network model, the Geometric Random Walk Graph Neural Network, which employs the geometric random walk kernel to produce graph representations. The model counts common walks of infinite length between the input graph and a set of randomly initialized “hidden graphs”. • We employ an efficient scheme to compute the random walk graph kernel using fixed-point iterations. We show that we can directly differentiate through the fixed-point equations via implicit differentiation, which leads to an efficient implementation. • We evaluate the model's performance on several standard graph classification datasets and show that it achieves results similar, and in some cases superior, to those obtained by recent GNNs and graph kernels. 2 RELATED WORK. Graph kernels have a long history in the field of graph representation learning Kriege et al. (2020). A graph kernel is a kernel function between graphs, i.e., a symmetric positive semidefinite function defined on the space of graphs. These methods implicitly (or explicitly) generate graph representations and enable the application of kernel methods such as the SVM classifier to graphs. Most graph kernels are instances of the R-convolution framework Haussler (1999), and they compare substructures extracted from the graphs to each other. Such substructures include shortest paths Borgwardt & Kriegel (2005), random walks Gärtner et al. (2003); Kashima et al. (2003), small subgraphs Shervashidze et al. (2009), and others. Our work is related to random walk kernels, i.e., kernels that compare random walks to each other.
The first such kernels were proposed by Gärtner et al. (2003) and by Kashima et al. (2003). The work of Kashima et al. was later refined by Mahé et al. (2004). Vishwanathan et al. (2010) and Kang et al. (2012) proposed new algorithms for efficiently computing random walk kernels. These algorithms improve the time complexity of kernel computation. Sugiyama and Borgwardt studied the problem of halting (i.e., longer walks are down-weighted so much that the kernel value is completely dominated by the comparison of walks of length 1) that occurs in random walk kernels, and showed that its extent depends on properties of the graphs being compared Sugiyama & Borgwardt (2015). Zhang et al. defined a different kernel which does not compare random walks to each other, but instead compares the return probabilities of random walks Zhang et al. (2018b). Finally, Kalofolias et al. proposed a variant of the random walk kernel where structurally dissimilar vertices are not just down-weighted, but are not allowed to be visited during the simultaneous walk Kalofolias et al. (2021). Although the first GNNs were proposed several years ago Sperduti & Starita (1997); Scarselli et al. (2009); Micheli (2009), until recently these models had attracted limited attention. In recent years, with the rise of deep learning, a lot of models started to emerge Bruna et al. (2014); Li et al. (2015); Duvenaud et al. (2015); Atwood & Towsley (2016); Defferrard et al. (2016); Lei et al. (2017). Most models update the representation of each vertex by aggregating the feature vectors of its neighbors. This update procedure can be viewed as a form of message passing algorithm and thus, these models are known as message passing neural networks (MPNNs) Gilmer et al. (2017).
To compute a feature vector for the entire graph, MPNNs apply some permutation-invariant readout function to all the vertices of the graph. The family of MPNNs has been heavily studied in the past few years, and several sophisticated models are now available which can produce expressive graph representations Xu et al. (2019); Morris et al. (2019); Dehmamy et al. (2019); Morris et al. (2020). Despite the general recent focus on MPNNs, some works have proposed architectures that are not variants of this family of models Niepert et al. (2016); Maron et al. (2019b;a); Nikolentzos & Vazirgiannis (2020). The work closest to ours is the one reported in Nikolentzos & Vazirgiannis (2020), which presents the Random Walk Graph Neural Network (RWNN) model. In fact, in this paper, we generalize the RWNN model to compare random walks of infinite length in two graphs. Recently, another method was proposed that uses random walks to extract features which are then processed by a standard convolutional neural network Toenshoff et al. (2021). However, that approach decouples data representation from learning, since the random walks are sampled in a preprocessing stage. Our work is also related to implicit models, which have been applied successfully to many problems de Avila Belbute-Peres et al. (2018); Chen et al. (2018); Amos et al. (2018); Bai et al. (2019). The outputs of these models are determined implicitly by a solution of some underlying sub-problem. Implicit models have also been defined in the context of graph representation learning. For instance, Gu et al. proposed IGNN, a model that seeks the fixed point of some equation which is equivalent to running an infinite number of message passing iterations Gu et al. (2020). Thus, the final representation potentially contains information from all neighbors in the graph, capturing long-range dependencies.
Gallicchio and Micheli proposed a similar model which generates graph representations based on the fixed point of a recursive/dynamical system, but is actually only partially trained Gallicchio & Micheli (2020). In contrast to these approaches, whose objective is to apply a large (or infinite) number of message passing layers implicitly, in our setting we employ a fixed-point iteration approach to compute the random walk kernel and then directly differentiate through the fixed-point equations via implicit differentiation. 3 PRELIMINARIES. 3.1 NOTATION. Let [n] = {1, ..., n} ⊂ N for n ≥ 1. Let G = (V, E) be an undirected graph, where V is the vertex set and E is the edge set. We will denote by n the number of vertices and by m the number of edges. The adjacency matrix A ∈ R^{n×n} of a graph G is a symmetric (typically sparse) matrix used to encode edge information in the graph. The element in the ith row and jth column is equal to the weight of the edge between vertices v_i and v_j if such an edge exists, and 0 otherwise. The degree d(v) of a vertex v is equal to the sum of the weights of the edges that are adjacent to the vertex. For vertex-attributed graphs, every vertex in the graph is associated with a feature vector. We use X ∈ R^{n×d} to denote the vertex features, where d is the feature dimensionality. The feature of a given vertex v_i corresponds to the ith row of X. The direct (tensor) product G× = (V×, E×) of two graphs G = (V, E) and G′ = (V′, E′) is defined as follows: V× = {(v, v′) : v ∈ V, v′ ∈ V′} and E× = {((v, v′), (u, u′)) ∈ V× × V× | (v, u) ∈ E and (v′, u′) ∈ E′}. We denote by A× the adjacency matrix of G×, and by ∆× and d̄× the maximum and average of the vertex degrees of G×, respectively. Thus, d̄× = (1/|V×|) Σ_{v∈V×} d(v). A walk in a graph is a sequence of vertices such that consecutive vertices are linked by an edge.
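The direct product construction above can be sketched directly from its definition. The helper below (our own illustration; names are ours) builds A× from the edge condition and checks that it coincides with the Kronecker product A ⊗ A′, which also makes the walk-counting property concrete: entries of A×^l count simultaneous walks of length l in both graphs.

```python
import numpy as np

def direct_product(A1, A2):
    """Adjacency matrix of the direct (tensor) product graph: vertex (v, v')
    maps to index v * n2 + v'; ((v, v'), (u, u')) is an edge iff (v, u) is an
    edge of G and (v', u') is an edge of G'."""
    n1, n2 = A1.shape[0], A2.shape[0]
    Ax = np.zeros((n1 * n2, n1 * n2))
    for v in range(n1):
        for u in range(n1):
            for vp in range(n2):
                for up in range(n2):
                    if A1[v, u] and A2[vp, up]:
                        Ax[v * n2 + vp, u * n2 + up] = 1.0
    return Ax

P3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # path on 3 vertices
K2 = np.array([[0, 1], [1, 0]], dtype=float)                   # single edge
Ax = direct_product(P3, K2)

# Number of common (simultaneous) walks of length 2, counted in the product
# graph and as a product of the per-graph walk counts (they must agree).
common2 = float(np.ones(6) @ np.linalg.matrix_power(Ax, 2) @ np.ones(6))
separate2 = (float(np.ones(3) @ np.linalg.matrix_power(P3, 2) @ np.ones(3))
             * float(np.ones(2) @ np.linalg.matrix_power(K2, 2) @ np.ones(2)))
```

The agreement follows from (A ⊗ A′)^l = A^l ⊗ A′^l, which is exactly why a random walk on G× simulates simultaneous walks on G and G′.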
Performing a random walk on the direct product G× of two graphs G and G′ is equivalent to performing a simultaneous random walk on the two graphs G and G′. We use ⊗ to represent the Kronecker product, and ⊙ to represent elementwise multiplication between two matrices or vectors of the same dimension. For a p × q matrix V, vec(V) ∈ R^{pq} represents the vectorized form of V, obtained by stacking its columns. Let also vec⁻¹ denote the inverse vectorization operator which transforms a vector into a matrix, i.e., for a vector v ∈ R^{pq}, V = vec⁻¹(v) where V ∈ R^{p×q} (see the appendix for the exact definitions of the vec and vec⁻¹ operators). | The paper deals with learning with graphs, specifically graph-level classification. Given a graph G, the main idea of the paper is to compare the graph G to a set of N graphs using the (infinite-length) random walk kernel (Vishwanathan et al. 2010), resulting in a vector with N components. The resulting vector is fed into an MLP performing the classification task. Crucially, the N graphs are learned using end-to-end learning. Further, the authors show how to compute the RW kernel efficiently, building on previous work (Vishwanathan et al. 2010), and how to differentiate through the architecture efficiently, building on (Bai et al. 2019). The paper can be seen as an extension of the RWNN model (Nikolentzos & Vazirgiannis 2020), which uses a finite-length random walk kernel. The methodology is complemented with an experimental study showing promising performance on standard benchmark datasets. | SP:2d57311afe29aca850105eedd61b705e17bbab9f |
| In this paper, the authors proposed an interesting geometric random walk neural network for graph representation. The proposed method can be viewed as an improvement of the random walk neural network in [Nikolentzos & Vazirgiannis, 2020], which learns graph representations by modeling the geometric series of the “product” between observed graphs and some hidden and learnable graphs. As a result, the graph representations are learned by finding fixed points of linear systems. | SP:2d57311afe29aca850105eedd61b705e17bbab9f |
Hybrid Random Features | 1 INTRODUCTION & RELATED WORK. Consider the softmax and Gaussian kernel functions K : R^d × R^d → R defined as follows:

SM(x, y) := exp(x⊤y),  K_gauss(x, y) := exp(−∥x − y∥²₂ / 2).  (1)

These two are prominent examples of functions used in the so-called kernel methods (Gretton et al., 2005; Zhang et al., 2018) and beyond, e.g. in softmax sampling (Blanc & Rendle, 2018). Random features (RFs; Rahimi & Recht, 2007; Liu et al., 2020; Peng et al., 2021) yield a powerful mechanism for linearizing and consequently scaling up kernel methods, with dot-product kernel decompositions disentangling x from y in the formula for the kernel value, K(x, y) ≈ ϕ(x)⊤ϕ(y), via data-agnostic probabilistic (random feature) maps ϕ : R^d → R^m. The tight relationship between the softmax and Gaussian kernels, given by the transformation K_gauss(x, y) = exp(−∥x∥²₂/2) SM(x, y) exp(−∥y∥²₂/2), provides a mapping from any random feature vector ϕ_SM(u) for the softmax kernel to the corresponding one ϕ_gauss(u) = exp(−∥u∥²₂/2) ϕ_SM(u) for the Gaussian kernel; thus we will focus on the former kernel. The classic random feature map mechanism ϕ^trig_m for the softmax kernel, obtained from Bochner's Theorem applied to the Gaussian kernel (Rahimi & Recht, 2007), is of the form:

ϕ^trig_m(u) = (1/√m) exp(∥u∥²₂/2) (sin(ω₁⊤u), ..., sin(ω_m⊤u), cos(ω₁⊤u), ..., cos(ω_m⊤u))⊤,  (2)

where m stands for the number of random features and ω₁, ..., ω_m ~iid N(0, I_d). The above (common in most downstream use-cases) method for linearizing softmax/Gaussian kernels was recently shown to fail in some of the new impactful applications of scalable kernel methods, such as the implicit-attention Transformer architectures called Performers (Choromanski et al., 2021b). We denote by MSE the mean squared error of the estimator (i.e.
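The trigonometric map of Eq. 2 is straightforward to implement and, by construction, gives an unbiased estimate of SM(x, y). The sketch below is our own minimal illustration (function and variable names are ours); with a fixed seed and many features it recovers exp(x⊤y) to within sampling noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi_trig(u, W):
    """Trigonometric random feature map of Eq. 2; rows of W are omega_i ~ N(0, I_d)."""
    m = W.shape[0]
    proj = W @ u
    return np.exp(u @ u / 2) / np.sqrt(m) * np.concatenate([np.sin(proj), np.cos(proj)])

d, m = 8, 200_000
x = rng.normal(size=d) / 4
y = rng.normal(size=d) / 4
W = rng.normal(size=(m, d))

sm_exact = np.exp(x @ y)                   # SM(x, y)
sm_hat = phi_trig(x, W) @ phi_trig(y, W)   # unbiased RF estimate
rel_err = abs(sm_hat - sm_exact) / sm_exact
```

Note that the estimate is a plain dot product of two feature vectors, which is what makes the linearization useful at scale.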
its variance, since all estimators considered in this paper are unbiased). The above mechanism struggles to accurately approximate close-to-zero kernel values, as characterized by a particularly large MSE in that region. This is a crucial problem since most of the entries of the attention matrices in Transformers' models are very small and the approximators need to be particularly accurate there. [∗Equal contribution; correspondence to kchoro@google.com. †Authorship in alphabetical order. ‡Independent researcher; work done during postdoc at Yale University.] Otherwise the renormalizers computed to make attention matrices row-stochastic (the standard attention normalization procedure in Transformers) would be estimated imprecisely, potentially even by negative values. The solution to this problem was presented in the FAVOR+ mechanism (Choromanski et al., 2021b), where a new positive random feature map for unbiased softmax kernel estimation was applied:

ϕ^{++}_m(u) = (1/√(2m)) exp(−∥u∥²₂/2) (exp(ω₁⊤u), ..., exp(ω_m⊤u), exp(−ω₁⊤u), ..., exp(−ω_m⊤u))⊤.  (3)

Even though it is very accurate in estimating small softmax kernel values (which turns out to be crucial in making RFs work for Transformers training), this mechanism is characterized by larger MSE for large kernel values. In several applications of softmax kernels (in particular Transformers, where attention matrices typically admit a sparse combinatorial structure with relatively few but critical large entries and several close-to-zero ones, or softmax sampling), the algorithm needs to process simultaneously very small and large kernel values. The following natural questions arise: Is it possible to get the best of both mechanisms to obtain RF-based estimators particularly accurate for both very small and large softmax kernel values? Furthermore, can those estimators be designed to have low variance in more generally pre-defined regions?
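The contrast between the two mechanisms is easy to demonstrate numerically. In the sketch below (our own illustration; names are ours), x and y are antipodal, so x + y = 0 and the positive estimator of Eq. 3 becomes exact (every feature product collapses to a constant), while the trigonometric estimator of Eq. 2 is wildly noisy and can even go negative at this close-to-zero kernel value.

```python
import numpy as np

rng = np.random.default_rng(1)

def phi_pos(u, W):
    """Positive random feature map of Eq. 3 (FAVOR+)."""
    m = W.shape[0]
    proj = W @ u
    return np.exp(-(u @ u) / 2) / np.sqrt(2 * m) * np.concatenate([np.exp(proj), np.exp(-proj)])

def phi_trig(u, W):
    """Trigonometric random feature map of Eq. 2."""
    m = W.shape[0]
    proj = W @ u
    return np.exp(u @ u / 2) / np.sqrt(m) * np.concatenate([np.sin(proj), np.cos(proj)])

d, m = 8, 64
W = rng.normal(size=(m, d))
x = rng.normal(size=d)
x = 2 * x / np.linalg.norm(x)
y = -x                                   # SM(x, y) = exp(-4), close to zero

sm_exact = np.exp(x @ y)
err_pos = abs(phi_pos(x, W) @ phi_pos(y, W) - sm_exact)
err_trig = abs(phi_trig(x, W) @ phi_trig(y, W) - sm_exact)
```

With x + y = 0 the positive features give exp(ω⊤x)·exp(ω⊤y) = 1 for every ω, so the estimate equals exp(−4) up to floating-point round-off regardless of m.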
We give affirmative answers to both of the above questions by constructing a new class of random feature map techniques called hybrid random features, or HRFs. The theoretical methods used by us to develop HRFs: (a) provide a unifying perspective, where trigonometric random features from Bochner's Theorem and the novel mechanisms proposed in (Choromanski et al., 2021b) are just special corollaries of a more general result from complex analysis, (b) integrate in an original way several other powerful probabilistic techniques such as the Goemans-Williamson method (Goemans & Williamson, 2004) and random features for compositional kernels (Daniely et al., 2017). We provide a detailed theoretical analysis of HRFs, showing in particular that they provide strictly more accurate worst-case softmax kernel estimation than previous algorithms and lead to computational gains. We also conduct a thorough empirical evaluation on tasks ranging from pointwise kernel estimation to downstream problems involving training Transformer models or even end-to-end robotic controller stacks including attention-based architectures. Related Work: The literature on different random feature map mechanisms for Gaussian (and thus also softmax) kernel estimation is voluminous. Most focus has been put on reducing the variance of trigonometric random features from (Rahimi & Recht, 2007) via various Quasi Monte Carlo (QMC) methods, where the directions and/or lengths of the Gaussian vectors used to produce features are correlated, often through geometric conditions such as orthogonality (Choromanski et al., 2017; Rowland et al., 2018; Yu et al., 2016; Choromanski et al., 2019; Choromanski & Sindhwani, 2016). Our HRFs do not compete with those techniques (and can in fact be easily combined with them) since, rather than improving the sampling mechanism for a given approximation algorithm, they provide a completely new algorithm.
The new application of random features for the softmax kernel in Transformers proposed in (Choromanski et al., 2020; 2021b) led to fruitful research on the extensions and limitations of these methods. Schlag et al. (2021) replaced random features by sparse deterministic constructions (no longer approximating the softmax kernel). Luo et al. (2021) observed that combining L2-normalization of queries and keys for variance reduction of softmax kernel estimation with FFT-based implementations of relative position encoding and the FAVOR+ mechanism from Performers helps in training. Trigonometric random features were applied for softmax sampling in (Rawat et al., 2019). Several other techniques such as the Nyström method (Yang et al., 2012; Williams & Seeger, 2000; Rudi et al., 2015) were proposed to construct data-dependent feature representations. Even though, as we show in Sec. 2.4, certain instantiations of the HRF mechanism clearly benefit from some data analysis, our central goal remains an unbiased estimation of the softmax/Gaussian kernels, which is no longer the case for those other techniques. 2 HYBRID RANDOM FEATURES. 2.1 PRELIMINARIES. Whenever we do not say explicitly otherwise, all presented lemmas and theorems are new. We start with the following basic definitions and results. Definition 2.1 (Kernel with a Random Feature Map Representation). We say that a kernel function K : R^d × R^d → R admits a random feature (RF) map representation if it can be written as

K(x, y) = E_{ω∼Ω}[ Σ_{i=1}^{l} ξ_i(x, ω) ξ_i(y, ω) ],  (4)

for some ξ_i : R^d × R^d → R, and where ω is sampled from some probabilistic distribution Ω ∈ P(R^d). The corresponding random feature map, for a given m ∈ N, is defined as

ϕ_m(u) = (1/√m) ϕ^1_m(u) ⋆ ... ⋆ ϕ^l_m(u) ∈ R^{ml},  (5)

where ϕ^i_m(u) = (ξ_i(u, ω₁), ..., ξ_i(u, ω_m))⊤, ⋆ stands for vertical concatenation, and ω₁, ..., ω_m ~iid Ω.
Random feature maps can be used to unbiasedly approximate the corresponding kernels, as follows: K̂(x, y) = ϕ_m(x)⊤ϕ_m(y). (6) Using the above notation, the trigonometric random feature map ϕ^trig_m from Equation 2 can be encoded as applying l = 2, ξ₁(u, ω) = sin(ω⊤u), ξ₂(u, ω) = cos(ω⊤u). Similarly, positive random features can be encoded as taking l = 2 and ξ₁(u, ω) = (1/√2) exp(ω⊤u), ξ₂(u, ω) = (1/√2) exp(−ω⊤u). The following result from (Choromanski et al., 2021b) shows that the mean squared error (MSE) of the trigonometric estimator is small for large softmax kernel values and large for small softmax kernel values, whereas an estimator applying positive random features behaves in the opposite way. Denote by ŜM^trig_m(x, y) an estimator of SM(x, y) for x, y ∈ R^d using trigonometric RFs and ω₁, ..., ω_m ~iid N(0, I_d). Denote by ŜM^{++}_m(x, y) its analogue using positive RFs. We have: Lemma 2.2 (positive versus trigonometric RFs). Take Δ = x − y, z = x + y, f₁(u) = (2m)⁻¹ exp(u²) SM⁻²(x, y), f₂(u) = (2m)⁻¹ exp(u²) SM²(x, y), f₃(u) = (1 − exp(−u²))². The MSEs of these estimators are:

MSE(ŜM^trig_m(x, y)) = f₁(∥z∥₂) f₃(∥Δ∥₂),  MSE(ŜM^{++}_m(x, y)) = f₂(∥z∥₂) f₃(∥z∥₂).  (7)

2.2 THE ALGORITHM. We are ready to present the mechanism of Hybrid Random Features (HRFs). Denote by E = (ŜM^k(x, y))_{k=1}^{p+1} a list of estimators of SM(x, y) (the so-called base estimators) and by Λ = (λ̂^k(x, y))_{k=1}^{p} a list of estimators of {λ^k(x, y)}_{k=1}^{p} for some functions λ^k : R^d × R^d → [0, 1], constructed independently from E. Take the following estimator of SM(x, y):

ŜM_{E,Λ}(x, y) = Σ_{k=1}^{p} λ̂^k(x, y) ŜM^k(x, y) + (1 − Σ_{k=1}^{p} λ̂^k(x, y)) ŜM^{p+1}(x, y)  (8)

In the next section, we explain in detail how the base estimators are chosen.
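The variance formulas in Lemma 2.2 can be sanity-checked by Monte Carlo simulation. The sketch below is our own verification code (a fixed seed, m = 1 so a single feature pair is drawn per trial, and a loose 10% tolerance to absorb sampling noise): the m = 1 trigonometric estimator reduces to exp((∥x∥² + ∥y∥²)/2)·cos(ω⊤Δ) and the m = 1 positive one to exp(−(∥x∥² + ∥y∥²)/2)·cosh(ω⊤z), whose empirical variances are compared with f₁(∥z∥₂)f₃(∥Δ∥₂) and f₂(∥z∥₂)f₃(∥z∥₂) respectively.

```python
import numpy as np

rng = np.random.default_rng(2)
d, trials = 4, 400_000

x = rng.normal(size=d) / 3
y = rng.normal(size=d) / 3
z, delta = x + y, x - y
sm = np.exp(x @ y)
f3 = lambda u: (1.0 - np.exp(-u**2))**2

W = rng.normal(size=(trials, d))        # each row is one omega ~ N(0, I_d)
pre = np.exp((x @ x + y @ y) / 2)

# m = 1 trigonometric estimator: pre * cos(omega^T (x - y))
mse_trig_emp = (pre * np.cos(W @ delta)).var()
mse_trig_thy = 0.5 * np.exp(z @ z) * sm**(-2) * f3(np.linalg.norm(delta))

# m = 1 positive estimator: cosh(omega^T (x + y)) / pre
mse_pos_emp = (np.cosh(W @ z) / pre).var()
mse_pos_thy = 0.5 * np.exp(z @ z) * sm**2 * f3(np.linalg.norm(z))
```

Both empirical variances land within a few percent of the closed forms, consistent with the lemma.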
We call ŜM_{E,Λ}(x, y) a hybrid random feature (HRF) estimator of SM(x, y), parameterized by E, Λ. The role of the λ-coefficients is to dynamically (based on the input (x, y)) prioritize or deprioritize certain estimators, promoting those which are characterized by lower variance for a given input. Note that if the elements of E are unbiased estimators of SM(x, y), then trivially ŜM_{E,Λ}(x, y) is also an unbiased estimator of SM(x, y). Assume that each ŜM^k(x, y) is of the form ŜM^k(x, y) = (ϕ^k_{1,m}(x))⊤ ϕ^k_{2,m}(y) for ϕ^k_{j,m}(u) = (1/√m) ϕ^{1,k}_{j,m}(u) ⋆ ... ⋆ ϕ^{t_k,k}_{j,m}(u), t_k > 0 and ϕ^{1,k}_{j,m}, ..., ϕ^{t_k,k}_{j,m} : R^d → R^m, where j ∈ {1, 2}. Assume also that λ^k(x, y) can be written as:

λ^k(x, y) = a_k + E_{τ∼Ω}[ Σ_{i=1}^{l_k} f^i_{1,k}(x, τ) f^i_{2,k}(y, τ) ]  (9)

for some scalars a_k ∈ R, a distribution Ω ∈ P(R^d) (where P(R^d) stands for the set of probabilistic distributions on R^d), mappings f^i_{j,k} : R^d × R^d → R, and that the corresponding estimator λ̂^k(x, y) = λ̂^k_n(x, y) of λ^k(x, y) is of the form:

λ̂^k_n(x, y) = a_k + (ρ^k_{1,n}(x))⊤ ρ^k_{2,n}(y)  (10)

for ρ^k_{j,n}(u) = (1/√n) ρ^{1,k}_{j,n}(u) ⋆ ... ⋆ ρ^{l_k,k}_{j,n}(u), ρ^{i,k}_{j,n}(u) = (f^i_{j,k}(u, τ₁), ..., f^i_{j,k}(u, τ_n))⊤, τ₁, ..., τ_n ~iid Ω, and where j ∈ {1, 2}. Linearization of the λ-coefficients given by Equation 9 is crucial to obtain linearization of the hybrid estimators, and consequently a random feature map decomposition. We denote a hybrid estimator using m random features for its base estimators and n to approximate the λ-coefficients as ŜM^hyb_{m,n}. Furthermore, for two vectors u, v, we denote their vectorized outer product by u ⊗ v. Finally, we denote vector concatenation by ⋆∏. The estimator ŜM^hyb_{m,n}(x, y) can be rewritten as a dot product of two (hybrid) random feature vectors, as the next lemma shows. Lemma 2.3.
The HRF estimator ŜM^hyb_{m,n}(x, y) satisfies ŜM^hyb_{m,n}(x, y) = Ψ_1(x)⊤ Ψ_2(y), where Ψ_j for j ∈ {1, 2} is given as Ψ_j(z) = Ψ^1_j(z) ⋆ Ψ^2_j(z) ⋆ Ψ^3_j(z) ⋆ Ψ^4_j(z) and: Ψ^1_j(z) = ⋆∏_{k=1,...,p} √(a_k/m) ϕ^{1,k}_{j,m}(z) ⋆ ... ⋆ ϕ^{t_k,k}_{j,m}(z), Ψ^2_j(z) = (1/√(mn)) ⋆∏_{k=1,...,p} ⋆∏_{(e,s) ∈ {1,...,l_k}×{1,...,t_k}} ρ^{e,k}_{j,n}(z) ⊗ ϕ^{s,k}_{j,m}(z), Ψ^3_j(z) = √((1 − Σ_{k=1}^{p} a_k)/m) ϕ^{1,p+1}_{j,m}(z) ⋆ ... ⋆ ϕ^{t_{p+1},p+1}_{j,m}(z), Ψ^4_j(z) = (i/√(mn)) ⋆∏_{k=1,...,p} ⋆∏_{(e,s) ∈ {1,...,l_k}×{1,...,t_{p+1}}} ρ^{e,k}_{j,n}(z) ⊗ ϕ^{s,p+1}_{j,m}(z), (11) where i denotes the imaginary unit. Bipolar estimators: A prominent special case of the general hybrid estimator defined above is the one where E = (ŜM^++(x, y), ŜM^trig(x, y)). Thus consider the following estimator ŜM^hyb_{m,n}: ŜM^hyb_{m,n}(x, y) = λ̂_n(x, y) ŜM^++_m(x, y) + (1 − λ̂_n(x, y)) ŜM^trig_m(x, y). (12) The question arises whether ŜM^hyb_{m,n} defined in such a way can outperform both ŜM^++_m and ŜM^trig_m. That of course also depends on the choice of λ : R^d × R^d → R. If we consider a (common) normalized setting where all input vectors have the same norm, we can rewrite λ(x, y) as λ(θ_{x,y}, r), where θ_{x,y} is the angle between x and y and ∥x∥ = ∥y∥ = r. By our previous analysis we know that ŜM^++_m becomes perfect for θ = π and ŜM^trig_m becomes perfect for θ = 0. That suggests a particularly simple linear dependence of λ on θ to guarantee vanishing variance for both critical values: θ = 0 and θ = π. It remains to show that such a λ-coefficient can be linearized. It turns out that this can be done with a particularly simple random feature map mechanism, as we will show later, leading to the so-called angular hybrid variant (see: Section 2.4).
| The paper proposes a new type of estimator of softmax and Gaussian kernels based on random features, consisting of compositions of base estimators such as the trigonometric and positive random features. The composition is a linear function, such that the new estimator (called hybrid) is unbiased; its coefficients (called $\lambda$-coefficients) are independent of the base estimators and are also kernel functions (to better adapt the estimator to the compared data), hence they also need to be linearized for scalability. Three different instantiations of the hybrid estimator are proposed, using base estimators of the form of the defined complex exponential one (an asymmetric generalization of random feature estimators). In particular, the angular one (i.e. $\lambda$-coefficients depending on the angle between the pair of data points) is shown to have low variance and low maximal relative error for both small and large kernel values. HRFs also have lower computational complexity in the total number of random features w.r.t. non-compositional estimators. Experiments are carried out on multiple applications: first to verify the approximation capability of the hybrid estimator w.r.t. the true kernel and the trigonometric and positive estimators in simple settings, then to show the improvement (either in quality scores or computation) of using the proposed estimator in models requiring the use of softmax at scale. | SP:764df73f9c404fa5adb8559a28b507206f174b87
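To make the bipolar hybrid of Equation 12 concrete, here is a hedged sketch. One simplification relative to the paper: instead of estimating the λ-coefficient with its own random features λ̂_n, we plug in the exact angle-based weight λ = θ_{x,y}/π, which is 0 at θ = 0 (where the trigonometric estimator has zero variance) and 1 at θ = π (where the positive estimator has zero variance); all helper names are illustrative:

```python
import numpy as np

def phi_trig(u, W):
    m = W.shape[0]; p = W @ u
    return np.exp(u @ u / 2) / np.sqrt(m) * np.concatenate([np.sin(p), np.cos(p)])

def phi_pos(u, W):
    m = W.shape[0]; p = W @ u
    return np.exp(-(u @ u) / 2) / np.sqrt(2 * m) * np.concatenate([np.exp(p), np.exp(-p)])

def sm_hybrid(x, y, W):
    # lam = theta / pi: 0 for aligned inputs (trig RFs exact there),
    # 1 for anti-aligned inputs (positive RFs exact there).
    cos_t = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    lam = np.arccos(np.clip(cos_t, -1.0, 1.0)) / np.pi
    sm_pp = phi_pos(x, W) @ phi_pos(y, W)
    sm_tr = phi_trig(x, W) @ phi_trig(y, W)
    return lam * sm_pp + (1.0 - lam) * sm_tr

rng = np.random.default_rng(1)
d, m = 4, 50_000
W = rng.normal(size=(m, d))
x = rng.normal(size=d) * 0.4
y = rng.normal(size=d) * 0.4
est, true = sm_hybrid(x, y, W), np.exp(x @ y)
```

Because the weight here is deterministic, the convex combination of two unbiased base estimators stays unbiased; the paper additionally linearizes the λ-coefficient itself so that the whole estimator factorizes into feature maps.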
Hybrid Random Features
| The paper makes two methodological contributions: a new approach to constructing randomized approximations to Gaussian and softmax kernels and a proposal to combine multiple randomized approximations to create “hybrid random features” (HRFs). The idea of the hybrid random features is to activate a particular approximation for pairs (x,y) for which that approximation is accurate. Some theoretical results support the use of a particular instance of HRFs that combine trigonometric and positive random features, showing the hybrid has lower mean squared error than either type of random feature. Experiments are presented to verify the benefits of HRFs on a variety of tasks, with a focus on their use in neural network settings. | SP:764df73f9c404fa5adb8559a28b507206f174b87 |
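The opposite variance regimes stated in Lemma 2.2 are easy to observe empirically: for unit-norm inputs, trigonometric features are exact at θ = 0 and noisy at θ = π, while positive features behave the other way around. An illustrative NumPy sketch (helper names are ours):

```python
import numpy as np

def phi_trig(u, W):
    m = W.shape[0]; p = W @ u
    return np.exp(u @ u / 2) / np.sqrt(m) * np.concatenate([np.sin(p), np.cos(p)])

def phi_pos(u, W):
    m = W.shape[0]; p = W @ u
    return np.exp(-(u @ u) / 2) / np.sqrt(2 * m) * np.concatenate([np.exp(p), np.exp(-p)])

def empirical_mse(x, y, feature_map, m=64, trials=200, seed=2):
    # Average squared error of the RF estimator over independent feature draws.
    rng = np.random.default_rng(seed)
    true = np.exp(x @ y)
    errs = []
    for _ in range(trials):
        W = rng.normal(size=(m, x.shape[0]))
        errs.append((feature_map(x, W) @ feature_map(y, W) - true) ** 2)
    return float(np.mean(errs))

x = np.array([1.0, 0.0, 0.0, 0.0])
aligned, opposite = x.copy(), -x       # theta = 0 and theta = pi
mse = {("trig", "aligned"): empirical_mse(x, aligned, phi_trig),
       ("pos", "aligned"): empirical_mse(x, aligned, phi_pos),
       ("trig", "opposite"): empirical_mse(x, opposite, phi_trig),
       ("pos", "opposite"): empirical_mse(x, opposite, phi_pos)}
```

With x fixed, the empirical MSE of the trigonometric estimator vanishes for y = x and is largest for y = −x, and vice versa for the positive estimator, matching the f_3(∥∆∥) vs. f_3(∥z∥) factors in Equation 7.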
Hybrid Random Features | Author rebuttal checked. I increase my score to 8.
BTW, I strongly suggest the authors polish this paper, as the current version is not easy to follow, especially for researchers unfamiliar with RFF. ============================================================= This paper proposes hybrid random features that combine the classical trigonometric and positive random features, aiming to achieve good approximation performance for the softmax kernel at both small and large values. This framework is also applicable to Gaussian kernels. Numerical experiments on softmax kernel approximation in linear attention and robotics are conducted to support their algorithm and theory. | SP:764df73f9c404fa5adb8559a28b507206f174b87
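The claim that the framework carries over to Gaussian kernels follows from the identity K_gauss(x, y) = exp(−∥x∥²/2) SM(x, y) exp(−∥y∥²/2): any softmax-kernel feature map yields a Gaussian-kernel one by rescaling each feature vector, ϕ_gauss(u) = exp(−∥u∥²/2) ϕ_SM(u). A small sketch of this mapping (illustrative helper names, trigonometric softmax features as the base):

```python
import numpy as np

def phi_sm_trig(u, W):
    # Trigonometric softmax-kernel features.
    m = W.shape[0]; p = W @ u
    return np.exp(u @ u / 2) / np.sqrt(m) * np.concatenate([np.sin(p), np.cos(p)])

def phi_gauss(u, W):
    # K_gauss(x, y) = exp(-||x||^2/2) SM(x, y) exp(-||y||^2/2),
    # so Gaussian features are just rescaled softmax features.
    return np.exp(-(u @ u) / 2) * phi_sm_trig(u, W)

rng = np.random.default_rng(3)
d, m = 4, 50_000
W = rng.normal(size=(m, d))
x, y = rng.normal(size=d), rng.normal(size=d)
k_true = np.exp(-np.linalg.norm(x - y) ** 2 / 2)
k_est = phi_gauss(x, W) @ phi_gauss(y, W)
```

With the trigonometric base the exp(±∥u∥²/2) factors cancel, recovering the classic random Fourier features for the Gaussian kernel; the same rescaling works for the positive or hybrid softmax features.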
Recursive Disentanglement Network | 1 INTRODUCTION. Recent progress in machine learning demonstrates that the ability to learn disentangled representations is essential to data-efficient learning, such as controllable image generation, image manipulation, and domain adaptation (Suter et al., 2019; Zhu et al., 2018; Peng et al., 2019; Gabbay & Hoshen, 2021; 2019). β-VAE (Higgins et al., 2017) and its variants are the most investigated approaches for disentangled representation learning. Recent works on β-VAE-based methods introduce various inductive biases as regularization terms and directly apply them to the resulting embedding space of deep models, such as the bottleneck capacity constraint (Higgins et al., 2017; Burgess et al., 2018), total correlation among variables (Kim & Mnih, 2018; Chen et al., 2018), and the mismatch between aggregated posterior and prior (Kumar et al., 2017), aiming to balance among representation capacity, independence constraints, and reconstruction accuracy. Indeed, as demonstrated by Locatello et al. (2020; 2019), unsupervised disentanglement is fundamentally impossible without explicit inductive biases on models and data sets. However, our study shows that existing β-VAE-based methods may not be able to learn satisfactory disentangled representations even for fairly trivial cases. This is due to the fact that the feature spaces of deep models have inherently compositional structures, i.e., each complex feature is a composition of primitive features. However, existing methods with regularization terms solely applied to the resulting embedding space cannot effectively propagate disentanglement regularization across such a compositional feature space. As shown in Figure 1, applying the standard β-VAE to the widely used dataset dSprites (Matthey et al.
, 2017 ) , we visualize the resulting representation z , as well as its compositional low-level representations m extracted from the previous layer ( as shown in Figure 1 ( a ) ) , and evaluate the independence between each pair of components of m and each pair of components of z , respectively ( footnote 1 : the independence between two components c_i and c_j is measured by the normalized mutual information ( Chen et al. , 2018 ) ; whenever NMI ( c_i ; c_j ) = I ( c_i ; c_j ) / H ( c ) = 0 , c_i and c_j are independent , i.e. , disentangled ) . Figure 1 ( b ) and Figure 1 ( c ) show that the disentanglement quality of the low-level features m may affect the disentanglement quality of the resulting representation z . This study demonstrates the potential benefit of regularizing the compositional feature space of deep models during disentangled representation learning . This work aims to tackle the compositional disentanglement learning problem . First , we formulate disentangled representation learning from an information-theoretic perspective , and introduce a new learning objective covering three essential properties for learning disentangled representations : sufficiency , minimal sufficiency , and disentanglement . Theoretical analysis shows that the proposed learning objective is a general form of β-VAE and several of its state-of-the-art variants . Next , we extend the proposed learning objective to cover the disentangled representation learning problem in the compositional feature space . Governed by the proposed learning objective , we present the Recursive Disentanglement Network ( RecurD ) , a compositional disentanglement learning method , which directs the disentanglement learning process across the compositional feature space by applying regulatory inductive bias recursively through the feed-forward network . We argue that the recursive propagation of inductive bias through the feed-forward network imposes a sufficient condition for disentangled representation learning .
Empirical studies demonstrate that RecurD outperforms β-VAE ( Higgins et al. , 2017 ) and several other variants of VAE ( Burgess et al. , 2018 ; Kim & Mnih , 2018 ; Chen et al. , 2018 ; Kumar et al. , 2017 ) on disentangled representation learning and achieves more data-efficient learning in downstream machine learning tasks . 2 COMPOSITIONAL DISENTANGLEMENT LEARNING . In this section , we first formulate disentanglement learning from the information-theoretic perspective by introducing three key properties , and show that this formulation is a general form of the optimization objectives of β-VAE and several of its variants . Next , we extend the principled objective to the compositional feature space to tackle the compositional disentanglement learning problem . 2.1 DISENTANGLEMENT LEARNING OBJECTIVE . The challenge of representation learning can be formulated as finding a distribution p ( z|x ) that maps original data x ∈ X into a representation z with a fixed number of variables z = { z1 , . . . , zn } ( Bengio et al. , 2013 ) . The key intuition is that z should capture minimal sufficient information in a disentangled manner , given the reconstruction task x ≈ x̂ . We denote the representation learning process as a Markov chain x̂ → x → z , which means z depends on x̂ only through x , i.e. , p ( z|x ) = p ( z|x , x̂ ) ( see also Cover , 1999 ; Achille & Soatto , 2018 ) . The principled properties of z are defined as follows : Definition 1 . Sufficiency : a representation z of x for x̂ is sufficient if I ( x ; x̂ ) = I ( z ; x̂ ) . For the reconstruction task , z is sufficient if x can be successfully reconstructed from it as x̂ . The difference between I ( x ; x̂ ) and I ( z ; x̂ ) is computed as follows : I ( x ; x̂ ) − I ( z ; x̂ ) = I ( x ; x̂ | z ) = H ( x̂ | z ) − H ( x̂ | x ) . Given the reconstruction task x ≈ x̂ , H ( x̂ | x ) is constant and independent of z , so the sufficiency property can be optimized by minimizing H ( x̂ | z ) ( Federici et al.
, 2020 ) . Definition 2 . Minimal Sufficiency : a representation z of x is minimal sufficient if I ( x ; z ) = I ( z ; x̂ ) . A minimal sufficient z encodes the minimum amount of information about x required to reconstruct x̂ ( Cover , 1999 ; Achille & Soatto , 2018 ) . Since I ( z ; x̂ ) equals I ( x ; x̂ ) when z is sufficient , the difference is computed as I ( x ; z ) − I ( z ; x̂ ) = I ( x ; z ) − I ( x ; x̂ ) . Given the reconstruction task x ≈ x̂ , I ( x ; x̂ ) is constant and independent of z , so the minimal sufficiency property can be optimized by minimizing I ( x ; z ) . Definition 3 . Disentanglement : a representation z = { z1 , . . . , zn } is disentangled if ∑_{i ≠ j} I ( zi ; zj ) = 0 . From the definition of mutual information , I ( zi ; zj ) = H ( zi ) − H ( zi | zj ) denotes the reduction of uncertainty in zi when zj is observed ( Cover , 1999 ) . If any two components zi and zj are disentangled , changes to zi have no influence on zj , which means I ( zi ; zj ) = 0 . A representation satisfying all these properties can be found by introducing two Lagrange multipliers λ1 and λ2 for the two constrained properties , relative to the fundamental sufficiency property . The principled objective of disentanglement learning is to minimize the following : L = H ( x̂ | z ) + λ1 I ( x ; z ) + λ2 ∑_{i ≠ j} I ( zi ; zj ) . ( 1 ) The above objective can be interpreted as the reconstruction error plus two regularizers that yield an optimally disentangled representation . The principled objective also helps us analyze and understand the success of recently developed β-VAE-based methods . These methods operate with an encoder with parameters φ and a decoder with parameters θ , which induce the joint distributions q ( x , z ) = qφ ( z|x ) q ( x ) and p ( x , z ) = pθ ( x|z ) p ( z ) , respectively , where p ( z ) is a fixed prior distribution .
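The pairwise term ∑_{i ≠ j} I ( zi ; zj ) in Equation 1 can be sanity-checked with a simple histogram-based estimator. The sketch below is illustrative only: the function name `pairwise_mi_penalty` and the binning scheme are our own, not from the paper, which would rely on variational bounds in practice.

```python
import numpy as np

def pairwise_mi_penalty(z, bins=16):
    """Histogram estimate (in nats) of sum_{i != j} I(z_i; z_j) for samples z of shape (n, d)."""
    n, d = z.shape
    total = 0.0
    for i in range(d):
        for j in range(d):
            if i == j:
                continue
            joint, _, _ = np.histogram2d(z[:, i], z[:, j], bins=bins)
            joint /= joint.sum()
            pi = joint.sum(axis=1, keepdims=True)   # marginal of z_i
            pj = joint.sum(axis=0, keepdims=True)   # marginal of z_j
            nz = joint > 0
            total += float((joint[nz] * np.log(joint[nz] / (pi @ pj)[nz])).sum())
    return total

rng = np.random.default_rng(0)
z_indep = rng.normal(size=(20000, 3))                        # disentangled latents
z_dep = z_indep.copy()
z_dep[:, 1] = z_dep[:, 0] + 0.1 * rng.normal(size=20000)     # entangle one pair

penalty_indep = pairwise_mi_penalty(z_indep)   # near zero (small positive bias)
penalty_dep = pairwise_mi_penalty(z_dep)       # dominated by the entangled pair
```

Independent latents give a penalty near zero (up to estimator bias), while the correlated pair inflates it sharply, which is exactly what the λ2 term in Equation 1 penalizes.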
The learning objective of β-VAE contains the reconstruction error and the KL divergence between the variational posterior and the prior . To understand the relationship between Equation 1 and the learning objectives of β-VAE-based methods , we decompose I ( x ; z ) ( Kim & Mnih , 2018 ) and estimate an upper bound for ∑_{i ≠ j} I ( zi ; zj ) ( Te Sun , 1980 ; 1975 ) , then assign different weights as follows : λ1 I ( x ; z ) + λ2 ∑_{i ≠ j} I ( zi ; zj ) ≤ λa E_x [ KL ( q ( z|x ) ‖ p ( z ) ) ] + λb KL ( q ( z ) ‖ ∏_{j=1}^{n} q ( zj ) ) + λc ∑_{j=1}^{n} KL ( q ( zj ) ‖ p ( zj ) ) . ( 2 ) As shown in Table 1 , the learning objectives of β-VAE and its four variants can be regarded as specific cases of Equation 1 , i.e. , they assign different weights to our regularization terms , which can balance among latent-variable capacity , independence constraints , and reconstruction accuracy , leading to successful disentangled representation learning ( Zhao et al. , 2017 ; Li et al. , 2020 ) . More details can be found in Appendix B . However , in these works , the inductive bias toward disentanglement is only applied to the embedding space of z , ignoring the need for disentanglement during feature composition in feed-forward networks . 2.2 COMPOSITIONAL OBJECTIVE . Consider an encoder with L layers that encodes original data x into a disentangled representation z . Let us denote by m^l the input features of the l-th layer , which are divided into groups of features , i.e. , m^l = ∪_j m^l_j , where m^l_j is the j-th feature subset . We formulate the compositional relation between features from two consecutive layers as follows : m^{l+1}_j = Layer ( m^l × w^l_j ) . Layer can be any neural network layer , e.g. , the convolution layer commonly used in computer vision tasks . The compositional relation is achieved by a composition matrix w^l ∈ R^{d_l × d_{l+1}} , so that the features in m^l are divided into d_{l+1} groups after passing through all compositional vectors ( the w^l_j ) .
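The compositional relation m^{l+1}_j = Layer ( m^l × w^l_j ) can be sketched as a masked linear map followed by a layer nonlinearity. A minimal sketch, assuming hard block-diagonal grouping and a ReLU as the Layer — both choices are ours for illustration; the paper's actual composition matrix may be learned or soft:

```python
import numpy as np

def group_mask(d_in, d_out, groups):
    """Block-diagonal 0/1 mask w in R^{d_in x d_out}: input group g feeds only output group g."""
    mask = np.zeros((d_in, d_out))
    in_sz, out_sz = d_in // groups, d_out // groups
    for g in range(groups):
        mask[g * in_sz:(g + 1) * in_sz, g * out_sz:(g + 1) * out_sz] = 1.0
    return mask

def grouped_layer(m, weight, mask):
    """m^{l+1} = Layer(m^l x w^l), with ReLU as the Layer; the mask enforces the grouping."""
    return np.maximum(m @ (weight * mask), 0.0)

rng = np.random.default_rng(0)
m = rng.normal(size=(8, 6))          # batch of 8, d_l = 6 input features
w = rng.normal(size=(6, 4))          # dense weights, d_{l+1} = 4 output features
mask = group_mask(6, 4, groups=2)    # 2 groups: inputs {0,1,2}->{0,1}, {3,4,5}->{2,3}
out = grouped_layer(m, w, mask)      # shape (8, 4)
```

Because the mask is block-diagonal, perturbing the first input group leaves the second output group untouched, which is the isolation property the compositional objective relies on.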
Note that m^{l+1}_j is only related to the subset of features from m^l selected by w^l_j . Similar to Section 2.1 , we assume that the learning process of m^{l+1} depends on m^l through the original data x , denoted as the Markov chain m^l → x → m^{l+1} . It can equivalently be written as m^{l+1} → x → m^l according to the conditional independence implied by the Markov property ( Cover , 1999 ) . For two output features m^{l+1}_i , m^{l+1}_j ∈ m^{l+1} and their corresponding input feature subsets m^l_i , m^l_j ∈ m^l , we define the key notions as follows : Definition 4 . Compositional Disentanglement : m^l_i and m^l_j are disentangled if I ( m^l_i ; m^l_j ) = 0 . A disentangled representation of m^l_i and m^l_j may improve the disentanglement quality between m^{l+1}_i and m^{l+1}_j . Similar to Definition 3 , we can achieve compositional disentanglement by minimizing I ( m^l_i ; m^l_j ) . Definition 5 . Compositional Minimal Sufficiency : Assume that the learning process of m^{l+1}_j is denoted by the Markov chain m^{l+1}_j → x → ( m^l_i , m^l_j ) . Given the original data x , an input feature set m^l_j for the output feature m^{l+1}_j is minimal sufficient if I ( x ; m^{l+1}_j ) = I ( m^l_j ; m^{l+1}_j ) . For the output feature m^{l+1}_j , the input feature set m^l_j is sufficient , and the other input feature set m^l_i is superfluous , when m^l_j is able to capture all information of m^{l+1}_j as well as the original data x . Furthermore , according to the Data-Processing Inequality ( DPI ) ( Cover , 1999 ; Achille & Soatto , 2018 ) in the Markov chain , the following inequality holds : I ( x ; m^{l+1}_j ) ≥ I ( m^{l+1}_j ; m^l_i , m^l_j ) = I ( m^{l+1}_j ; m^l_j ) + I ( m^{l+1}_j ; m^l_i | m^l_j ) = I ( m^{l+1}_j ; m^l_j ) + I ( m^{l+1}_j ; m^l_i ) − I ( m^l_i ; m^l_j ) , ( 3 ) where the difference between I ( x ; m^{l+1}_j ) and I ( m^{l+1}_j ; m^l_j ) is lower-bounded by the difference between I ( m^{l+1}_j ; m^l_i ) and I ( m^l_i ; m^l_j ) . Therefore , matching I ( m^{l+1}_j ; m^l_i ) to I ( m^l_i ; m^l_j ) can yield a minimal sufficient representation m^l_j for m^{l+1}_j .
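The middle equality in Equation 3 is the chain rule I ( y ; a , b ) = I ( y ; b ) + I ( y ; a | b ); the final step, replacing I ( y ; a | b ) with I ( y ; a ) − I ( a ; b ), needs extra assumptions and is not checked here. A small numeric check of the chain rule for discrete variables (variable names y, a, b are ours, standing for m^{l+1}_j, m^l_i, m^l_j):

```python
import numpy as np

rng = np.random.default_rng(1)
# Random joint p(y, a, b) over small alphabets.
p = rng.random((3, 4, 4))
p /= p.sum()

def H(q):
    """Shannon entropy in bits of a probability table."""
    q = q[q > 0]
    return float(-(q * np.log2(q)).sum())

p_y = p.sum(axis=(1, 2))     # p(y)
p_b = p.sum(axis=(0, 1))     # p(b)
p_yb = p.sum(axis=1)         # p(y, b)
p_ab = p.sum(axis=0)         # p(a, b)

i_y_ab = H(p_y) + H(p_ab) - H(p)                      # I(y; a, b)
i_y_b = H(p_y) + H(p_b) - H(p_yb)                     # I(y; b)
i_y_a_given_b = H(p_yb) + H(p_ab) - H(p_b) - H(p)     # I(y; a | b)
```

For any joint distribution, i_y_ab equals i_y_b + i_y_a_given_b up to floating-point error, confirming the decomposition used in the derivation.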
Based on the definition of compositional disentanglement , we can optimize the minimal sufficiency by forcing I ( m^{l+1}_j ; m^l_i ) to be 0 . To learn disentangled representations by effectively regularizing the compositional feature space , we augment the principled learning objective ( Equation 1 ) with compositional regularizers . The compositional learning objective for disentangled representation is therefore defined as follows : L = H ( x̂ | m^{L+1} ) [ sufficient ] + λ1 ∑_{l=2}^{L} ∑_{i ≠ j}^{d_{l+1}} I ( m^l_i ; m^{l+1}_j ) [ minimal sufficient ] + λ2 ∑_{l=2}^{L+1} ∑_{i ≠ j}^{d_{l+1}} I ( m^l_i ; m^l_j ) [ disentangled ] , ( 4 ) where m^{L+1} denotes the final disentangled representation z . Our intuition is that disentanglement learning in the compositional feature space can benefit the disentanglement learning of high-level representations . | This paper formulates the disentanglement problem from an information-theoretic perspective, focusing on an objective that encourages a compositionally disentangled feature space in the layers that precede the final latents. With this objective, the authors describe a new method that uses a gated Mixture-of-Experts to implement the compositional disentangled reconstruction objective. Some of the terms require mutual information estimation, for which they use MINE estimators. They run experiments on dSprites and 3DShapes, looking at reconstruction error and different disentanglement metrics, and observe that their method outperforms existing beta-VAE-like baselines, which lack any compositional incentives. They also analyse the loss components across architectures and observe that higher degrees of compositionality in the architecture yield better disentanglement. Finally, they look into ablations of the regularisation pressure and into data efficiency in downstream tasks. | SP:fa4f870df8698e525dcf33e1183afb281509af79
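Equation 4 can be assembled as code once a mutual-information estimator is chosen. The sketch below uses a crude histogram estimator as a stand-in (the review above indicates the paper trains MINE-style neural estimators); `hist_mi`, `compositional_loss`, and the assumption of one scalar feature per group are ours, for illustration only.

```python
import numpy as np

def hist_mi(a, b, bins=16):
    """Crude histogram MI estimate between two 1-D feature columns (stand-in for MINE)."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pa @ pb)[nz])).sum())

def compositional_loss(x, x_hat, layer_feats, lam1=1.0, lam2=1.0):
    """Eq. 4 sketch: reconstruction + minimal-sufficiency + disentanglement penalties.
    layer_feats: list of (n, d_l) arrays, one per layer, one scalar feature per group."""
    recon = float(((x - x_hat) ** 2).mean())               # proxy for H(x_hat | m^{L+1})
    min_suff = sum(hist_mi(lo[:, i], hi[:, j])             # cross-layer I(m^l_i; m^{l+1}_j)
                   for lo, hi in zip(layer_feats[:-1], layer_feats[1:])
                   for i in range(lo.shape[1]) for j in range(hi.shape[1]) if i != j)
    disent = sum(hist_mi(f[:, i], f[:, j])                 # within-layer I(m^l_i; m^l_j)
                 for f in layer_feats
                 for i in range(f.shape[1]) for j in range(f.shape[1]) if i != j)
    return recon + lam1 * min_suff + lam2 * disent

rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 5))
feats = [rng.normal(size=(2000, 3)), rng.normal(size=(2000, 2))]  # m^2 and m^3 = z
loss = compositional_loss(x, x, feats)   # perfect reconstruction, random features
```

All three penalty groups are nonnegative, so the loss is bounded below by the reconstruction error, matching the interpretation of Equation 4 as reconstruction plus two regularizers.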
Recursive Disentanglement Network | 1 INTRODUCTION . Recent progress in machine learning demonstrates that the ability to learn disentangled representations is essential to data-efficient learning , such as controllable image generation , image manipulation , and domain adaptation ( Suter et al. , 2019 ; Zhu et al. , 2018 ; Peng et al. , 2019 ; Gabbay & Hoshen , 2021 ; 2019 ) . β-VAE ( Higgins et al. , 2017 ) and its variants are the most investigated approaches for disentangled representation learning . Recent works on β-VAE-based methods introduce various inductive biases as regularization terms and apply them directly to the resulting embedding space of deep models , such as the bottleneck capacity constraint ( Higgins et al. , 2017 ; Burgess et al. , 2018 ) , total correlation among variables ( Kim & Mnih , 2018 ; Chen et al. , 2018 ) , and the mismatch between aggregated posterior and prior ( Kumar et al. , 2017 ) , aiming to balance representation capacity , independence constraints , and reconstruction accuracy . Indeed , as demonstrated by Locatello et al . ( 2020 ; 2019 ) , unsupervised disentanglement is fundamentally impossible without explicit inductive biases on models and data sets . However , our study shows that existing β-VAE-based methods may fail to learn satisfactory disentangled representations even in fairly trivial cases . This is because the feature spaces of deep models have inherently compositional structures , i.e. , each complex feature is a composition of primitive features , yet existing methods , whose regularization terms are applied solely to the resulting embedding space , cannot effectively propagate the disentanglement regularization across such a compositional feature space . As shown in Figure 1 , applying the standard β-VAE to the widely used dataset dSprites ( Matthey et al.
, 2017 ) , we visualize the resulting representation z , as well as its compositional low-level representations m extracted from the previous layer ( as shown in Figure 1 ( a ) ) , and evaluate the independence between each pair of components of m and each pair of components of z , respectively ( footnote 1 : the independence between two components c_i and c_j is measured by the normalized mutual information ( Chen et al. , 2018 ) ; whenever NMI ( c_i ; c_j ) = I ( c_i ; c_j ) / H ( c ) = 0 , c_i and c_j are independent , i.e. , disentangled ) . Figure 1 ( b ) and Figure 1 ( c ) show that the disentanglement quality of the low-level features m may affect the disentanglement quality of the resulting representation z . This study demonstrates the potential benefit of regularizing the compositional feature space of deep models during disentangled representation learning . This work aims to tackle the compositional disentanglement learning problem . First , we formulate disentangled representation learning from an information-theoretic perspective , and introduce a new learning objective covering three essential properties for learning disentangled representations : sufficiency , minimal sufficiency , and disentanglement . Theoretical analysis shows that the proposed learning objective is a general form of β-VAE and several of its state-of-the-art variants . Next , we extend the proposed learning objective to cover the disentangled representation learning problem in the compositional feature space . Governed by the proposed learning objective , we present the Recursive Disentanglement Network ( RecurD ) , a compositional disentanglement learning method , which directs the disentanglement learning process across the compositional feature space by applying regulatory inductive bias recursively through the feed-forward network . We argue that the recursive propagation of inductive bias through the feed-forward network imposes a sufficient condition for disentangled representation learning .
Empirical studies demonstrate that RecurD outperforms β-VAE ( Higgins et al. , 2017 ) and several other variants of VAE ( Burgess et al. , 2018 ; Kim & Mnih , 2018 ; Chen et al. , 2018 ; Kumar et al. , 2017 ) on disentangled representation learning and achieves more data-efficient learning in downstream machine learning tasks . 2 COMPOSITIONAL DISENTANGLEMENT LEARNING . In this section , we first formulate disentanglement learning from the information-theoretic perspective by introducing three key properties , and show that this formulation is a general form of the optimization objectives of β-VAE and several of its variants . Next , we extend the principled objective to the compositional feature space to tackle the compositional disentanglement learning problem . 2.1 DISENTANGLEMENT LEARNING OBJECTIVE . The challenge of representation learning can be formulated as finding a distribution p ( z|x ) that maps original data x ∈ X into a representation z with a fixed number of variables z = { z1 , . . . , zn } ( Bengio et al. , 2013 ) . The key intuition is that z should capture minimal sufficient information in a disentangled manner , given the reconstruction task x ≈ x̂ . We denote the representation learning process as a Markov chain x̂ → x → z , which means z depends on x̂ only through x , i.e. , p ( z|x ) = p ( z|x , x̂ ) ( see also Cover , 1999 ; Achille & Soatto , 2018 ) . The principled properties of z are defined as follows : Definition 1 . Sufficiency : a representation z of x for x̂ is sufficient if I ( x ; x̂ ) = I ( z ; x̂ ) . For the reconstruction task , z is sufficient if x can be successfully reconstructed from it as x̂ . The difference between I ( x ; x̂ ) and I ( z ; x̂ ) is computed as follows : I ( x ; x̂ ) − I ( z ; x̂ ) = I ( x ; x̂ | z ) = H ( x̂ | z ) − H ( x̂ | x ) . Given the reconstruction task x ≈ x̂ , H ( x̂ | x ) is constant and independent of z , so the sufficiency property can be optimized by minimizing H ( x̂ | z ) ( Federici et al.
, 2020 ) . Definition 2 . Minimal Sufficiency : a representation z of x is minimal sufficient if I ( x ; z ) = I ( z ; x̂ ) . A minimal sufficient z encodes the minimum amount of information about x required to reconstruct x̂ ( Cover , 1999 ; Achille & Soatto , 2018 ) . Since I ( z ; x̂ ) equals I ( x ; x̂ ) when z is sufficient , the difference is computed as I ( x ; z ) − I ( z ; x̂ ) = I ( x ; z ) − I ( x ; x̂ ) . Given the reconstruction task x ≈ x̂ , I ( x ; x̂ ) is constant and independent of z , so the minimal sufficiency property can be optimized by minimizing I ( x ; z ) . Definition 3 . Disentanglement : a representation z = { z1 , . . . , zn } is disentangled if ∑_{i ≠ j} I ( zi ; zj ) = 0 . From the definition of mutual information , I ( zi ; zj ) = H ( zi ) − H ( zi | zj ) denotes the reduction of uncertainty in zi when zj is observed ( Cover , 1999 ) . If any two components zi and zj are disentangled , changes to zi have no influence on zj , which means I ( zi ; zj ) = 0 . A representation satisfying all these properties can be found by introducing two Lagrange multipliers λ1 and λ2 for the two constrained properties , relative to the fundamental sufficiency property . The principled objective of disentanglement learning is to minimize the following : L = H ( x̂ | z ) + λ1 I ( x ; z ) + λ2 ∑_{i ≠ j} I ( zi ; zj ) . ( 1 ) The above objective can be interpreted as the reconstruction error plus two regularizers that yield an optimally disentangled representation . The principled objective also helps us analyze and understand the success of recently developed β-VAE-based methods . These methods operate with an encoder with parameters φ and a decoder with parameters θ , which induce the joint distributions q ( x , z ) = qφ ( z|x ) q ( x ) and p ( x , z ) = pθ ( x|z ) p ( z ) , respectively , where p ( z ) is a fixed prior distribution .
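Definitions 1 and 2 can be checked numerically on a toy discrete chain x̂ ← x → z. The sketch and its helpers (`mutual_info`, `joint_z_xhat`) are illustrative, not from the paper: with a lossless encoder, I ( z ; x̂ ) matches I ( x ; x̂ ) (sufficient), while a lossy encoder falls short.

```python
import numpy as np

def mutual_info(joint):
    """I(A;B) in bits from a joint probability table p(a, b)."""
    joint = joint / joint.sum()
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

# Toy chain x_hat <- x -> z, with x uniform on {0,1,2,3} and x_hat = x.
xs = np.arange(4)
joint_x_xhat = np.eye(4) / 4.0          # p(x, x_hat): perfect reconstruction

def joint_z_xhat(encode):
    """p(z, x_hat) induced by a deterministic encoder z = encode(x)."""
    n_z = len(set(encode(x) for x in xs))
    j = np.zeros((n_z, 4))
    for x in xs:
        j[encode(x), x] += 0.25
    return j

i_x_xhat = mutual_info(joint_x_xhat)                     # H(x) = 2 bits
i_z_lossless = mutual_info(joint_z_xhat(lambda x: x))    # sufficient: 2 bits
i_z_lossy = mutual_info(joint_z_xhat(lambda x: x % 2))   # insufficient: 1 bit
```

The lossless encoder preserves all 2 bits of I ( x ; x̂ ), satisfying Definition 1, while the parity encoder retains only 1 bit and is therefore not sufficient.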
The learning objective of β-VAE contains the reconstruction error and the KL divergence between the variational posterior and the prior . To understand the relationship between Equation 1 and the learning objectives of β-VAE-based methods , we decompose I ( x ; z ) ( Kim & Mnih , 2018 ) and estimate an upper bound for ∑_{i ≠ j} I ( zi ; zj ) ( Te Sun , 1980 ; 1975 ) , then assign different weights as follows : λ1 I ( x ; z ) + λ2 ∑_{i ≠ j} I ( zi ; zj ) ≤ λa E_x [ KL ( q ( z|x ) ‖ p ( z ) ) ] + λb KL ( q ( z ) ‖ ∏_{j=1}^{n} q ( zj ) ) + λc ∑_{j=1}^{n} KL ( q ( zj ) ‖ p ( zj ) ) . ( 2 ) As shown in Table 1 , the learning objectives of β-VAE and its four variants can be regarded as specific cases of Equation 1 , i.e. , they assign different weights to our regularization terms , which can balance among latent-variable capacity , independence constraints , and reconstruction accuracy , leading to successful disentangled representation learning ( Zhao et al. , 2017 ; Li et al. , 2020 ) . More details can be found in Appendix B . However , in these works , the inductive bias toward disentanglement is only applied to the embedding space of z , ignoring the need for disentanglement during feature composition in feed-forward networks . 2.2 COMPOSITIONAL OBJECTIVE . Consider an encoder with L layers that encodes original data x into a disentangled representation z . Let us denote by m^l the input features of the l-th layer , which are divided into groups of features , i.e. , m^l = ∪_j m^l_j , where m^l_j is the j-th feature subset . We formulate the compositional relation between features from two consecutive layers as follows : m^{l+1}_j = Layer ( m^l × w^l_j ) . Layer can be any neural network layer , e.g. , the convolution layer commonly used in computer vision tasks . The compositional relation is achieved by a composition matrix w^l ∈ R^{d_l × d_{l+1}} , so that the features in m^l are divided into d_{l+1} groups after passing through all compositional vectors ( the w^l_j ) .
Note that m^{l+1}_j is only related to the subset of features from m^l selected by w^l_j . Similar to Section 2.1 , we assume that the learning process of m^{l+1} depends on m^l through the original data x , denoted as the Markov chain m^l → x → m^{l+1} . It can equivalently be written as m^{l+1} → x → m^l according to the conditional independence implied by the Markov property ( Cover , 1999 ) . For two output features m^{l+1}_i , m^{l+1}_j ∈ m^{l+1} and their corresponding input feature subsets m^l_i , m^l_j ∈ m^l , we define the key notions as follows : Definition 4 . Compositional Disentanglement : m^l_i and m^l_j are disentangled if I ( m^l_i ; m^l_j ) = 0 . A disentangled representation of m^l_i and m^l_j may improve the disentanglement quality between m^{l+1}_i and m^{l+1}_j . Similar to Definition 3 , we can achieve compositional disentanglement by minimizing I ( m^l_i ; m^l_j ) . Definition 5 . Compositional Minimal Sufficiency : Assume that the learning process of m^{l+1}_j is denoted by the Markov chain m^{l+1}_j → x → ( m^l_i , m^l_j ) . Given the original data x , an input feature set m^l_j for the output feature m^{l+1}_j is minimal sufficient if I ( x ; m^{l+1}_j ) = I ( m^l_j ; m^{l+1}_j ) . For the output feature m^{l+1}_j , the input feature set m^l_j is sufficient , and the other input feature set m^l_i is superfluous , when m^l_j is able to capture all information of m^{l+1}_j as well as the original data x . Furthermore , according to the Data-Processing Inequality ( DPI ) ( Cover , 1999 ; Achille & Soatto , 2018 ) in the Markov chain , the following inequality holds : I ( x ; m^{l+1}_j ) ≥ I ( m^{l+1}_j ; m^l_i , m^l_j ) = I ( m^{l+1}_j ; m^l_j ) + I ( m^{l+1}_j ; m^l_i | m^l_j ) = I ( m^{l+1}_j ; m^l_j ) + I ( m^{l+1}_j ; m^l_i ) − I ( m^l_i ; m^l_j ) , ( 3 ) where the difference between I ( x ; m^{l+1}_j ) and I ( m^{l+1}_j ; m^l_j ) is lower-bounded by the difference between I ( m^{l+1}_j ; m^l_i ) and I ( m^l_i ; m^l_j ) . Therefore , matching I ( m^{l+1}_j ; m^l_i ) to I ( m^l_i ; m^l_j ) can yield a minimal sufficient representation m^l_j for m^{l+1}_j .
Based on the definition of compositional disentanglement , we can optimize the minimal sufficiency by forcing I ( m^{l+1}_j ; m^l_i ) to be 0 . To learn disentangled representations by effectively regularizing the compositional feature space , we augment the principled learning objective ( Equation 1 ) with compositional regularizers . The compositional learning objective for disentangled representation is therefore defined as follows : L = H ( x̂ | m^{L+1} ) [ sufficient ] + λ1 ∑_{l=2}^{L} ∑_{i ≠ j}^{d_{l+1}} I ( m^l_i ; m^{l+1}_j ) [ minimal sufficient ] + λ2 ∑_{l=2}^{L+1} ∑_{i ≠ j}^{d_{l+1}} I ( m^l_i ; m^l_j ) [ disentangled ] , ( 4 ) where m^{L+1} denotes the final disentangled representation z . Our intuition is that disentanglement learning in the compositional feature space can benefit the disentanglement learning of high-level representations . | The paper proposes a new approach for learning disentangled variational autoencoders. In addition to pushing the sufficiency, minimal sufficiency, and disentanglement of the latent representation, the paper proposes to also regularize those properties on earlier features in the network. Experiments demonstrate promising results. | SP:fa4f870df8698e525dcf33e1183afb281509af79
Recursive Disentanglement Network | 1 INTRODUCTION . Recent progress in machine learning demonstrates that the ability to learn disentangled representations is essential to data-efficient learning , such as controllable image generation , image manipulation , and domain adaptation ( Suter et al. , 2019 ; Zhu et al. , 2018 ; Peng et al. , 2019 ; Gabbay & Hoshen , 2021 ; 2019 ) . β-VAE ( Higgins et al. , 2017 ) and its variants are the most investigated approaches for disentangled representation learning . Recent works on β-VAE-based methods introduce various inductive biases as regularization terms and apply them directly to the resulting embedding space of deep models , such as the bottleneck capacity constraint ( Higgins et al. , 2017 ; Burgess et al. , 2018 ) , total correlation among variables ( Kim & Mnih , 2018 ; Chen et al. , 2018 ) , and the mismatch between aggregated posterior and prior ( Kumar et al. , 2017 ) , aiming to balance representation capacity , independence constraints , and reconstruction accuracy . Indeed , as demonstrated by Locatello et al . ( 2020 ; 2019 ) , unsupervised disentanglement is fundamentally impossible without explicit inductive biases on models and data sets . However , our study shows that existing β-VAE-based methods may fail to learn satisfactory disentangled representations even in fairly trivial cases . This is because the feature spaces of deep models have inherently compositional structures , i.e. , each complex feature is a composition of primitive features , yet existing methods , whose regularization terms are applied solely to the resulting embedding space , cannot effectively propagate the disentanglement regularization across such a compositional feature space . As shown in Figure 1 , applying the standard β-VAE to the widely used dataset dSprites ( Matthey et al.
, 2017 ) , we visualize the resulting representation z , as well as its compositional low-level representations m extracted from the previous layer ( as shown in Figure 1 ( a ) ) , and evaluate the independence between each pair of components of m and each pair of components of z , respectively ( footnote 1 : the independence between two components c_i and c_j is measured by the normalized mutual information ( Chen et al. , 2018 ) ; whenever NMI ( c_i ; c_j ) = I ( c_i ; c_j ) / H ( c ) = 0 , c_i and c_j are independent , i.e. , disentangled ) . Figure 1 ( b ) and Figure 1 ( c ) show that the disentanglement quality of the low-level features m may affect the disentanglement quality of the resulting representation z . This study demonstrates the potential benefit of regularizing the compositional feature space of deep models during disentangled representation learning . This work aims to tackle the compositional disentanglement learning problem . First , we formulate disentangled representation learning from an information-theoretic perspective , and introduce a new learning objective covering three essential properties for learning disentangled representations : sufficiency , minimal sufficiency , and disentanglement . Theoretical analysis shows that the proposed learning objective is a general form of β-VAE and several of its state-of-the-art variants . Next , we extend the proposed learning objective to cover the disentangled representation learning problem in the compositional feature space . Governed by the proposed learning objective , we present the Recursive Disentanglement Network ( RecurD ) , a compositional disentanglement learning method , which directs the disentanglement learning process across the compositional feature space by applying regulatory inductive bias recursively through the feed-forward network . We argue that the recursive propagation of inductive bias through the feed-forward network imposes a sufficient condition for disentangled representation learning .
Empirical studies demonstrate that RecurD outperforms β-VAE ( Higgins et al. , 2017 ) and several other variants of VAE ( Burgess et al. , 2018 ; Kim & Mnih , 2018 ; Chen et al. , 2018 ; Kumar et al. , 2017 ) on disentangled representation learning and achieves more data-efficient learning in downstream machine learning tasks . 2 COMPOSITIONAL DISENTANGLEMENT LEARNING . In this section , we first formulate disentanglement learning from the information-theoretic perspective by introducing three key properties , and show that this formulation is a general form of the optimization objectives of β-VAE and several of its variants . Next , we extend the principled objective to the compositional feature space to tackle the compositional disentanglement learning problem . 2.1 DISENTANGLEMENT LEARNING OBJECTIVE . The challenge of representation learning can be formulated as finding a distribution p ( z|x ) that maps original data x ∈ X into a representation z with a fixed number of variables z = { z1 , . . . , zn } ( Bengio et al. , 2013 ) . The key intuition is that z should capture minimal sufficient information in a disentangled manner , given the reconstruction task x ≈ x̂ . We denote the representation learning process as a Markov chain x̂ → x → z , which means z depends on x̂ only through x , i.e. , p ( z|x ) = p ( z|x , x̂ ) ( see also Cover , 1999 ; Achille & Soatto , 2018 ) . The principled properties of z are defined as follows : Definition 1 . Sufficiency : a representation z of x for x̂ is sufficient if I ( x ; x̂ ) = I ( z ; x̂ ) . For the reconstruction task , z is sufficient if x can be successfully reconstructed from it as x̂ . The difference between I ( x ; x̂ ) and I ( z ; x̂ ) is computed as follows : I ( x ; x̂ ) − I ( z ; x̂ ) = I ( x ; x̂ | z ) = H ( x̂ | z ) − H ( x̂ | x ) . Given the reconstruction task x ≈ x̂ , H ( x̂ | x ) is constant and independent of z , so the sufficiency property can be optimized by minimizing H ( x̂ | z ) ( Federici et al.
, 2020 ) . Definition 2 . Minimal Sufficiency : a representation z of x is minimal sufficient if I ( x ; z ) = I ( z ; x̂ ) . A minimal sufficient z encodes the minimum amount of information about x required to reconstruct x̂ ( Cover , 1999 ; Achille & Soatto , 2018 ) . Since I ( z ; x̂ ) equals I ( x ; x̂ ) when z is sufficient , the difference is computed as I ( x ; z ) − I ( z ; x̂ ) = I ( x ; z ) − I ( x ; x̂ ) . Given the reconstruction task x ≈ x̂ , I ( x ; x̂ ) is constant and independent of z , so the minimal sufficiency property can be optimized by minimizing I ( x ; z ) . Definition 3 . Disentanglement : a representation z = { z1 , . . . , zn } is disentangled if ∑_{i ≠ j} I ( zi ; zj ) = 0 . From the definition of mutual information , I ( zi ; zj ) = H ( zi ) − H ( zi | zj ) denotes the reduction of uncertainty in zi when zj is observed ( Cover , 1999 ) . If any two components zi and zj are disentangled , changes to zi have no influence on zj , which means I ( zi ; zj ) = 0 . A representation satisfying all these properties can be found by introducing two Lagrange multipliers λ1 and λ2 for the two constrained properties , relative to the fundamental sufficiency property . The principled objective of disentanglement learning is to minimize the following : L = H ( x̂ | z ) + λ1 I ( x ; z ) + λ2 ∑_{i ≠ j} I ( zi ; zj ) . ( 1 ) The above objective can be interpreted as the reconstruction error plus two regularizers that yield an optimally disentangled representation . The principled objective also helps us analyze and understand the success of recently developed β-VAE-based methods . These methods operate with an encoder with parameters φ and a decoder with parameters θ , which induce the joint distributions q ( x , z ) = qφ ( z|x ) q ( x ) and p ( x , z ) = pθ ( x|z ) p ( z ) , respectively , where p ( z ) is a fixed prior distribution .
The learning objective of β-VAE contains the reconstruction error and the KL divergence between the variational posterior and the prior. To understand the relationship between Equation 1 and the objectives of β-VAE-based methods, we decompose I(x; z) (Kim & Mnih, 2018), estimate an upper bound for ∑_{j≠i} I(zi; zj) (Te Sun, 1980; 1975), and then assign different weights as follows: λ1 I(x; z) + λ2 ∑_{j≠i} I(zi; zj) ≤ λa E_x[KL(q(z|x) ‖ p(z))] + λb KL(q(z) ‖ ∏_{j=1}^n q(zj)) + λc ∑_{j=1}^n KL(q(zj) ‖ p(zj)). (2) As shown in Table 1, the learning objectives of β-VAE and its four variants can be regarded as specific cases of Equation 1: they assign different weights to our regularization terms, which balance latent-variable capacity, independence constraints, and reconstruction accuracy, leading to successful disentangled representation learning (Zhao et al., 2017; Li et al., 2020). More details can be found in Appendix B. However, in these works, the inductive bias toward disentanglement is only applied to the embedding space of z, ignoring the need for disentanglement during feature composition in feed-forward networks. 2.2 COMPOSITIONAL OBJECTIVE. Consider an encoder with L layers that encodes the original data x into a disentangled representation z. Let m^l denote the input features of the l-th layer, which are divided into groups of features, i.e., m^l = ∪_j m^l_j, where m^l_j is the j-th feature subset. We formulate the compositional relation between features of two consecutive layers as m^{l+1}_j = Layer(m^l × w^l_j), where Layer can be any neural network layer, e.g., the convolution layer commonly used in computer vision tasks. The compositional relation is achieved by a composition matrix w^l ∈ R^{d_l × d_{l+1}}, so that the features in m^l are divided into d_{l+1} groups after passing through all compositional vectors (the w^l_j's).
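The first regularizer on the right-hand side of Equation 2, E_x[KL(q(z|x) ‖ p(z))], is the familiar β-VAE KL term. As a minimal sketch, assuming a diagonal-Gaussian encoder and a standard normal prior (the usual VAE parameterization, not spelled out in this excerpt), it has a closed form:

```python
import numpy as np

def kl_diag_gaussian_to_std_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), one value per sample.

    Closed form: 0.5 * sum_j ( exp(log_var_j) + mu_j^2 - 1 - log_var_j ).
    mu, log_var: (batch, n) encoder outputs.
    """
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1)

# KL is zero exactly when q(z|x) already equals the prior N(0, I).
mu = np.zeros((2, 3))
log_var = np.zeros((2, 3))
print(kl_diag_gaussian_to_std_normal(mu, log_var))  # [0. 0.]
```

The λb (total correlation) and λc (dimension-wise KL) terms have no closed form and are typically estimated with minibatch samplers, as in FactorVAE and β-TCVAE.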
Note that m^{l+1}_j is only related to the subset of features from m^l selected by w^l_j. Similar to Section 2.1, we assume that the learning process of m^{l+1} depends on m^l through the original data x, denoted by the Markov chain m^l → x → m^{l+1}; it can equivalently be written as m^{l+1} → x → m^l according to the conditional independence implied by Markovity (Cover, 1999). Consider two output features m^{l+1}_i, m^{l+1}_j ∈ m^{l+1} and their corresponding input feature subsets m^l_i, m^l_j ∈ m^l; we define the key notions as follows. Definition 4. Compositional Disentanglement: m^l_i and m^l_j are disentangled if I(m^l_i; m^l_j) = 0. A disentangled representation of m^l_i and m^l_j may improve the disentanglement quality between m^{l+1}_i and m^{l+1}_j. Similar to Definition 3, we can achieve compositional disentanglement by minimizing I(m^l_i; m^l_j). Definition 5. Compositional Minimal Sufficiency: assume that the learning process of m^{l+1}_j is denoted by the Markov chain m^{l+1}_j → x → (m^l_i, m^l_j). Given the original data x, an input feature set m^l_j for the output feature m^{l+1}_j is minimal sufficient if I(x; m^{l+1}_j) = I(m^l_j; m^{l+1}_j). For the output feature m^{l+1}_j, the input feature set m^l_j is sufficient and the other input feature set m^l_i is superfluous when m^l_j is able to capture all the information about m^{l+1}_j carried by the original data x. Furthermore, according to the Data-Processing Inequality (DPI) (Cover, 1999; Achille & Soatto, 2018) applied to the Markov chain, we have the inequality: I(x; m^{l+1}_j) ≥ I(m^{l+1}_j; m^l_i, m^l_j) = I(m^{l+1}_j; m^l_j) + I(m^{l+1}_j; m^l_i | m^l_j) = I(m^{l+1}_j; m^l_j) + I(m^{l+1}_j; m^l_i) − I(m^l_i; m^l_j), (3) where the difference between I(x; m^{l+1}_j) and I(m^{l+1}_j; m^l_j) is lower-bounded by the difference between I(m^{l+1}_j; m^l_i) and I(m^l_i; m^l_j). Therefore, matching I(m^{l+1}_j; m^l_i) to I(m^l_i; m^l_j) can yield a minimal sufficient representation m^l_j for m^{l+1}_j.
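The chain of steps behind Equation 3 can be written out explicitly. The first line is the data-processing inequality for the Markov chain m_j^{l+1} → x → (m_i^l, m_j^l), the second is the chain rule for mutual information, and the final equality is the identity used by the paper (it relies on the stated Markov structure rather than holding for arbitrary variables):

```latex
% DPI followed by the chain rule for mutual information:
\begin{align}
I(x;\, m_j^{l+1})
  &\ge I(m_j^{l+1};\, m_i^l,\, m_j^l) && \text{(DPI)} \\
  &= I(m_j^{l+1};\, m_j^l) + I(m_j^{l+1};\, m_i^l \mid m_j^l) && \text{(chain rule)} \\
  &= I(m_j^{l+1};\, m_j^l) + I(m_j^{l+1};\, m_i^l) - I(m_i^l;\, m_j^l).
\end{align}
```

Rearranging the last line gives I(x; m_j^{l+1}) − I(m_j^{l+1}; m_j^l) ≥ I(m_j^{l+1}; m_i^l) − I(m_i^l; m_j^l), which is the gap that driving I(m_j^{l+1}; m_i^l) toward I(m_i^l; m_j^l) (and, with disentanglement, toward 0) is meant to close.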
Based on the definition of compositional disentanglement, we can optimize the minimal sufficiency by forcing I(m^{l+1}_j; m^l_i) to be 0. To learn disentangled representations by effectively regularizing the compositional feature space, we augment the principled learning objective (Equation 1) with compositional regularizers. The compositional learning objective for disentangled representation is therefore defined as: L = H(x̂ | m^{L+1}) [sufficient] + λ1 ∑_{l=2}^{L} ∑_{j≠i}^{d_{l+1}} I(m^l_i; m^{l+1}_j) [minimal sufficient] + λ2 ∑_{l=2}^{L+1} ∑_{j≠i}^{d_{l+1}} I(m^l_i; m^l_j) [disentangled], (4) where m^{L+1} denotes the final disentangled representation z. Our intuition is that disentanglement learning in the compositional feature space benefits the disentanglement learning of high-level representations. | The paper presents a VAE variant that disentangles the features of the inference network at every layer, with disentanglement defined in terms of mutual information between features. The approach is implemented as a "recursive disentanglement network" based on a switch network (aka a Mixture-of-Experts gate, introduced in Shazeer 2017 and used in switch transformers). The results on the dSprites and 3DShapes datasets suggest this variant performs better than well-known disentanglement VAE networks (from a few years back) on dSprites and performs well on 3DShapes (though not the best on all measures). In terms of the VAE loss, the approach is presented as a generalization of various other disentangling VAEs. | SP:fa4f870df8698e525dcf33e1183afb281509af79 |
The Power of Contrast for Feature Learning: A Theoretical Analysis | 1 INTRODUCTION. Deep supervised learning has achieved great success in various applications, including computer vision (Krizhevsky et al., 2012), natural language processing (Devlin et al., 2018), and scientific computing (Han et al., 2018). However, its dependence on manually assigned labels, which are usually difficult and costly to obtain, has motivated research into alternative approaches that exploit unlabeled data. Self-supervised learning is a promising approach that leverages the unlabeled data itself as supervision and learns representations that are beneficial to potential downstream tasks. At a high level, there are two common approaches to feature extraction in self-supervised learning: generative and contrastive (Liu et al., 2021). Both approaches aim to learn latent representations of the original data; the difference is that the generative approach focuses on minimizing the reconstruction error from the latent representations, while the contrastive approach aims to decrease the similarity between the representations of contrastive pairs. Recent works have shown the benefits of contrastive learning in practice (Chen et al., 2020a; He et al., 2020; Chen et al., 2020b; c). However, why the contrastive approach outperforms the generative approach remains mysterious. Additionally, recent works aim to further improve contrastive learning by introducing label information. Specifically, Khosla et al. (2020) proposed supervised contrastive learning, where the contrasting procedures are performed across different classes rather than different instances. With the help of label information, their proposed method outperforms self-supervised contrastive learning and classical cross-entropy-based supervised learning. However, despite this improvement on in-domain downstream tasks, Islam et al.
(2021) found that such improvement in transfer learning is limited and can even be negative for supervised contrastive learning. This phenomenon motivates us to rethink the role of labeled data in the contrastive learning framework. In this paper, we first compare contrastive learning with a representative method in the generative approach, the autoencoder. Specifically, we initiate the investigation in the linear representation setting, which has been widely adopted in theory to shed light upon complex machine learning phenomena, such as in Du et al. (2020) and Tripuraneni et al. (2021). We provide a theoretical analysis of their feature learning performance on the spiked covariance model (Bai & Yao, 2012; Yao et al., 2015; Zhang et al., 2018) and theoretically justify why contrastive learning outperforms autoencoders: contrastive learning is able to remove more noise by constructing contrastive samples. Then we investigate the role of label information in the contrastive learning framework and provide a theoretical justification of why labeled data help to gain accuracy in same-domain classification while they can hurt multi-task transfer learning. Related works. The idea of contrastive learning was first proposed in Hadsell et al. (2006) as an effective method to perform dimension reduction. Following this line of research, Dosovitskiy et al. (2014) proposed to perform instance discrimination by creating surrogate classes for each instance, and Wu et al. (2018) further proposed to maintain a memory bank as a dictionary of negative samples. Other extensions based on this memory bank approach include He et al. (2020); Misra & Maaten (2020); Tian et al. (2020); Chen et al. (2020c). Rather than keeping a costly memory bank, another line of work exploits the benefit of mini-batch training, where different samples are treated as negatives for each other (Ye et al., 2019; Chen et al., 2020a). Moreover, Khosla et al.
(2020) explores the supervised version of contrastive learning, where pairs are generated based on label information. Despite its success in practice, the theoretical understanding of contrastive learning is still limited. Previous works provide provable guarantees for contrastive learning under a conditional independence assumption (or its variants) (Arora et al., 2019; Lee et al., 2020; Tosh et al., 2021; Tsai et al., 2020). Specifically, they assume the two contrastive views are independent conditioned on the label and show that contrastive learning can provably learn representations beneficial for downstream tasks. In addition to this line of research, Wang & Isola (2020) and Graf et al. (2021) investigated the representation geometry of the supervised contrastive loss, and HaoChen et al. (2021) provided an analysis via the novel concept of an augmentation graph, with a new loss function that performs spectral decomposition on this graph. Moreover, Wen & Li (2021) considered representation learning under the sparse coding model and studied the optimization properties of shallow ReLU neural networks. Different from all previous works, which aim to show that contrastive learning can learn useful representations, our paper aims to explain why contrastive learning outperforms other representation learning methods and to shed light on the role of labeled data in the contrastive learning framework, which is under-explored in prior works. 2 PRELIMINARIES. Notations. In this paper, we use O, Ω, Θ to hide universal constants, and we write a_k ≲ b_k for two sequences of positive numbers {a_k} and {b_k} if and only if there exists a universal constant C > 0 such that a_k < C b_k for any k. We use ‖·‖, ‖·‖₂, ‖·‖_F to denote the ℓ₂ norm of vectors, the spectral norm of matrices, and the Frobenius norm of matrices, respectively. Let O_{d,r} be the set of d × r orthogonal matrices, i.e., O_{d,r} ≜ {U ∈ R^{d×r} : Uᵀ U = I_r}.
We use |A| to denote the cardinality of a set A. For any n ∈ N+, let [n] = {1, 2, ..., n}. We use ‖sin Θ(U1, U2)‖_F to denote the sine distance between two orthogonal matrices U1, U2 ∈ O_{d,r}, defined by ‖sin Θ(U1, U2)‖_F ≜ ‖U1⊥ᵀ U2‖_F. More properties of the sine distance can be found in Section A.1. We use {e_i}_{i=1}^d to denote the canonical basis of the d-dimensional Euclidean space R^d; that is, e_i is the vector whose i-th coordinate is 1 and all other coordinates are 0. Let I{A} be the indicator function that takes the value 1 when A is true and 0 otherwise. We write a ∨ b and a ∧ b to denote max(a, b) and min(a, b), respectively. 2.1 SETUP. Given an input x ∈ R^d, contrastive learning aims to learn a low-dimensional representation h = f(x; θ) ∈ R^r by contrasting different samples, i.e., maximizing the agreement between positive pairs and minimizing the agreement between negative pairs. Suppose we have n data points X = [x1, x2, ..., xn] ∈ R^{d×n} drawn from the population distribution D. The contrastive learning task can be formulated as the optimization problem: min_θ L(θ) = min_θ (1/n) ∑_{i=1}^n ℓ(x_i, B^Pos_i, B^Neg_i; f(·, θ)) + λ R(θ), (2.1) where ℓ(·) is a contrastive loss and λ R(θ) is a regularization term; B^Pos_i and B^Neg_i are the sets of positive and negative samples corresponding to x_i, which we describe in detail below. Losses and Models. We now present the model setup considered in this paper. (a). Linear representation and regularization term. We consider the linear representation function f(x, W) = Wx, where the parameter θ is a matrix W ∈ R^{r×d}. Since regularization techniques have been widely adopted in contrastive learning practice (Chen et al., 2020a; He et al., 2020; Grill et al.
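The sine distance admits a convenient identity, ‖sin Θ(U1, U2)‖_F² = r − ‖U1ᵀU2‖_F², which avoids constructing the orthogonal complement U_{1⊥} explicitly. A minimal numpy sketch (the function name is illustrative):

```python
import numpy as np

def sine_distance(U1, U2):
    """Frobenius sine distance between the column spaces of U1, U2 in O_{d,r}.

    Uses ||sin Theta(U1, U2)||_F^2 = r - ||U1^T U2||_F^2, so the orthogonal
    complement U1_perp never has to be formed.
    """
    r = U1.shape[1]
    c = U1.T @ U2
    return float(np.sqrt(max(r - np.sum(c ** 2), 0.0)))

d, r = 6, 2
U = np.eye(d)[:, :r]            # span{e1, e2}
V = np.eye(d)[:, r:2 * r]       # span{e3, e4}, orthogonal to U
print(sine_distance(U, U))      # 0.0 (identical subspaces)
print(sine_distance(U, V))      # sqrt(2), the maximum for r = 2
```

Note the distance depends only on the column spaces, so it is invariant to rotations U ← UO, matching the identifiability discussion later in the section.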
, 2020), we further penalize the representation with a regularization term R(W) = ‖WWᵀ‖_F²/2 to encourage the orthogonality of W and therefore promote the diversity of the rows w_i so that they learn different representations. (b). Triplet contrastive loss. The contrastive loss is set to be the average similarity between positive pairs minus that between negative pairs: ℓ(x, B^Pos, B^Neg; f(·, θ)) = − (1/|B^Pos|) ∑_{x^Pos ∈ B^Pos} ⟨f(x, θ), f(x^Pos, θ)⟩ + (1/|B^Neg|) ∑_{x^Neg ∈ B^Neg} ⟨f(x, θ), f(x^Neg, θ)⟩, (2.2) where B^Pos and B^Neg are the sets of positive and negative samples corresponding to x. This loss has been commonly used in contrastive learning (Hadsell et al., 2006) and metric learning (Schroff et al., 2015; He et al., 2018). In Khosla et al. (2020), the authors show that it is an approximation of the NT-Xent contrastive loss, which has been highlighted in recent contrastive learning practice (Sohn, 2016; Wu et al., 2018; Oord et al., 2018; Chen et al., 2020a). (c). Generation of positive and negative pairs. There are two common approaches to generate such pairs, depending on whether or not label information is available. When the label information is not available, the typical strategy is to generate different views of the original data via augmentation (Hadsell et al., 2006; Chen et al., 2020a). Two views of the same data point serve as a positive pair for each other, while views of different data points serve as negative pairs. Definition 2.1 (Augmented pairs generation). Given two augmentation functions g1, g2 : R^d → R^d and n training samples B = {x_i}_{i∈[n]}, the augmented views are given by {(g1(x_i), g2(x_i))}_{i∈[n]}. Then for each view g_v(x_i), v = 1, 2, the corresponding positive and negative samples are defined by B^Pos_{i,v} = {g_s(x_i) : s ∈ [2] \ {v}} and B^Neg_{i,v} = {g_s(x_j) : s ∈ [2], j ∈ [n] \ {i}}.
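A direct transcription of the triplet contrastive loss (2.2) with the linear representation f(x, W) = Wx might look as follows; this is a sketch under the paper's linear setting, not the paper's code:

```python
import numpy as np

def triplet_contrastive_loss(W, x, pos, neg):
    """Loss (2.2): minus the mean positive similarity plus the mean
    negative similarity, with linear features f(x) = W x.

    pos, neg: lists of positive / negative sample vectors for x.
    """
    h = W @ x
    sim_pos = np.mean([h @ (W @ p) for p in pos])
    sim_neg = np.mean([h @ (W @ q) for q in neg])
    return -sim_pos + sim_neg

# Sanity check: with W = I, a positive identical to x and an orthogonal
# negative, the loss is -<x, x> + 0 = -1 for a unit-norm x.
W = np.eye(2)
x = np.array([1.0, 0.0])
print(triplet_contrastive_loss(W, x, [x], [np.array([0.0, 1.0])]))  # -1.0
```

Minimizing this loss pulls positive pairs together and pushes negative pairs apart in the feature space, exactly the trade-off the regularizer R(W) keeps from degenerating.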
The loss function of the self-supervised contrastive learning problem can be written as: L_SelfCon(W) = −(1/2n) ∑_{i=1}^n ∑_{v=1}^2 [ ⟨W g_v(x_i), W g_{[2]\{v}}(x_i)⟩ − ∑_{j≠i} ∑_{s=1}^2 ⟨W g_v(x_i), W g_s(x_j)⟩ / (2n − 2) ] + (λ/2) ‖WWᵀ‖_F². (2.3) In particular, we adopt the following augmentation in our analysis. Definition 2.2 (Random masking augmentation). The two views of the original data are generated by randomly dividing its dimensions into two sets, that is, g1(x_i) = A x_i and g2(x_i) = (I − A) x_i, where A = diag(a_1, ..., a_d) ∈ R^{d×d} is the diagonal masking matrix with {a_i}_{i=1}^d being i.i.d. random variables sampled from a Bernoulli distribution with mean 1/2. A similar augmentation was considered in Wen & Li (2021). However, our primary interest lies in comparing the performance of contrastive learning against autoencoders and analyzing the role of labeled data, while their work focuses on understanding the training process of neural networks in contrastive learning. When the label information is available, Khosla et al. (2020) proposed the following approach to generate pairs. Definition 2.3 (Supervised pairs generation). In a K-class classification problem, given n_k samples for each class k ∈ [K]: {x^k_i : i ∈ [n_k]}_{k=1}^K, and letting n = ∑_{k=1}^K n_k, the corresponding positive and negative samples for x^k_i are defined by B^Pos_{i,k} = {x^k_j : j ∈ [n_k] \ {i}} and B^Neg_{i,k} = {x^s_j : s ∈ [K] \ {k}, j ∈ [n_s]}. That is, the positive samples are the remaining samples in the same class as x^k_i, and the negative samples are the samples from different classes. Correspondingly, the loss function of the supervised contrastive learning problem can be written as: L_SupCon(W) = −(1/nK) ∑_{k=1}^K ∑_{i=1}^n [ ∑_{j≠i} ⟨W x^k_i, W x^k_j⟩ / (n − 1) − ∑_{j=1}^n ∑_{s≠k} ⟨W x^k_i, W x^s_j⟩ / (n(K − 1)) ] + (λ/2) ‖WWᵀ‖_F². (2.4) (d). Spiked covariance model. We consider the following spiked covariance model (Bai & Yao, 2012; Yao et al.
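The random masking augmentation of Definition 2.2 is easy to sketch: a single Bernoulli(1/2) diagonal mask produces two complementary views whose coordinate-wise sum recovers x. The helper name is illustrative:

```python
import numpy as np

def random_masking_views(X, rng):
    """Definition 2.2: split the d coordinates into two complementary views.

    X: (d, n) data matrix. Returns (A X, (I - A) X), where A is a random
    diagonal 0/1 mask keeping each coordinate in view 1 with probability 1/2.
    """
    d = X.shape[0]
    a = rng.integers(0, 2, size=d)            # i.i.d. Bernoulli(1/2) draws
    A = np.diag(a).astype(float)
    return A @ X, (np.eye(d) - A) @ X

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))
V1, V2 = random_masking_views(X, rng)
print(np.allclose(V1 + V2, X))   # True: the two views partition x
print(np.all(V1 * V2 == 0))      # True: the views use disjoint coordinates
```

Because the two views carry disjoint coordinates, the coordinate-wise noise terms of x never appear in both views of a positive pair, which is the mechanism behind the paper's claim that contrast removes noise.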
, 2015; Zhang et al., 2018) to study the power of contrastive learning: x = U*z + ξ, Cov(z) = ν²I_r, Cov(ξ) = Σ, (2.5) where z ∈ R^r and ξ ∈ R^d are both zero-mean sub-Gaussian random variables. In particular, U* ∈ O_{d,r} and Σ = diag(σ1², ..., σd²). The first term U*z represents the signal of interest, residing in the low-dimensional subspace spanned by the columns of U*. The second term ξ is dense noise with heteroskedastic variance. Given that, the ideal low-dimensional representation compresses the observed x onto the subspace spanned by the columns of U*. In this paper, we aim to learn a good projection W ∈ R^{r×d} onto a lower-dimensional subspace from the observations x. Since the information in W is invariant under the transformation W ← OW for any invertible matrix O ∈ R^{r×r}, the essential information of W is contained in its right singular subspace. Thus we quantify the goodness of the representation W by the sine distance ‖sin Θ(U, U*)‖_F, where U spans the top-r right singular subspace of W. | This paper studies the generative power of contrastive learning from a theoretical perspective. To enable the analysis, this paper considers the linear model with random masking augmentation. Training data are assumed to be generated by the spiked covariance model. The main result is that contrastive learning can recover the core features, since the random masking augmentation mitigates the influence of random noise on the diagonal entries of the covariance matrix, while the autoencoder is unable to recover the core features due to the large noise. Then this paper shows that the downstream excess risk can be upper bounded for contrastive learning but cannot be less than a universal constant for an autoencoder.
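A small simulation of the spiked covariance model (2.5) illustrates the target of estimation: with weak noise, the top-r left singular vectors of the data matrix (equivalently, the top eigenvectors of the sample covariance) already align with U*. The parameter values below are arbitrary illustrative choices, and `sine_distance` is a helper, not the paper's code:

```python
import numpy as np

def sine_distance(U1, U2):
    # ||sin Theta(U1, U2)||_F via the identity r - ||U1^T U2||_F^2.
    r = U1.shape[1]
    return float(np.sqrt(max(r - np.sum((U1.T @ U2) ** 2), 0.0)))

rng = np.random.default_rng(0)
d, r, n, nu, sigma = 20, 3, 5000, 3.0, 0.1

# Spiked covariance model (2.5): x = U* z + xi with Gaussian z, xi.
U_star, _ = np.linalg.qr(rng.standard_normal((d, r)))   # U* in O_{d,r}
Z = nu * rng.standard_normal((r, n))
Xi = sigma * rng.standard_normal((d, n))
X = U_star @ Z + Xi                                     # (d, n) sample matrix

# PCA recovery: top-r left singular vectors of X approximate span(U*).
U_hat, _, _ = np.linalg.svd(X, full_matrices=False)
U_hat = U_hat[:, :r]
print(sine_distance(U_hat, U_star))   # small for this noise level, e.g. < 0.1
```

The paper's point is that when the σ_i² are large and heteroskedastic this naive recovery degrades, and the contrastive construction is what keeps the signal subspace estimable.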
Beyond the unsupervised setting, this paper also studies the role of labeled data in supervised contrastive learning and gives the upper bound of sine distance between the supervised contrastive learned model and the true model. | SP:fe8fd7951434a33d616866d3b025534970341327 |
The Power of Contrast for Feature Learning: A Theoretical Analysis | 1 INTRODUCTION . Deep supervised learning has achieved great success in various applications , including computer vision ( Krizhevsky et al. , 2012 ) , natural language processing ( Devlin et al. , 2018 ) , and scientific computing ( Han et al. , 2018 ) . However , its dependence on manually assigned labels , which is usually difficult and costly , has motivated research into alternative approaches to exploit unlabeled data . Self-supervised learning is a promising approach that leverages the unlabeled data itself as supervision and learns representations that are beneficial to potential downstream tasks . At a high level , there are two common approaches for feature extraction in self-supervised learning : generative and contrastive ( Liu et al. , 2021 ) . Both approaches aim to learn latent representations of the original data , while the difference is that the generative approach focused on minimizing the reconstruction error from latent representations , and the contrastive approach targets to decrease the similarity between the representations of contrastive pairs . Recent works have shown the benefits of contrastive learning in practice ( Chen et al. , 2020a ; He et al. , 2020 ; Chen et al. , 2020b ; c ) . However , why the contrastive approach outperforms the generative approach remains mysterious . Additionally , recent works aim to further improve contrastive learning by introducing the label information . Specifically , Khosla et al . ( 2020 ) proposed the supervised contrastive learning , where the contrasting procedures are performed across different classes rather than different instances . With the help of label information , their proposed method outperforms self-supervised contrastive learning and classical cross entropy based supervised learning . However , despite this improvement on in-domain downstream tasks , Islam et al . 
( 2021 ) found that such improvement in transfer learning is limited and even negative for such supervised contrastive learning . This phenomenon motivates us to rethink the role of labeled data in the contrastive learning framework . In this paper , we first compare contrastive learning with a representative method in the generative approach – the autoencoders . Specifically , we initialize the investigation in the linear representation setting , which has been widely adopted in theory to shed light upon complex machine learning phenomena such as in Du et al . ( 2020 ) ; Tripuraneni et al . ( 2021 ) . We provide a theoretical analysis of their feature learning performances on the spiked covariance model ( Bai & Yao , 2012 ; Yao et al. , 2015 ; Zhang et al. , 2018 ) and theoretically justify why contrastive learning outperforms autoencoders—contrastive learning is able to remove more noises by constructing contrastive samples . Then we investigate the role of label information in the contrastive learning framework and provide a theoretical justification of why labeled data help to gain accuracy in same-domain classification while can hurt multi-task transfer learning . Related works The idea of contrastive learning was first proposed in Hadsell et al . ( 2006 ) as an effective method to perform dimension reduction . Following this line of research , Dosovitskiy et al . ( 2014 ) proposed to perform instance discrimination by creating surrogate classes for each instance and Wu et al . ( 2018 ) further proposed to preserve a memory bank as a dictionary of negative samples . Other extensions based on this memory bank approach include He et al . ( 2020 ) ; Misra & Maaten ( 2020 ) ; Tian et al . ( 2020 ) ; Chen et al . ( 2020c ) . Rather than keeping a costly memory bank , another line of works exploits the benefit of mini-batch training where different samples are treated as negative to each other Ye et al . ( 2019 ) ; Chen et al . ( 2020a ) . Moreover , Khosla et al . 
(2020) explores the supervised version of contrastive learning, where pairs are generated based on label information. Despite its success in practice, the theoretical understanding of contrastive learning is still limited. Previous works provide provable guarantees for contrastive learning under a conditional independence assumption (or its variants) (Arora et al., 2019; Lee et al., 2020; Tosh et al., 2021; Tsai et al., 2020). Specifically, they assume the two contrastive views are independent conditioned on the label and show that contrastive learning can provably learn representations beneficial for downstream tasks. In addition to this line of research, Wang & Isola (2020) and Graf et al. (2021) investigated the representation geometry of the supervised contrastive loss, and HaoChen et al. (2021) provided an analysis via the novel concept of an augmentation graph, with a new loss function that performs spectral decomposition on this graph. Moreover, Wen & Li (2021) considered representation learning under the sparse coding model and studied the optimization properties of shallow ReLU neural networks. Different from all previous works, which aim to show that contrastive learning can learn useful representations, our paper aims to explain why contrastive learning outperforms other representation learning methods and to shed light on the role of labeled data in the contrastive learning framework, which is under-explored in prior works. 2 PRELIMINARIES. Notations. In this paper, we use O, Ω, Θ to hide universal constants, and we write a_k ≲ b_k for two sequences of positive numbers {a_k} and {b_k} if and only if there exists a universal constant C > 0 such that a_k < C b_k for any k. We use ‖·‖, ‖·‖₂, ‖·‖_F to denote the ℓ₂ norm of vectors, the spectral norm of matrices, and the Frobenius norm of matrices, respectively. Let O_{d,r} be the set of d × r orthogonal matrices, i.e., O_{d,r} ≜ {U ∈ R^{d×r} : Uᵀ U = I_r}.
We use |A| to denote the cardinality of a set A. For any n ∈ N+, let [n] = {1, 2, ..., n}. We use ‖sin Θ(U1, U2)‖_F to denote the sine distance between two orthogonal matrices U1, U2 ∈ O_{d,r}, defined by ‖sin Θ(U1, U2)‖_F ≜ ‖U1⊥ᵀ U2‖_F. More properties of the sine distance can be found in Section A.1. We use {e_i}_{i=1}^d to denote the canonical basis of the d-dimensional Euclidean space R^d; that is, e_i is the vector whose i-th coordinate is 1 and all other coordinates are 0. Let I{A} be the indicator function that takes the value 1 when A is true and 0 otherwise. We write a ∨ b and a ∧ b to denote max(a, b) and min(a, b), respectively. 2.1 SETUP. Given an input x ∈ R^d, contrastive learning aims to learn a low-dimensional representation h = f(x; θ) ∈ R^r by contrasting different samples, i.e., maximizing the agreement between positive pairs and minimizing the agreement between negative pairs. Suppose we have n data points X = [x1, x2, ..., xn] ∈ R^{d×n} drawn from the population distribution D. The contrastive learning task can be formulated as the optimization problem: min_θ L(θ) = min_θ (1/n) ∑_{i=1}^n ℓ(x_i, B^Pos_i, B^Neg_i; f(·, θ)) + λ R(θ), (2.1) where ℓ(·) is a contrastive loss and λ R(θ) is a regularization term; B^Pos_i and B^Neg_i are the sets of positive and negative samples corresponding to x_i, which we describe in detail below. Losses and Models. We now present the model setup considered in this paper. (a). Linear representation and regularization term. We consider the linear representation function f(x, W) = Wx, where the parameter θ is a matrix W ∈ R^{r×d}. Since regularization techniques have been widely adopted in contrastive learning practice (Chen et al., 2020a; He et al., 2020; Grill et al.
, 2020), we further penalize the representation with a regularization term R(W) = ‖WWᵀ‖_F²/2 to encourage the orthogonality of W and therefore promote the diversity of the rows w_i so that they learn different representations. (b). Triplet contrastive loss. The contrastive loss is set to be the average similarity between positive pairs minus that between negative pairs: ℓ(x, B^Pos, B^Neg; f(·, θ)) = − (1/|B^Pos|) ∑_{x^Pos ∈ B^Pos} ⟨f(x, θ), f(x^Pos, θ)⟩ + (1/|B^Neg|) ∑_{x^Neg ∈ B^Neg} ⟨f(x, θ), f(x^Neg, θ)⟩, (2.2) where B^Pos and B^Neg are the sets of positive and negative samples corresponding to x. This loss has been commonly used in contrastive learning (Hadsell et al., 2006) and metric learning (Schroff et al., 2015; He et al., 2018). In Khosla et al. (2020), the authors show that it is an approximation of the NT-Xent contrastive loss, which has been highlighted in recent contrastive learning practice (Sohn, 2016; Wu et al., 2018; Oord et al., 2018; Chen et al., 2020a). (c). Generation of positive and negative pairs. There are two common approaches to generate such pairs, depending on whether or not label information is available. When the label information is not available, the typical strategy is to generate different views of the original data via augmentation (Hadsell et al., 2006; Chen et al., 2020a). Two views of the same data point serve as a positive pair for each other, while views of different data points serve as negative pairs. Definition 2.1 (Augmented pairs generation). Given two augmentation functions g1, g2 : R^d → R^d and n training samples B = {x_i}_{i∈[n]}, the augmented views are given by {(g1(x_i), g2(x_i))}_{i∈[n]}. Then for each view g_v(x_i), v = 1, 2, the corresponding positive and negative samples are defined by B^Pos_{i,v} = {g_s(x_i) : s ∈ [2] \ {v}} and B^Neg_{i,v} = {g_s(x_j) : s ∈ [2], j ∈ [n] \ {i}}.
The loss function of the self-supervised contrastive learning problem can be written as: L_SelfCon(W) = −(1/2n) ∑_{i=1}^n ∑_{v=1}^2 [ ⟨W g_v(x_i), W g_{[2]\{v}}(x_i)⟩ − ∑_{j≠i} ∑_{s=1}^2 ⟨W g_v(x_i), W g_s(x_j)⟩ / (2n − 2) ] + (λ/2) ‖WWᵀ‖_F². (2.3) In particular, we adopt the following augmentation in our analysis. Definition 2.2 (Random masking augmentation). The two views of the original data are generated by randomly dividing its dimensions into two sets, that is, g1(x_i) = A x_i and g2(x_i) = (I − A) x_i, where A = diag(a_1, ..., a_d) ∈ R^{d×d} is the diagonal masking matrix with {a_i}_{i=1}^d being i.i.d. random variables sampled from a Bernoulli distribution with mean 1/2. A similar augmentation was considered in Wen & Li (2021). However, our primary interest lies in comparing the performance of contrastive learning against autoencoders and analyzing the role of labeled data, while their work focuses on understanding the training process of neural networks in contrastive learning. When the label information is available, Khosla et al. (2020) proposed the following approach to generate pairs. Definition 2.3 (Supervised pairs generation). In a K-class classification problem, given n_k samples for each class k ∈ [K]: {x^k_i : i ∈ [n_k]}_{k=1}^K, and letting n = ∑_{k=1}^K n_k, the corresponding positive and negative samples for x^k_i are defined by B^Pos_{i,k} = {x^k_j : j ∈ [n_k] \ {i}} and B^Neg_{i,k} = {x^s_j : s ∈ [K] \ {k}, j ∈ [n_s]}. That is, the positive samples are the remaining samples in the same class as x^k_i, and the negative samples are the samples from different classes. Correspondingly, the loss function of the supervised contrastive learning problem can be written as: L_SupCon(W) = −(1/nK) ∑_{k=1}^K ∑_{i=1}^n [ ∑_{j≠i} ⟨W x^k_i, W x^k_j⟩ / (n − 1) − ∑_{j=1}^n ∑_{s≠k} ⟨W x^k_i, W x^s_j⟩ / (n(K − 1)) ] + (λ/2) ‖WWᵀ‖_F². (2.4) (d). Spiked covariance model. We consider the following spiked covariance model (Bai & Yao, 2012; Yao et al.
, 2015; Zhang et al., 2018) to study the power of contrastive learning: x = U*z + ξ, Cov(z) = ν²I_r, Cov(ξ) = Σ, (2.5) where z ∈ R^r and ξ ∈ R^d are both zero-mean sub-Gaussian random variables. In particular, U* ∈ O_{d,r} and Σ = diag(σ1², ..., σd²). The first term U*z represents the signal of interest, residing in the low-dimensional subspace spanned by the columns of U*. The second term ξ is dense noise with heteroskedastic variance. Given that, the ideal low-dimensional representation compresses the observed x onto the subspace spanned by the columns of U*. In this paper, we aim to learn a good projection W ∈ R^{r×d} onto a lower-dimensional subspace from the observations x. Since the information in W is invariant under the transformation W ← OW for any invertible matrix O ∈ R^{r×r}, the essential information of W is contained in its right singular subspace. Thus we quantify the goodness of the representation W by the sine distance ‖sin Θ(U, U*)‖_F, where U spans the top-r right singular subspace of W. | This paper performs a theoretical study of various empirical phenomena regarding contrastive learning in a simplified setting, with a linear representation function, a spiked covariance data model, and a linearized version of the contrastive loss. In such a setting, the following results are shown theoretically: (a) contrastive learning with a particular data augmentation can learn much better features than an autoencoder by virtue of learning the underlying low-rank signal as features, (b) supervised contrastive learning can do better than contrastive learning by getting rid of some bias that data augmentation introduces, (c) for transfer learning, a combination of unlabeled contrastive loss and supervised learning can do better than either of them individually. Simulation experiments are used to verify many of these findings. | SP:fe8fd7951434a33d616866d3b025534970341327 |
The Power of Contrast for Feature Learning: A Theoretical Analysis | 1 INTRODUCTION . Deep supervised learning has achieved great success in various applications, including computer vision (Krizhevsky et al., 2012), natural language processing (Devlin et al., 2018), and scientific computing (Han et al., 2018). However, its dependence on manually assigned labels, which are usually difficult and costly to obtain, has motivated research into alternative approaches that exploit unlabeled data. Self-supervised learning is a promising approach that leverages the unlabeled data itself as supervision and learns representations that are beneficial to potential downstream tasks. At a high level, there are two common approaches for feature extraction in self-supervised learning: generative and contrastive (Liu et al., 2021). Both approaches aim to learn latent representations of the original data; the difference is that the generative approach focuses on minimizing the reconstruction error from the latent representations, while the contrastive approach aims to decrease the similarity between the representations of contrastive pairs. Recent works have shown the benefits of contrastive learning in practice (Chen et al., 2020a; He et al., 2020; Chen et al., 2020b;c). However, why the contrastive approach outperforms the generative approach remains mysterious. Additionally, recent works aim to further improve contrastive learning by introducing label information. Specifically, Khosla et al. (2020) proposed supervised contrastive learning, where the contrasting procedures are performed across different classes rather than different instances. With the help of label information, their proposed method outperforms self-supervised contrastive learning and classical cross-entropy-based supervised learning. However, despite this improvement on in-domain downstream tasks, Islam et al.
(2021) found that the improvement from supervised contrastive learning is limited, and can even be negative, in transfer learning. This phenomenon motivates us to rethink the role of labeled data in the contrastive learning framework. In this paper, we first compare contrastive learning with a representative method of the generative approach: the autoencoder. Specifically, we initialize the investigation in the linear representation setting, which has been widely adopted in theory to shed light upon complex machine learning phenomena, as in Du et al. (2020); Tripuraneni et al. (2021). We provide a theoretical analysis of their feature learning performance on the spiked covariance model (Bai & Yao, 2012; Yao et al., 2015; Zhang et al., 2018) and theoretically justify why contrastive learning outperforms autoencoders: contrastive learning is able to remove more noise by constructing contrastive samples. Then we investigate the role of label information in the contrastive learning framework and provide a theoretical justification of why labeled data help to gain accuracy in same-domain classification while they can hurt multi-task transfer learning. Related works The idea of contrastive learning was first proposed in Hadsell et al. (2006) as an effective method to perform dimension reduction. Following this line of research, Dosovitskiy et al. (2014) proposed to perform instance discrimination by creating surrogate classes for each instance, and Wu et al. (2018) further proposed to maintain a memory bank as a dictionary of negative samples. Other extensions based on this memory bank approach include He et al. (2020); Misra & Maaten (2020); Tian et al. (2020); Chen et al. (2020c). Rather than keeping a costly memory bank, another line of work exploits the benefit of mini-batch training where different samples are treated as negatives for each other (Ye et al., 2019; Chen et al., 2020a). Moreover, Khosla et al.
(2020) explored the supervised version of contrastive learning, where pairs are generated based on label information. Despite its success in practice, theoretical understanding of contrastive learning is still limited. Previous works provide provable guarantees for contrastive learning under the conditional independence assumption (or its variants) (Arora et al., 2019; Lee et al., 2020; Tosh et al., 2021; Tsai et al., 2020). Specifically, they assume the two contrastive views are independent conditioned on the label and show that contrastive learning can provably learn representations beneficial for downstream tasks. In addition to this line of research, Wang & Isola (2020); Graf et al. (2021) investigated the representation geometry of the supervised contrastive loss, and HaoChen et al. (2021) provided an analysis via a novel concept of the augmentation graph, with a new loss function that performs spectral decomposition on such a graph. Moreover, Wen & Li (2021) considered representation learning under the sparse coding model and studied the optimization properties of shallow ReLU neural networks. Different from all previous works, which aim to show that contrastive learning can learn useful representations, our paper aims to explain why contrastive learning outperforms other representation learning methods, and also sheds light on the role of labeled data in the contrastive learning framework, which is under-explored in prior works. 2 PRELIMINARIES . Notations In this paper, we use $O, \Omega, \Theta$ to hide universal constants, and we write $a_k \lesssim b_k$ for two sequences of positive numbers $\{a_k\}$ and $\{b_k\}$ if and only if there exists a universal constant $C > 0$ such that $a_k < C b_k$ for every $k$. We use $\|\cdot\|$, $\|\cdot\|_2$, $\|\cdot\|_F$ to denote the $\ell_2$ norm of vectors, the spectral norm of matrices, and the Frobenius norm of matrices, respectively. Let $O_{d,r}$ be the set of $d \times r$ orthogonal matrices, i.e., $O_{d,r} := \{U \in \mathbb{R}^{d\times r} : U^\top U = I_r\}$.
We use $|A|$ to denote the cardinality of a set $A$. For any $n \in \mathbb{N}_+$, let $[n] = \{1, 2, \cdots, n\}$. We use $\|\sin\Theta(U_1, U_2)\|_F$ to refer to the sine distance between two orthogonal matrices $U_1, U_2 \in O_{d,r}$, which is defined by $\|\sin\Theta(U_1, U_2)\|_F := \|U_{1\perp}^\top U_2\|_F$. More properties of the sine distance can be found in Section A.1. We use $\{e_i\}_{i=1}^{d}$ to denote the canonical basis of the $d$-dimensional Euclidean space $\mathbb{R}^d$; that is, $e_i$ is the vector whose $i$-th coordinate is 1 and whose other coordinates are 0. Let $I\{A\}$ be the indicator function that takes the value 1 when $A$ is true and 0 otherwise. We write $a \vee b$ and $a \wedge b$ to denote $\max(a, b)$ and $\min(a, b)$, respectively. 2.1 SETUP . Given an input $x \in \mathbb{R}^d$, contrastive learning aims to learn a low-dimensional representation $h = f(x; \theta) \in \mathbb{R}^r$ by contrasting different samples, i.e., maximizing the agreement between positive pairs and minimizing the agreement between negative pairs. Suppose we have $n$ data points $X = [x_1, x_2, \cdots, x_n] \in \mathbb{R}^{d\times n}$ drawn from the population distribution $D$. The contrastive learning task can be formulated as the optimization problem:

$$\min_{\theta} L(\theta) = \min_{\theta} \frac{1}{n}\sum_{i=1}^{n} \ell\big(x_i, B_i^{\mathrm{Pos}}, B_i^{\mathrm{Neg}}; f(\cdot, \theta)\big) + \lambda R(\theta), \qquad (2.1)$$

where $\ell(\cdot)$ is a contrastive loss and $\lambda R(\theta)$ is a regularization term; $B_i^{\mathrm{Pos}}$ and $B_i^{\mathrm{Neg}}$ are the sets of positive samples and negative samples corresponding to $x_i$, which we describe in detail below. Losses and Models . We now present the model setup considered in this paper. (a). Linear representation and regularization term. We consider the linear representation function $f(x, W) = Wx$, where the parameter $\theta$ is a matrix $W \in \mathbb{R}^{r\times d}$. Since regularization techniques have been widely adopted in contrastive learning practice (Chen et al., 2020a; He et al., 2020; Grill et al.
, 2020), we further consider penalizing the representation by a regularization term $R(W) = \|WW^\top\|_F^2/2$ to encourage the orthogonality of $W$ and therefore promote the diversity of the rows $w_i$, so that they learn different representations. (b). Triplet contrastive loss. The contrastive loss is set to be the average similarity between negative pairs minus that between positive pairs:

$$\ell\big(x, B^{\mathrm{Pos}}, B^{\mathrm{Neg}}; f(\cdot, \theta)\big) = -\frac{1}{|B^{\mathrm{Pos}}|}\sum_{x^{\mathrm{Pos}} \in B^{\mathrm{Pos}}} \big\langle f(x, \theta), f(x^{\mathrm{Pos}}, \theta)\big\rangle + \frac{1}{|B^{\mathrm{Neg}}|}\sum_{x^{\mathrm{Neg}} \in B^{\mathrm{Neg}}} \big\langle f(x, \theta), f(x^{\mathrm{Neg}}, \theta)\big\rangle, \qquad (2.2)$$

where $B^{\mathrm{Pos}}$ and $B^{\mathrm{Neg}}$ are the sets of positive samples and negative samples corresponding to $x$. This loss has been commonly used in contrastive learning (Hadsell et al., 2006) and metric learning (Schroff et al., 2015; He et al., 2018). In Khosla et al. (2020), the authors show that it is an approximation of the NT-Xent contrastive loss, which has been highlighted in recent contrastive learning practice (Sohn, 2016; Wu et al., 2018; Oord et al., 2018; Chen et al., 2020a). (c). Generation of positive and negative pairs. There are two common approaches to generate such pairs, depending on whether or not label information is available. When label information is not available, the typical strategy is to generate different views of the original data via augmentation (Hadsell et al., 2006; Chen et al., 2020a). Two views of the same data point serve as a positive pair for each other, while views of different data points serve as negative pairs. Definition 2.1 (Augmented pairs generation). Given two augmentation functions $g_1, g_2 : \mathbb{R}^d \to \mathbb{R}^d$ and $n$ training samples $B = \{x_i\}_{i\in[n]}$, the augmented views are given by $\{(g_1(x_i), g_2(x_i))\}_{i\in[n]}$. Then for each view $g_v(x_i)$, $v = 1, 2$, the corresponding positive samples and negative samples are defined by $B^{\mathrm{Pos}}_{i,v} = \{g_s(x_i) : s \in [2] \setminus \{v\}\}$ and $B^{\mathrm{Neg}}_{i,v} = \{g_s(x_j) : s \in [2],\, j \in [n] \setminus \{i\}\}$.
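For concreteness, the triplet loss (2.2) under the linear representation can be sketched in a few lines of numpy; the function name and toy vectors below are our own, not the paper's.

```python
import numpy as np

def triplet_contrastive_loss(x, B_pos, B_neg, W):
    """Loss (2.2) for the linear representation f(x, W) = W x:
    minus the average similarity to positives, plus the average
    similarity to negatives."""
    h = W @ x
    pos = np.mean([h @ (W @ xp) for xp in B_pos])
    neg = np.mean([h @ (W @ xn) for xn in B_neg])
    return -pos + neg

# tiny check: with W = I, a positive identical to the unit vector x and an
# orthogonal negative, the loss is -<x, x> + 0 = -1
W = np.eye(2)
x = np.array([1.0, 0.0])
print(triplet_contrastive_loss(x, [x], [np.array([0.0, 1.0])], W))  # -1.0
```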
The loss function of the self-supervised contrastive learning problem can be written as:

$$L_{\mathrm{SelfCon}}(W) = -\frac{1}{2n}\sum_{i=1}^{n}\sum_{v=1}^{2}\Big[\big\langle W g_v(x_i),\, W g_{[2]\setminus\{v\}}(x_i)\big\rangle - \frac{1}{2n-2}\sum_{j\neq i}\sum_{s=1}^{2}\big\langle W g_v(x_i),\, W g_s(x_j)\big\rangle\Big] + \frac{\lambda}{2}\|WW^\top\|_F^2. \qquad (2.3)$$

In particular, we adopt the following augmentation in our analysis. Definition 2.2 (Random masking augmentation). The two views of the original data are generated by randomly dividing its dimensions into two sets, that is, $g_1(x_i) = A x_i$ and $g_2(x_i) = (I - A) x_i$, where $A = \mathrm{diag}(a_1, \cdots, a_d) \in \mathbb{R}^{d\times d}$ is the diagonal masking matrix with $\{a_i\}_{i=1}^{d}$ being i.i.d. random variables sampled from a Bernoulli distribution with mean 1/2. A similar augmentation was considered in Wen & Li (2021). However, our primary interest lies in comparing the performance of contrastive learning against autoencoders and analyzing the role of labeled data, while their work focuses on understanding the training process of neural networks in contrastive learning. When label information is available, Khosla et al. (2020) proposed the following approach to generate pairs. Definition 2.3 (Supervised pairs generation). In a K-class classification problem, given $n_k$ samples for each class $k \in [K]$: $\{x_i^k : i \in [n_k]\}_{k=1}^{K}$, and letting $n = \sum_{k=1}^{K} n_k$, the corresponding positive samples and negative samples for $x_i^k$ are defined by $B^{\mathrm{Pos}}_{i,k} = \{x_j^k : j \in [n_k] \setminus \{i\}\}$ and $B^{\mathrm{Neg}}_{i,k} = \{x_j^s : s \in [K] \setminus \{k\},\, j \in [n_s]\}$. That is, the positive samples are the remaining samples in the same class as $x_i^k$, and the negative samples are the samples from different classes. Correspondingly, the loss function of the supervised contrastive learning problem can be written as:

$$L_{\mathrm{SupCon}}(W) = -\frac{1}{nK}\sum_{k=1}^{K}\sum_{i=1}^{n}\Big[\frac{1}{n-1}\sum_{j\neq i}\big\langle W x_i^k,\, W x_j^k\big\rangle - \frac{1}{n(K-1)}\sum_{j=1}^{n}\sum_{s\neq k}\big\langle W x_i^k,\, W x_j^s\big\rangle\Big] + \frac{\lambda}{2}\|WW^\top\|_F^2. \qquad (2.4)$$

(d). Spiked Covariance Model. We consider the following spiked covariance model (Bai & Yao, 2012; Yao et al.
, 2015; Zhang et al., 2018) to study the power of contrastive learning:

$$x = U^{\star} z + \xi, \qquad \mathrm{Cov}(z) = \nu^2 I_r, \qquad \mathrm{Cov}(\xi) = \Sigma, \qquad (2.5)$$

where $z \in \mathbb{R}^r$ and $\xi \in \mathbb{R}^d$ are both zero-mean sub-Gaussian random vectors. In particular, $U^{\star} \in O_{d,r}$ and $\Sigma = \mathrm{diag}(\sigma_1^2, \cdots, \sigma_d^2)$. The first term $U^{\star} z$ represents the signal of interest, residing in a low-dimensional subspace spanned by the columns of $U^{\star}$. The second term $\xi$ is dense noise with heteroskedastic variance. Given that, the ideal low-dimensional representation compresses the observed $x$ onto the subspace spanned by the columns of $U^{\star}$. In this paper, we aim to learn a good projection $W \in \mathbb{R}^{r\times d}$ onto a lower-dimensional subspace from the observation $x$. Since the information in $W$ is invariant under the transformation $W \leftarrow OW$ for any invertible matrix $O \in \mathbb{R}^{r\times r}$, the essential information of $W$ is contained in its right singular space. Thus we quantify the goodness of the representation $W$ by the sine distance $\|\sin\Theta(U, U^{\star})\|_F$, where $U$ is the top-$r$ right singular subspace of $W$. | This paper presents theoretical results to explain the effectiveness of contrastive learning (CL). For the self-supervised model, the authors prove new sine-distance-based error bounds for the autoencoder (AE) and CL. For the fully-supervised model, a generalization error bound is derived. Numerical experiments validate the theoretical findings. | SP:fe8fd7951434a33d616866d3b025534970341327 |
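Definition 2.2's random masking augmentation is straightforward to simulate; the sketch below (names and values are our own) splits the coordinates of a vector into two complementary views.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_masking_views(x):
    """Definition 2.2: g1(x) = A x and g2(x) = (I - A) x, where A is a
    diagonal 0/1 mask with i.i.d. Bernoulli(1/2) entries."""
    a = rng.integers(0, 2, size=x.shape[0])  # diag(A)
    return a * x, (1 - a) * x

x = np.arange(1.0, 7.0)
v1, v2 = random_masking_views(x)
print(v1 + v2)  # the two views partition the coordinates, so v1 + v2 == x
```

Each coordinate lands in exactly one view, so the views are complementary: their sum recovers the original vector and their elementwise product is zero.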
Synchromesh: Reliable Code Generation from Pre-trained Language Models | 1 INTRODUCTION . Large language models ( LLMs ) trained on massive corpora of unsupervised data have been shown to perform a wide range of tasks , including natural language generation , semantic parsing and sentiment analysis ( Brown et al. , 2020 ; Devlin et al. , 2019 ; Raffel et al. , 2020 ) . This can be achieved without task-specific training , but rather by adapting the model to each task at test-time using textual prompts , which can contain examples and natural language descriptions . In many cases , this methodology was shown to provide good performance , reducing the need to annotate large datasets for each task of interest ( Brown et al. , 2020 ; Shin et al. , 2021 ) . An important application of LLMs is in synthesizing programs from natural language descriptions ( Austin et al. , 2021 ; Chen et al. , 2021 ) . But this task is still challenging for LLMs . First , they can commit conceptual errors , generating code that misses the intent behind the given description . For example , when asked to reverse an array , the model might generate code that simply swaps the first and last elements . Indeed , users of current natural language-to-code systems report that models often produce code that is unrelated to their query ( Xu et al. , 2021 ) . Even when they capture the right intent , LLMs can still make implementation errors : the generated code can fail to execute . For reversing an array , a model might generate a loop with the correct structure but with an off-by-one error , causing a runtime exception . These errors are common even with very large models . For example , Austin et al . ( 2021 ) tested models with up to 137B parameters on generating short Python programs from natural language . Still , 47 % of the failures were due to syntax , typing or run-time errors ( as opposed to running but producing incorrect output ) . 
This is in line with theoretical results in Merrill et al. (2021) showing that programming language semantics cannot be fully inferred from ungrounded data. Together, both observations suggest that simply scaling up LLMs might be ineffective for obtaining reliable performance, especially for longer programs. In this paper, we address both conceptual and implementation errors with SYNCHROMESH, a framework for reliable code generation from pre-trained models. Since LLMs are highly sensitive to which few-shot examples are given in their prompt, we propose Target Similarity Tuning (TST): a method for dynamically selecting semantically relevant examples for a given description. TST mitigates conceptual errors by learning to select examples with similar intent, even when their natural language descriptions seem unrelated in form. Given relevant examples, we then generate programs with Constrained Semantic Decoding (CSD), a novel method for enforcing rich syntactic and semantic constraints during code generation on top of a frozen language model. Rich language-specific constraints, ranging from syntax validity to scoping and type-checking, can be implemented under the simple abstraction of completion engines (CEs). CSD aligns these constraints with the language model's token vocabulary by leveraging Brzozowski language derivatives (Brzozowski, 1964). This guarantees that all sampled programs satisfy the implemented constraints, preventing whole classes of implementation errors by construction. The pipeline is illustrated in Figure 1. We demonstrate the generality of SYNCHROMESH in three real-world languages: SQL (database queries), Vega-Lite (data visualization) and SMCalFlow (calendar applications). In experiments with GPT-3 and Codex, we observe that SYNCHROMESH can eliminate whole classes of errors that make outputs from unconstrained models either fail to execute or produce trivial results (e.g., empty charts).
Furthermore , eliminating invalid programs consistently improves prediction accuracy . In summary , we make the following contributions : • We propose Target Similarity Tuning for selecting few-shot examples based on the similarity of the programs they describe , improving relevance and downstream performance . • We introduce completion engines as an abstraction that can implement rich classes of semantic program constraints , as we demonstrate in SQL , Vega-Lite and SMCalFlow . • We introduce a general , constraint-observing decoding algorithm , which aligns programming language constraints with the language model ’ s token vocabulary . • We evaluate our method in three natural language-to-code tasks . CSD and TST both show strong complementary gains in output validity and prediction accuracy across domains . 2 TARGET SIMILARITY TUNING . In this section , we first overview the challenge posed by conceptual errors in programs synthesized by LLMs . We then introduce TST , which improves performance through more relevant example selection . Throughout , we will use a real example of synthesizing a SQL database query to answer a question posed in natural language . Suppose a data analyst has a relational database of airports and wants to answer the following question : “ Which city has the highest number of airports ? ” One procedure for turning this description into a SQL query is to use an LLM such as GPT-3 ( Brown et al. , 2020 ) or Codex ( Chen et al. , 2021 ) . To prompt the model for the task at hand , we would feed it with a natural language description of the task and a selection of input-output examples . Given the analyst ’ s question , how do we select the most relevant examples from a training pool ? Liu et al . ( 2021a ) proposed to retrieve examples with similar natural language descriptions using a pre-trained paraphrase detection model . Figure 2a shows the most similar example from the Spider natural language-to-SQL dataset ( Yu et al. 
, 2018) according to Sentence-BERT (Reimers & Gurevych, 2019). The query “Which city has the highest elevation?” is similar on a surface level: it also asks “Which city has the highest?”. This training query asks about “elevation”, a property that is readily available as a column in the Airports table. Figure 2b shows GPT-3's output when given this and a few other examples. The model attempts to mimic the top example, referring to a nonexistent column “NumberOfAirports”. The issue is that we picked the example in the prompt based on description similarity and not SQL query similarity. In fact, the SQL query in the chosen example had a simplistic structure that was significantly different from the structure of the desired SQL query, and this contributed to the failure at Point (b) in Figure 2. We want to retrieve examples that have relevant program structures for the test query. We do so using our fine-tuning scheme called Target Similarity Tuning (TST). Formally, suppose $D$ is a dataset of programs and associated utterances, with $D_i = (p_i, u_i)$. Let $S(p_a, p_b) \in [0, 1]$ denote a normalized similarity metric between programs. If $f_\theta$ is a pre-trained similarity model for natural language sentences, TST consists in fine-tuning $f_\theta$ to predict the similarity between target programs given by $S$ from their descriptions. Precisely, we minimize the mean-squared-error loss:

$$L_{\mathrm{TST}}(\theta) := \mathbb{E}_{i,j\sim D}\big[\big(f_\theta(u_i, u_j) - S(p_i, p_j)\big)^2\big].$$

We define $S$ using the classical tree edit distance algorithm from Zhang & Shasha (1989) to compare Abstract Syntax Trees (ASTs). Figure 2c shows GPT-3's output when given examples selected with TST. Now, the output query is correct: it performs a “group by” on the “City” column, and sorts by the count of records in each group. This structure was already present in the top example selected by TST, corresponding to “Return the team with the most technicians”.
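The TST objective can be sketched as follows. The helper similarity functions are toy stand-ins of our own: in the paper, f_theta is a fine-tuned sentence-pair encoder and S is derived from AST tree edit distance, neither of which is reimplemented here.

```python
def tst_pair_loss(f_theta, S, u_i, u_j, p_i, p_j):
    """One term of the TST objective: squared error between the utterance
    similarity predicted by f_theta and the target program similarity S.
    f_theta and S are assumed to be callables returning values in [0, 1]."""
    return (f_theta(u_i, u_j) - S(p_i, p_j)) ** 2

# toy stand-ins (ours, not the paper's): word-overlap similarity for
# utterances, exact-match similarity for programs
def jaccard(a, b):
    A, B = set(a.split()), set(b.split())
    return len(A & B) / len(A | B)

loss = tst_pair_loss(jaccard, lambda p, q: float(p == q),
                     "highest number of airports",
                     "highest elevation",
                     "SELECT ... GROUP BY City", "SELECT MAX(Elevation)")
print(loss)  # (0.2 - 0.0)^2 = 0.04
```

Fine-tuning then drives f_theta toward predicting program similarity from utterances alone, so that retrieval by utterance similarity approximates retrieval by program-structure similarity.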
Even if the analyst ’ s question and this utterance are drastically different in natural language , they share similarity in the SQL query that they describe . The TST objective is able to properly capture this fact . As our experiments show in Section 4 , TST significantly boosts the performance of both GPT-3 and Codex . 3 CONSTRAINED SEMANTIC DECODING . We now present Constrained Semantic Decoding ( CSD ) as an approach to eliminate implementation errors from code generated by LLMs . We first illustrate CSD with an example , and then formalize it using the abstraction of CEs . The example in Figure 2 showed that TST can help LLMs generate the correct program . In general , however , TST only helps LLMs by guiding toward the correct structure , but the model still needs to fill all the specific implementation details correctly . Figure 3 shows a case where the model can not simply adapt one example from the prompt . Here , the user ’ s query is “ Which city has the highest number of departing flights ? ” This query is similar to the previous one – in fact , TST retrieves the same top-1 example as before . But now the correct SQL query needs to join the “ Airports ” and “ Flights ” tables . GPT-3 generates the join condition Flights.AirportCode = Airports.SourceAirport , but this condition has a subtle error : the column names of the two tables are swapped . Thus , this query fails to execute . In general , unconstrained language models often make such implementation errors : using undeclared variables , losing track of nesting levels when producing complex expressions , or calling functions using arguments of the wrong type . Even the smallest of such errors prevents generated code from executing . CSD prevents implementation errors by construction ( as opposed to repairing after-the-fact ) . 
Imagine we have access to an oracle, which we call a CE, that can take a partial program and return all tokens that can extend that partial program toward a complete correct program. When the LLM is generating the program token by token, CSD ensures that the next token is sampled from the set returned by the CE. In Figure 3, after generating “T1.” inside the “on” clause, our SQL CE resolves the alias and constrains the model to output one of the columns from the “Flights” table. This fixes the error seen previously during generation and produces the correct SQL query. 3.1 COMPLETION ENGINES . We now formally define CEs. Let $\Sigma$ be a base alphabet, and $\Sigma_L \subseteq \Sigma^*$ be the (potentially infinite) set of tokens of the target language. Our goal is to sample programs from a language $L \subseteq \Sigma_L^*$ – the set of valid programs. A CE $C_L$ is a partial function from $\Sigma_L^*$ to a set of tokens. We use a regular expression over $\Sigma$ to represent a set of tokens. The strings in the domain of $C_L$ are called completion points, and a CE satisfies the following axioms: (A1) The empty string and every $p \in L$ must be completion points. For every $p \in L$, $C_L(p) = r'\$'$, where $r'\$'$ is the regular expression that matches the stop token. (A2) If $s \in \Sigma_L^*$ is a completion point and $t$ fully matches $C_L(s)$, then their concatenation $st$ must also be a completion point. (A3) The CE is exhaustive; that is, if $s$ is a completion point and $s = t t_0$, where $t_0$ is a token, then $t$ should be a completion point and $C_L(t)$ should match $t_0$. Furthermore, we assume that CEs are only called after maximal matches. For example, if a partial program ends in an identifier, the CE can assume that the identifier is complete. Our CEs are implemented in two layers: a context-free layer, which enforces syntactic validity, and a context-sensitive layer, which encodes semantic constraints that depend on language semantics and the user's context (e.g., the database).
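A much-simplified picture of one CSD decoding step is sketched below. It assumes the LM vocabulary lines up with the language's tokens, whereas the real system handles misaligned token boundaries via Brzozowski derivatives; all names and values are illustrative.

```python
import re
import numpy as np

def csd_step(logits, vocab, allowed_regex):
    """One simplified decoding step: keep only vocabulary tokens that
    fully match the completion engine's regex, then pick the best one."""
    mask = np.array([bool(re.fullmatch(allowed_regex, t)) for t in vocab])
    if not mask.any():
        raise ValueError("completion engine rejects every vocabulary token")
    constrained = np.where(mask, logits, -np.inf)
    return vocab[int(np.argmax(constrained))]

vocab = np.array(["SELECT", "FROM", "City", "Elevation", "42"])
logits = np.array([3.0, 2.0, 1.0, 2.5, 0.0])
# unconstrained argmax would pick "SELECT"; a toy CE that only allows
# column names at this point steers the choice to "Elevation"
print(csd_step(logits, vocab, r"City|Elevation"))  # Elevation
```

Because the mask is applied at every step, only token sequences the CE accepts can ever be emitted, which is exactly the by-construction guarantee the text describes.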
Below, we describe an automatic method for constructing context-free CEs directly from the target language's grammar. The context-sensitive layer of an engine is specific to the target language. Table 1 provides an overview of several constraints implemented by our CEs for SQL, Vega-Lite and SMCalFlow, three rich languages with different syntactic and semantic structures. A detailed description of the three CEs can be found in Appendix C. Deriving completions from grammars. Computer language parsers are often automatically generated from a grammar. The grammar contains enough information to derive the context-free layer of CEs. To facilitate this process, we created a library that extends any parser generated by ANTLR (Parr & Fisher, 2011), a popular LL(*) top-down parser generator, to provide token-level completions. Namely, we (i) let the ANTLR-generated parser process the given program prefix $p$, (ii) retrieve its state in the Augmented Transition Network (ATN) at the last program token, and (iii) traverse the ATN from that state to enumerate all possible next-token productions. This process yields (a) a list of productions and token types $\{\tau_j\}_{j=1}^{K}$ that are allowed to follow $p$ and (b) a partial AST $T_p$. Each CE takes $\{\tau_j\}$ and $T_p$ as input to generate semantic context-sensitive constraints. | This paper presents a framework for more reliable code generation via in-context learning of GPT-3/CodeX. The paper is motivated by the finding that GPT-3/CodeX often generate programs with syntactic and semantic errors. To resolve this problem, the authors propose 1) Target Similarity Tuning (TST) for retrieving 5 relevant examples based on program similarity and 2) Constrained Semantic Decoding (CSD) for constraining the code generation output to a set of valid programs. The authors evaluate their proposed methods on three different code synthesising tasks (Spider, Vega-Lite, and SMCalFlow).
Strong complementary gains on these tasks demonstrate the effectiveness and generalizability of the proposed framework on several tasks with different target programs (SQL, Vega-lite, and SMCalFlow). | SP:eb63cbf51fddf87291e7b9f000b11f7eb7830482 |
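To make the completion-engine axioms from Section 3.1 concrete, here is a toy CE of our own for a trivial language of single-digit sums such as "1+2+3"; it illustrates the abstraction only, not the paper's SQL, Vega-Lite, or SMCalFlow engines.

```python
import re

class DigitSumCompletionEngine:
    """Toy completion engine: complete(prefix) returns a regex over the
    allowed next tokens, with '$' standing for the stop token.  The empty
    string is a completion point (axiom A1), and every completion regex for
    a full program accepts the stop token."""

    def complete(self, prefix):
        if prefix == "" or prefix.endswith("+"):
            return r"[0-9]"   # a digit must come next
        return r"\+|\$"       # after a digit: continue with '+' or stop

ce = DigitSumCompletionEngine()
print(ce.complete(""))     # [0-9]
print(ce.complete("1+2"))  # \+|\$
```

Axiom A2 holds by construction: any token matching the returned regex extends the prefix to another completion point, so a decoder that only emits matching tokens can never leave the language.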
Synchromesh: Reliable Code Generation from Pre-trained Language Models | 1 INTRODUCTION . Large language models ( LLMs ) trained on massive corpora of unsupervised data have been shown to perform a wide range of tasks , including natural language generation , semantic parsing and sentiment analysis ( Brown et al. , 2020 ; Devlin et al. , 2019 ; Raffel et al. , 2020 ) . This can be achieved without task-specific training , but rather by adapting the model to each task at test-time using textual prompts , which can contain examples and natural language descriptions . In many cases , this methodology was shown to provide good performance , reducing the need to annotate large datasets for each task of interest ( Brown et al. , 2020 ; Shin et al. , 2021 ) . An important application of LLMs is in synthesizing programs from natural language descriptions ( Austin et al. , 2021 ; Chen et al. , 2021 ) . But this task is still challenging for LLMs . First , they can commit conceptual errors , generating code that misses the intent behind the given description . For example , when asked to reverse an array , the model might generate code that simply swaps the first and last elements . Indeed , users of current natural language-to-code systems report that models often produce code that is unrelated to their query ( Xu et al. , 2021 ) . Even when they capture the right intent , LLMs can still make implementation errors : the generated code can fail to execute . For reversing an array , a model might generate a loop with the correct structure but with an off-by-one error , causing a runtime exception . These errors are common even with very large models . For example , Austin et al . ( 2021 ) tested models with up to 137B parameters on generating short Python programs from natural language . Still , 47 % of the failures were due to syntax , typing or run-time errors ( as opposed to running but producing incorrect output ) . 
This is in line with theoretical results in Merrill et al . ( 2021 ) showing that programming language semantics can not be fully inferred from ungrounded data . Together , both observations suggest that simply scaling up LLMs might be ineffective to obtain reliable performance , especially for longer programs . In this paper , we address both conceptual and implementation errors with SYNCHROMESH , a framework for reliable code generation from pre-trained models . Since LLMs are highly sensitive to which few-shot examples are given in their prompt , we propose Target Similarity Tuning ( TST ) : a method for dynamically selecting semantically relevant examples for a given description . TST mitigates conceptual errors by learning to select examples with similar intent , even when their natural language descriptions seem unrelated in form . Given relevant examples , we then generate programs with Constrained Semantic Decoding ( CSD ) , a novel method for enforcing rich syntactic and semantic constraints during code generation on top of a frozen language model . Rich language-specific constraints , ranging from syntax validity to scoping and type-checking , can be implemented under the simple abstraction of completion engines ( CE ) . CSD aligns these constraints with the language model ’ s token vocabulary by leveraging Brzozowski language derivatives ( Brzozowski , 1964 ) . This guarantees that all sampled programs satisfies the implemented constraints , preventing whole classes of implementation errors by construction . The pipeline is illustrated in Figure 1 . We demonstrate the generality of SYNCHROMESH in three real-world languages : SQL ( database queries ) , Vega-Lite ( data visualization ) and SMCalFlow ( calendar applications ) . In experiments with GPT-3 and Codex , we observe that SYNCHROMESH can eliminate whole classes of errors that make outputs from unconstrained models either fail to execute or produce trivial results ( e.g. , empty charts ) . 
Furthermore , eliminating invalid programs consistently improves prediction accuracy . In summary , we make the following contributions : • We propose Target Similarity Tuning for selecting few-shot examples based on the similarity of the programs they describe , improving relevance and downstream performance . • We introduce completion engines as an abstraction that can implement rich classes of semantic program constraints , as we demonstrate in SQL , Vega-Lite and SMCalFlow . • We introduce a general , constraint-observing decoding algorithm , which aligns programming language constraints with the language model ’ s token vocabulary . • We evaluate our method in three natural language-to-code tasks . CSD and TST both show strong complementary gains in output validity and prediction accuracy across domains . 2 TARGET SIMILARITY TUNING . In this section , we first overview the challenge posed by conceptual errors in programs synthesized by LLMs . We then introduce TST , which improves performance through more relevant example selection . Throughout , we will use a real example of synthesizing a SQL database query to answer a question posed in natural language . Suppose a data analyst has a relational database of airports and wants to answer the following question : “ Which city has the highest number of airports ? ” One procedure for turning this description into a SQL query is to use an LLM such as GPT-3 ( Brown et al. , 2020 ) or Codex ( Chen et al. , 2021 ) . To prompt the model for the task at hand , we would feed it with a natural language description of the task and a selection of input-output examples . Given the analyst ’ s question , how do we select the most relevant examples from a training pool ? Liu et al . ( 2021a ) proposed to retrieve examples with similar natural language descriptions using a pre-trained paraphrase detection model . Figure 2a shows the most similar example from the Spider natural language-to-SQL dataset ( Yu et al. 
, 2018 ) according to Sentence-BERT ( Reimers & Gurevych , 2019 ) . The query “ Which city has the highest elevation ? ” is similar on a surface level : it also asks “ Which city has the highest ? ” . This training query asks about “ elevation ” , a property that is readily available as a column in the Airports table . Figure 2b shows GPT-3 ’ s output when given this and a few other examples . The model attempts to mimic the top example , referring to a nonexistent column “ NumberOfAirports ” . The issue is that we picked the example in the prompt based on description similarity and not SQL query similarity . In fact , the SQL query in the chosen example had a simplistic structure that was significantly different from the structure of the desired SQL query , and this contributed to the failure at Point ( b ) in Figure 2 . We want to retrieve examples that have relevant program structures for the test query . We do so using our fine-tuning scheme called Target Similarity Tuning ( TST ) . Formally , suppose D is a dataset of programs and associated utterances , with Di = ( pi , ui ) . Let S ( pa , pb ) ∈ [ 0 , 1 ] denote a normalized similarity metric between programs . If fθ is a pre-trained similarity model for natural language sentences , TST consists of fine-tuning fθ to predict the similarity between target programs given by S from their descriptions . Precisely , we minimize the mean-squared error loss : LTST ( θ ) : = Ei , j∼D [ ( fθ ( ui , uj ) − S ( pi , pj ) ) ² ] . We define S using the classical tree edit distance algorithm from Zhang & Shasha ( 1989 ) to compare Abstract Syntax Trees ( ASTs ) . Figure 2c shows GPT-3 ’ s output when given examples selected with TST . Now , the output query is correct : it performs a “ group by ” on the “ City ” column , and sorts by the count of records in each group . This structure was already present in the top example selected by TST , corresponding to “ Return the team with the most technicians ” . 
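The TST objective above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the AST encoding and the label-overlap similarity are cheap stand-ins for the Zhang & Shasha tree edit distance, and all names here are hypothetical.

```python
# Toy sketch of the TST objective. Programs are ASTs encoded as nested
# tuples ("label", child, child, ...).

def labels(tree):
    """Flatten an AST into a list of node labels."""
    label, *children = tree
    out = [label]
    for child in children:
        out += labels(child)
    return out

def program_similarity(pa, pb):
    """Normalized program similarity S(pa, pb) in [0, 1]. The paper uses
    tree edit distance (Zhang & Shasha, 1989); as a cheap stand-in we
    use the overlap of node-label multisets."""
    la, lb = labels(pa), labels(pb)
    common = sum(min(la.count(x), lb.count(x)) for x in set(la))
    return 2.0 * common / (len(la) + len(lb))

def tst_loss(f, dataset):
    """L_TST(theta): mean squared error between the description-similarity
    model f(u_i, u_j) and the target program similarity S(p_i, p_j),
    averaged over pairs (i, j) from the dataset of (program, utterance)."""
    total, n = 0.0, 0
    for i, (p_i, u_i) in enumerate(dataset):
        for p_j, u_j in dataset[i + 1:]:
            total += (f(u_i, u_j) - program_similarity(p_i, p_j)) ** 2
            n += 1
    return total / n
```

In practice f would be a Sentence-BERT-style model fine-tuned by gradient descent on this loss; here it is just any callable taking two utterances.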
Even if the analyst ’ s question and this utterance are drastically different in natural language , they share similarity in the SQL query that they describe . The TST objective is able to properly capture this fact . As our experiments show in Section 4 , TST significantly boosts the performance of both GPT-3 and Codex . 3 CONSTRAINED SEMANTIC DECODING . We now present Constrained Semantic Decoding ( CSD ) as an approach to eliminate implementation errors from code generated by LLMs . We first illustrate CSD with an example , and then formalize it using the abstraction of CEs . The example in Figure 2 showed that TST can help LLMs generate the correct program . In general , however , TST only helps LLMs by guiding toward the correct structure , but the model still needs to fill all the specific implementation details correctly . Figure 3 shows a case where the model can not simply adapt one example from the prompt . Here , the user ’ s query is “ Which city has the highest number of departing flights ? ” This query is similar to the previous one – in fact , TST retrieves the same top-1 example as before . But now the correct SQL query needs to join the “ Airports ” and “ Flights ” tables . GPT-3 generates the join condition Flights.AirportCode = Airports.SourceAirport , but this condition has a subtle error : the column names of the two tables are swapped . Thus , this query fails to execute . In general , unconstrained language models often make such implementation errors : using undeclared variables , losing track of nesting levels when producing complex expressions , or calling functions using arguments of the wrong type . Even the smallest of such errors prevents generated code from executing . CSD prevents implementation errors by construction ( as opposed to repairing after-the-fact ) . 
Imagine we have access to an oracle , which we call a CE , that can take a partial program and return all tokens that can extend that partial program toward a complete correct program . When the LLM is generating the program token by token , CSD ensures that the next token is sampled from the set returned by the CE . In Figure 3 , after generating “ T1. ” inside the “ on ” clause , our SQL CE resolves the alias and constrains the model to output one of the columns from the “ Flights ” table . This fixes the error seen previously during generation and produces the correct SQL query . 3.1 COMPLETION ENGINES . We now formally define CEs . Let Σ be a base alphabet , and ΣL ⊆ Σ∗ be the ( potentially infinite ) set of tokens of the target language . Our goal is to sample programs from a language L ⊆ Σ∗L – the set of valid programs . A CE CL is a partial function from Σ∗L to a set of tokens . We use a regular expression over Σ to represent a set of tokens . The strings in the domain of CL are called completion points , and a CE satisfies the following axioms : ( A1 ) The empty string and every p ∈ L must be completion points . For every p ∈ L , CL ( p ) = r'$' , where r'$' is the regular expression that matches only the stop token . ( A2 ) If s ∈ Σ∗L is a completion point and t fully matches CL ( s ) , then their concatenation st must also be a completion point . ( A3 ) The CE is exhaustive ; that is , if s is a completion point and s = tt0 , where t0 is a token , then t should be a completion point and CL ( t ) should match t0 . Furthermore , we assume that CEs are only called after maximal matches . For example , if a partial program ends in an identifier , the CE can assume that the identifier is complete . Our CEs are implemented in two layers : a context-free layer , which enforces syntactic validity , and a context-sensitive layer , which encodes semantic constraints that depend on language semantics and the user ’ s context ( e.g. , the database ) . 
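The CSD loop described above can be sketched as follows. This is a minimal greedy sketch under a simplifying assumption: the LM's token vocabulary coincides with the language's tokens (the paper's Brzozowski-derivative alignment handles the general case where LM subwords straddle language tokens). The toy engine and scores below are hypothetical.

```python
import re

def constrained_decode(lm_scores, completion_engine, vocab, max_steps=20):
    """Greedy sketch of Constrained Semantic Decoding: at each step the
    completion engine returns a regular expression over allowed next
    tokens; vocabulary entries that do not fully match are masked out
    before picking the highest-scoring survivor. Stops on the '$' token."""
    prefix = []
    for _ in range(max_steps):
        pattern = completion_engine(prefix)
        allowed = [t for t in vocab if re.fullmatch(pattern, t)]
        scores = lm_scores(prefix)  # dict: token -> unnormalized score
        token = max(allowed, key=lambda t: scores.get(t, 0.0))
        if token == "$":  # stop token per axiom (A1)
            break
        prefix.append(token)
    return prefix
```

For example, a toy engine for the fixed shape `SELECT <col> FROM Airports` would return, in order, the patterns `"SELECT"`, `"city|elevation"`, `"FROM"`, `"Airports"`, `r"\$"`; only the choice among columns is then left to the model's scores.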
Below , we describe an automatic method for constructing context-free CEs directly from the target language ’ s grammar . The context-sensitive layer of an engine is specific to the target language . Table 1 provides an overview of several constraints implemented by our CEs for SQL , Vega-Lite and SMCalFlow , three rich languages with different syntactic and semantic structures . A detailed description of the three CEs can be found in Appendix C. Deriving completions from grammars Computer language parsers are often automatically generated from a grammar . The grammar contains enough information to derive the context-free layer of CEs . To facilitate this process , we created a library that extends any parser generated by ANTLR ( Parr & Fisher , 2011 ) , a popular LL ( ∗ ) top-down parser generator , to provide token-level completions . Namely , we ( i ) let the ANTLR-generated parser process the given program prefix p , ( ii ) retrieve its state in the Augmented Transition Network ( ATN ) at the last program token , ( iii ) traverse the ATN from that state to enumerate all possible next token productions . This process yields ( a ) a list of productions and token types { τj } ( j = 1 , . . . , K ) that are allowed to follow p and ( b ) a partial AST Tp . Each CE takes { τj } and Tp as input to generate semantic context-sensitive constraints . | The paper considers the problem of text-to-code translation , which state-of-the-art models usually perform by feeding a natural language input into a transformer model together with similar input/output examples as a few-shot prompt , with code output directly by the model . The paper proposes two enhancements of this process . * The first one , called Target Similarity Tuning ( TST ) , is a way to pick input/output examples similar to the input text . 
Previous works use a pretrained model for natural language similarity (Sentence-BERT is compared in the paper) to determine what examples to include in the few-shot prompt; TST proposes to also fine-tune this model on code examples such that some semantic similarity is encoded - for example, using the same structure of the queries [think of SQL queries to synthesize]. * The other improvement, called Constrained Semantic Decoding (CSD), alters the decoder of the transformer model to make it avoid generating programs that are impossible to complete into syntactically or semantically correct ones. | SP:eb63cbf51fddf87291e7b9f000b11f7eb7830482 |
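The grammar-based completion derivation described above can be illustrated with a toy left-to-right expansion. To be clear, this is not ANTLR's ATN mechanism: it is a brute-force sketch that only works for small, non-left-recursive grammars, and the grammar and symbol names are hypothetical.

```python
# Toy grammar: nonterminal -> list of alternative right-hand sides.
GRAMMAR = {
    "query": [["SELECT", "col", "FROM", "table"]],
    "col":   [["city"], ["elevation"]],
    "table": [["Airports"]],
}

def next_tokens(symbols, prefix):
    """Terminals that may follow `prefix` when expanding `symbols`
    left-to-right; '$' marks a complete program. This is a depth-first
    LL-style expansion, a stand-in for traversing ANTLR's ATN."""
    if not symbols:
        return {"$"} if not prefix else set()
    head, rest = symbols[0], list(symbols[1:])
    if head in GRAMMAR:  # nonterminal: expand each alternative
        out = set()
        for alt in GRAMMAR[head]:
            out |= next_tokens(alt + rest, prefix)
        return out
    if not prefix:       # terminal with nothing left to consume:
        return {head}    # it is one of the allowed next tokens
    if prefix[0] == head:
        return next_tokens(rest, list(prefix[1:]))
    return set()         # prefix is inconsistent with this expansion
```

Calling `next_tokens(["query"], ["SELECT"])` yields the set of column names, which a context-sensitive layer could then filter against the user's actual database schema.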
Synchromesh: Reliable Code Generation from Pre-trained Language Models | 1 INTRODUCTION . Large language models ( LLMs ) trained on massive corpora of unsupervised data have been shown to perform a wide range of tasks , including natural language generation , semantic parsing and sentiment analysis ( Brown et al. , 2020 ; Devlin et al. , 2019 ; Raffel et al. , 2020 ) . This can be achieved without task-specific training , but rather by adapting the model to each task at test-time using textual prompts , which can contain examples and natural language descriptions . In many cases , this methodology was shown to provide good performance , reducing the need to annotate large datasets for each task of interest ( Brown et al. , 2020 ; Shin et al. , 2021 ) . An important application of LLMs is in synthesizing programs from natural language descriptions ( Austin et al. , 2021 ; Chen et al. , 2021 ) . But this task is still challenging for LLMs . First , they can commit conceptual errors , generating code that misses the intent behind the given description . For example , when asked to reverse an array , the model might generate code that simply swaps the first and last elements . Indeed , users of current natural language-to-code systems report that models often produce code that is unrelated to their query ( Xu et al. , 2021 ) . Even when they capture the right intent , LLMs can still make implementation errors : the generated code can fail to execute . For reversing an array , a model might generate a loop with the correct structure but with an off-by-one error , causing a runtime exception . These errors are common even with very large models . For example , Austin et al . ( 2021 ) tested models with up to 137B parameters on generating short Python programs from natural language . Still , 47 % of the failures were due to syntax , typing or run-time errors ( as opposed to running but producing incorrect output ) . 
| This paper proposes SYNCHROMESH, a framework for improving the reliability of pre-trained models for code generation. SYNCHROMESH first retrieves few-shot examples from a training set using Target Similarity Tuning. It then feeds the examples to a pre-trained language model and samples programs using Constrained Semantic Decoding, which can constrain the output to a set of valid programs in the target language. The authors evaluate the proposed approach by synthesizing code from natural language descriptions using GPT-3 and Codex in three real-world languages: SQL queries, Vega-Lite visualizations and SMCalFlow programs. 
The experimental results look promising. | SP:eb63cbf51fddf87291e7b9f000b11f7eb7830482 |
Towards Generative Latent Variable Models for Speech | 1 INTRODUCTION . With the introduction of the variational autoencoder ( VAE ) ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) came two rapid extensions for modeling speech data ( Chung et al. , 2015 ; Fraccaro et al. , 2016 ) . Since then , temporal LVMs have undergone little development and their autoregressive counterparts , such as the WaveNet ( Oord et al. , 2016a ) , remain state-of-the-art . In the image domain , generative LVMs have recently shown superior performance to the PixelCNN ( Oord et al. , 2016c ; b ; Salimans et al. , 2017 ) , the model that built the foundation for WaveNet . The improvements in performance have primarily been driven by altered inference models , including top-down ( Sønderby et al. , 2016 ) and bidirectional inference ( Maaløe et al. , 2019 ) , deeper latent hierarchies and skip connections ( Sønderby et al. , 2016 ; Maaløe et al. , 2019 ; Vahdat & Kautz , 2020 ; Child , 2021 ) . To innovate and compare LVMs we need good baselining , similar to the many reported benchmarks within the image domain . However , research in the speech domain has often omitted reporting a likelihood ( Oord et al. , 2016a ; Hsu et al. , 2017 ; Oord et al. , 2018b ) or has reported likelihoods that are incomparable due to subtle differences in the model parameterizations ( Chung et al. , 2015 ; Fraccaro et al. , 2016 ; Hsu et al. , 2017 ; Aksan & Hilliges , 2019 ) . Without a proper comparison standard , it is difficult for the field of explicit likelihood models on speech to evolve further . This research pushes forward the state of the LVM on speech by ( i ) properly benchmarking previous models , ( ii ) introducing a high-performing hierarchical temporal LVM architecture , and ( iii ) analyzing the representations of the latent variables . We find that : ( I ) Previous state-of-the-art LVMs achieve close to identical likelihood performance , still significantly inferior to the WaveNet . 
Interestingly , we also find that the WaveNet performs almost identically to a standard LSTM parameterization ( Hochreiter & Schmidhuber , 1997 ) but surprisingly worse than the lossless compression algorithm FLAC . ( II ) Similar to conclusions within image modeling ( Maaløe et al. , 2019 ; Vahdat & Kautz , 2020 ; Child , 2021 ) , the LVM expressiveness increases with a deeper hierarchy of stochastic latent variables . In direct comparisons , the introduced model outperforms its deterministic counterparts . However , due to computational cost , it remains infeasible to run the model on the same setting as a state-of-the-art WaveNet model . [ Figure 1 ( model factorizations ) : LSTM : p ( xt|x < t ) ; VRNN : p ( xt|z≤t , x < t ) , p ( zt|x < t , z < t ) ; SRNN : p ( xt|z≤t , x < t ) , p ( zt|x < t , z < t ) ; CW-VAE : p ( xt|z≤t ) , p ( zt|z < t ) ] ( III ) The introduced model finds strongly clustered speech-related features in its hierarchy of latent variables , building a strong case for utilizing such models for other tasks such as semi-supervised learning . This shows that LVMs without powerful autoregressive decoders for the observed variable have potential as generative speech models when using expressive hierarchies of latent variables . 2 LATENT VARIABLE MODELS FOR SPEECH . 2.1 RELATED WORK . LVMs formulated in the context of the VAE framework continue to be of interest due to their ability to learn an approximation to the posterior distribution over the dataset . The posterior is usually of significantly reduced dimensionality compared to the input data and lies very close to a known prior distribution . Such an approximate posterior provides use cases for other tasks beyond generation such as semi-supervised learning ( Kingma et al. , 2014 ) and out-of-distribution detection ( Havtorn et al. , 2021 ) . Furthermore , from image modeling research , we know that powerful LVMs can achieve state-of-the-art performance without costly autoregressive dependencies on the observed variable . 
In recent years , there have been several complementary ways of improving the expressiveness of the VAE such as building more expressive priors through methods such as Normalizing Flows ( Rezende & Mohamed , 2015 ) and building a deeper hierarchy of stochastic latent variables such as the Ladder VAE ( Sønderby et al. , 2016 ) . In this research , we choose to focus on the latter due to the recent breakthroughs resulting in state-of-the-art VAEs without costly autoregressive dependencies on the observed variable ( Maaløe et al. , 2019 ; Vahdat & Kautz , 2020 ; Child , 2021 ) . To date , there are two widely cited , and to our knowledge state-of-the-art , explicit likelihood generative LVMs for speech : • The Variational Recurrent Neural Network ( VRNN ) ( Chung et al. , 2015 ) . • The Stochastic Recurrent Neural Network ( SRNN ) ( Fraccaro et al. , 2016 ) . Other recent LVM contributions also achieve impressive results . Among the most noteworthy are the FH-VAE ( Hsu et al. , 2017 ) , that leverages another stochastic latent variable to capture global latent features in the speech , and the VQ-VAE ( Oord et al. , 2018b ) , that introduces a hybrid between an LVM with a quantized latent space and an autoregressive model to generate improved empirical samples . However , the FH-VAE , with its disjoint latent variables , and the VQ-VAE , with its quantized latent space autoregressive prior fitted after training the encoder/decoder , introduce significant changes to the original VAE framework to function . The Stochastic WaveNet ( Lai et al. , 2018 ) and STCN ( Aksan & Hilliges , 2019 ) are fully convolutional models that resemble the VRNN . They are however only autoregressive in observed space and utilize a hierarchy of latent variables . 
Building on learnings from the LVM improvements in the image domain , we formulate a novel temporal LVM by introducing a hierarchy of stochastic latent variables through the adaptation of a model recently proposed for video prediction : • The Clockwork Variational Autoencoder ( CW-VAE ) ( Saxena et al. , 2021 ) . 2.2 TEMPORAL VARIATIONAL AUTOENCODING . The VRNN , SRNN and CW-VAE are all autoencoders and take as input a variable-length sequence x = ( x1 , x2 , . . . , xTx ) with xt ∈ XDx . In general , x may refer to the original observed variable or a deterministic and temporally downsampled representation of the observed variable . First , x is encoded to a temporal stochastic latent representation z = ( z1 , z2 , . . . , zTz ) with zt ∈ ZDz and length Tz ≤ Tx . This representation is then used to reconstruct the original input x . The latent variable is assumed to follow some prior distribution p ( zt|· ) . The prior distribution may depend on latent and observed variables at previous time steps , z < t and x < t , but not xt . Here we have introduced the shorthand notation z < t = ( z0 , z1 , . . . , zt−1 ) . The models are trained to maximize a likelihood objective . The exact likelihood is given by log pθ ( x ) = log ∫ pθ ( x , z ) dz , ( 1 ) but is intractable to optimize due to the integration over the latent space . Instead , the true posterior is variationally approximated by qφ ( z|x ) which yields the well-known evidence lower bound ( ELBO ) on the exact likelihood given by log pθ ( x ) ≥ Ez∼qφ ( z|x ) [ log pθ ( x , z ) − log qφ ( z|x ) ] = L ( θ , φ ; x ) , ( 2 ) which is maximized with respect to { θ , φ } . We omit the θ and φ subscripts for the remainder of the paper . A graphical illustration of the models can be seen in figure 1 . 2.3 VARIATIONAL RECURRENT NEURAL NETWORK ( VRNN ) . The VRNN ( Chung et al. , 2015 ) is essentially a VAE per timestep t. 
Each VAE is conditioned on the hidden state of an RNN dt−1 ∈ RDd , with state transition dt = f ( [ xt−1 , zt−1 ] , dt−1 ) where [ · , · ] denotes concatenation . The joint distribution over observed and latent variables factorizes over time and the latent variables are autoregressive in both the observed and latent space : p ( x , z ) = ∏ t p ( xt|z≤t , x < t ) p ( zt|x < t , z < t ) . ( 3 ) The approximate posterior distribution similarly factorizes over time : q ( z|x ) = ∏ t q ( zt|x≤t , z < t ) . ( 4 ) The ELBO then becomes log p ( x ) ≥ Ez∼q ( z|x ) [ ∑ t log p ( xt|z≤t , x < t ) − KL ( q ( zt|x≤t , z < t ) || p ( zt|x < t , z < t ) ) ] . ( 5 ) The VRNN uses isotropic Gaussian distributions for the prior and posterior . It uses multilayer perceptrons ( MLPs ) , denoted by ϕ , to parameterize all distributions . q ( zt|x≤t , z < t ) = N ( µz , t , diag ( σ2z , t ) ) , where [ µz , t , σ2z , t ] = ϕencvrnn ( xt , dt ) , ( 6 ) p ( zt|x < t , z < t ) = N ( µ0 , t , diag ( σ20 , t ) ) , where [ µ0 , t , σ20 , t ] = ϕpriorvrnn ( dt ) , ( 7 ) p ( xt|z≤t , x < t ) = D ( ρx , t ) , where ρx , t = ϕdecvrnn ( zt , dt ) . ( 8 ) The recurrent transition function f is parameterized by a Gated Recurrent Unit ( GRU ) ( Cho et al. , 2014 ) . At timestep zero , d0 is chosen as the zero vector . D ( ρx , t ) denotes any output distribution parameterized by a set of parameters ρx , t . We note that since the decoder is dependent on dt , the transition function f allows the VRNN to learn to ignore parts of or the entire latent variable and establish a purely deterministic transition from xt−1 to dt similar to a regular GRU ( see figure 1 ) , which is a well-known weakness of VAEs with powerful decoders ( Bowman et al. , 2016 ; Sønderby et al. , 2016 ) . 2.4 STOCHASTIC RECURRENT NEURAL NETWORK ( SRNN ) . The SRNN ( Fraccaro et al. , 2016 ) is similar to the VRNN but differs by separating the stochastic latent variables from the deterministic representations entirely ( see figure 1 ) . 
In generation , this is done by having a GRU run forwards in time over the observed variable to form a deterministic representation dt from x < t . The latent variable is then sampled from the prior p ( zt|x < t , z < t ) which is conditioned directly on the previous latent variable . The SRNN also uses a more intricate inference network which essentially learns to solve a smoothing problem rather than a filtering problem by also running backwards in time . Specifically , in the smoothing configuration , the inference model q ( zt|x≤t , z < t ) includes an additional deterministic variable computed from dt and xt by a GRU running backwards in time i.e . at = f rev ( [ xt , dt ] , at+1 ) . In the filtering configuration , this is replaced with an MLP , at = f rev ( [ xt , dt ] ) . The encoding distribution q ( zt|x≤t , z < t ) is then conditioned on at through ϕencsrnn ( xt , at ) . In our experiments we run the SRNN in the smoothing configuration . | This paper presents a variant of stochastic sequence neural networks, in the family of the VRNN and SRNN. It adopts the CW-VAE framework and completes the optimization process under the stochastic sequence neural network framework. The authors test it on the speech domain. The experiments show that it outperforms VRNN and SRNN on the benchmark datasets. | SP:0c68060a24ee60fd8bd25202ddf59300a58bb952 |
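The KL term inside the ELBO of equation ( 5 ) has a closed form for the diagonal Gaussian posterior and prior used by the VRNN and SRNN. A minimal numerical sketch of that per-timestep term (not from the paper; variances are passed directly rather than as log-variances):

```python
import math

def kl_diag_gaussian(mu_q, var_q, mu_p, var_p):
    """Closed-form KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) ),
    summed over dimensions: the per-timestep term
    KL( q(zt | x<=t, z<t) || p(zt | x<t, z<t) ) in the ELBO."""
    kl = 0.0
    for mq, vq, mp, vp in zip(mu_q, var_q, mu_p, var_p):
        kl += 0.5 * (math.log(vp / vq) + (vq + (mq - mp) ** 2) / vp - 1.0)
    return kl
```

When the encoder output [ µz,t , σ2z,t ] equals the prior output [ µ0,t , σ20,t ], the term vanishes, which is exactly the posterior-collapse failure mode noted above for VAEs with powerful decoders.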
Towards Generative Latent Variable Models for Speech | 1 INTRODUCTION . With the introduction of the variational autoencoder ( VAE ) ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) came two rapid extensions for modeling speech data ( Chung et al. , 2015 ; Fraccaro et al. , 2016 ) . Since then , temporal LVMs have undergone little development and their autoregressive counterparts , such as the WaveNet ( Oord et al. , 2016a ) , remain state-of-the-art . In the image domain , generative LVMs have recently shown superior performance to the PixelCNN ( Oord et al. , 2016c ; b ; Salimans et al. , 2017 ) , the model that built the foundation for WaveNet . The improvements in performance have primarily been driven by altered inference models , including top-down ( Sønderby et al. , 2016 ) and bidirectional inference ( Maaløe et al. , 2019 ) , deeper latent hierarchies and skip connections ( Sønderby et al. , 2016 ; Maaløe et al. , 2019 ; Vahdat & Kautz , 2020 ; Child , 2021 ) . To innovate and compare LVMs we need good baselining , similar to the many reported benchmarks within the image domain . However , research in the speech domain has often omitted reporting a likelihood ( Oord et al. , 2016a ; Hsu et al. , 2017 ; Oord et al. , 2018b ) or has reported likelihoods that are incomparable due to subtle differences in the model parameterizations ( Chung et al. , 2015 ; Fraccaro et al. , 2016 ; Hsu et al. , 2017 ; Aksan & Hilliges , 2019 ) . Without a proper comparison standard , it is difficult for the field of explicit likelihood models on speech to evolve further . This research pushes forward the state of the LVM on speech by ( i ) properly benchmarking previous models , ( ii ) introducing a high-performing hierarchical temporal LVM architecture , and ( iii ) analyzing the representations of the latent variables . We find that : ( I ) Previous state-of-the-art LVMs achieve close to identical likelihood performance , still significantly inferior to the WaveNet . 
Interestingly, we also find that the WaveNet performs almost identically to a standard LSTM parameterization (Hochreiter & Schmidhuber, 1997) but surprisingly worse than the lossless compression algorithm FLAC. (II) Similar to conclusions within image modeling (Maaløe et al., 2019; Vahdat & Kautz, 2020; Child, 2021), the LVM expressiveness increases with a deeper hierarchy of stochastic latent variables. In direct comparisons, the introduced model outperforms its deterministic counterparts. However, due to computational cost, it remains infeasible to run the model in the same setting as a state-of-the-art WaveNet model. [Figure 1 (model factorizations): LSTM: $p(x_t \mid x_{<t})$; VRNN: $p(x_t \mid z_{\le t}, x_{<t})\, p(z_t \mid x_{<t}, z_{<t})$; SRNN: $p(x_t \mid z_{\le t}, x_{<t})\, p(z_t \mid x_{<t}, z_{<t})$; CW-VAE: $p(x_t \mid z_{\le t})\, p(z_t \mid z_{<t})$.] (III) The introduced model finds strongly clustered speech-related features in its hierarchy of latent variables, building a strong case for utilizing such models for other tasks such as semi-supervised learning. This shows that LVMs without powerful autoregressive decoders for the observed variable have potential as generative speech models when using expressive hierarchies of latent variables. 2 LATENT VARIABLE MODELS FOR SPEECH. 2.1 RELATED WORK. LVMs formulated in the context of the VAE framework continue to be of interest due to their ability to learn an approximation to the posterior distribution over the dataset. The posterior is usually of significantly reduced dimensionality compared to the input data and lies very close to a known prior distribution. Such an approximate posterior provides use cases for tasks beyond generation, such as semi-supervised learning (Kingma et al., 2014) and out-of-distribution detection (Havtorn et al., 2021). Furthermore, from image modeling research, we know that powerful LVMs can achieve state-of-the-art performance without costly autoregressive dependencies on the observed variable.
In recent years, there have been several complementary ways of improving the expressiveness of the VAE, such as building more expressive priors, e.g., through normalizing flows (Rezende & Mohamed, 2015), and building a deeper hierarchy of stochastic latent variables, as in the Ladder VAE (Sønderby et al., 2016). In this research, we choose to focus on the latter due to the recent breakthroughs resulting in state-of-the-art VAEs without costly autoregressive dependencies on the observed variable (Maaløe et al., 2019; Vahdat & Kautz, 2020; Child, 2021). To date, there are two widely cited, and to our knowledge state-of-the-art, explicit likelihood generative LVMs for speech: • The Variational Recurrent Neural Network (VRNN) (Chung et al., 2015). • The Stochastic Recurrent Neural Network (SRNN) (Fraccaro et al., 2016). Other recent LVM contributions also achieve impressive results. Among the most noteworthy are the FH-VAE (Hsu et al., 2017), which leverages an additional stochastic latent variable to capture global latent features in the speech, and the VQ-VAE (Oord et al., 2018b), which introduces a hybrid between an LVM with a quantized latent space and an autoregressive model to generate improved empirical samples. However, the FH-VAE, with its disjoint latent variables, and the VQ-VAE, with its autoregressive prior over the quantized latent space fitted after training the encoder/decoder, introduce significant changes to the original VAE framework in order to function. The Stochastic WaveNet (Lai et al., 2018) and STCN (Aksan & Hilliges, 2019) are fully convolutional models that resemble the VRNN. They are, however, only autoregressive in the observed space and utilize a hierarchy of latent variables.
Building on learnings from the LVM improvements in the image domain, we formulate a novel temporal LVM by introducing a hierarchy of stochastic latent variables through the adaptation of a model recently proposed for video prediction: • The Clockwork Variational Autoencoder (CW-VAE) (Saxena et al., 2021). 2.2 TEMPORAL VARIATIONAL AUTOENCODING. The VRNN, SRNN and CW-VAE are all autoencoders and take as input a variable-length sequence $x = (x_1, x_2, \ldots, x_{T_x})$ with $x_t \in \mathcal{X}^{D_x}$. In general, $x$ may refer to the original observed variable or a deterministic and temporally downsampled representation of the observed variable. First, $x$ is encoded to a temporal stochastic latent representation $z = (z_1, z_2, \ldots, z_{T_z})$ with $z_t \in \mathcal{Z}^{D_z}$ and length $T_z \le T_x$. This representation is then used to reconstruct the original input $x$. The latent variable is assumed to follow some prior distribution $p(z_t \mid \cdot)$. The prior distribution may depend on latent and observed variables at previous time steps, $z_{<t}$ and $x_{<t}$, but not $x_t$. Here we have introduced the shorthand notation $z_{<t} = (z_0, z_1, \ldots, z_{t-1})$. The models are trained to maximize a likelihood objective. The exact likelihood is given by $\log p_\theta(x) = \log \int p_\theta(x, z)\, dz$, (1) but is intractable to optimize due to the integration over the latent space. Instead, the true posterior is variationally approximated by $q_\phi(z \mid x)$, which yields the well-known evidence lower bound (ELBO) on the exact likelihood, $\log p_\theta(x) \ge \mathbb{E}_{z \sim q_\phi(z \mid x)}[\log p_\theta(x, z) - \log q_\phi(z \mid x)] = \mathcal{L}(\theta, \phi; x)$, (2) which is maximized with respect to $\{\theta, \phi\}$. We omit the $\theta$ and $\phi$ subscripts for the remainder of the paper. A graphical illustration of the models can be seen in figure 1. 2.3 VARIATIONAL RECURRENT NEURAL NETWORK (VRNN). The VRNN (Chung et al., 2015) is essentially a VAE per timestep $t$.
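The bound in Equation 2 can be made concrete with a toy conjugate Gaussian model in which both ELBO terms and the exact evidence are available in closed form. The model and all constants below are our own illustration, not the paper's speech model:

```python
import math

def elbo(x, mu, var):
    """Analytic ELBO (Eq. 2) for a toy model: p(z) = N(0, 1),
    p(x|z) = N(z, 1), and variational posterior q(z|x) = N(mu, var).
    Both the expected log-likelihood and the KL are closed-form here."""
    expected_loglik = -0.5 * math.log(2 * math.pi) - 0.5 * ((x - mu) ** 2 + var)
    kl = 0.5 * (mu ** 2 + var - 1.0 - math.log(var))
    return expected_loglik - kl

def log_evidence(x):
    # Exact log p(x): marginally x ~ N(0, 2) for this model.
    return -0.5 * math.log(2 * math.pi * 2.0) - x ** 2 / 4.0

x = 1.3
# The bound holds for any q, and is tight at the true posterior N(x/2, 1/2).
assert elbo(x, 0.0, 1.0) <= log_evidence(x)
assert abs(elbo(x, x / 2.0, 0.5) - log_evidence(x)) < 1e-9
```

The gap between the evidence and the ELBO is exactly KL(q || true posterior), which is why a better variational family tightens the likelihood estimate the models report.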
Each VAE is conditioned on the hidden state of an RNN, $d_{t-1} \in \mathbb{R}^{D_d}$, with state transition $d_t = f([x_{t-1}, z_{t-1}], d_{t-1})$, where $[\cdot, \cdot]$ denotes concatenation. The joint distribution over observed and latent variables factorizes over time, and the latent variables are autoregressive in both the observed and latent space: $p(x, z) = \prod_t p(x_t \mid z_{\le t}, x_{<t})\, p(z_t \mid x_{<t}, z_{<t})$. (3) The approximate posterior distribution similarly factorizes over time: $q(z \mid x) = \prod_t q(z_t \mid x_{\le t}, z_{<t})$. (4) The ELBO then becomes $\log p(x) \ge \mathbb{E}_{z \sim q(z \mid x)} \big[ \sum_t \log p(x_t \mid z_{\le t}, x_{<t}) - \mathrm{KL}\big( q(z_t \mid x_{\le t}, z_{<t}) \,\|\, p(z_t \mid x_{<t}, z_{<t}) \big) \big]$. (5) The VRNN uses isotropic Gaussian distributions for the prior and posterior. It uses multilayer perceptrons (MLPs), denoted by $\varphi$, to parameterize all distributions: $q(z_t \mid x_{\le t}, z_{<t}) = \mathcal{N}(\mu_{z,t}, \mathrm{diag}(\sigma^2_{z,t}))$, where $[\mu_{z,t}, \sigma^2_{z,t}] = \varphi^{\mathrm{enc}}_{\mathrm{vrnn}}(x_t, d_t)$, (6) $p(z_t \mid x_{<t}, z_{<t}) = \mathcal{N}(\mu_{0,t}, \mathrm{diag}(\sigma^2_{0,t}))$, where $[\mu_{0,t}, \sigma^2_{0,t}] = \varphi^{\mathrm{prior}}_{\mathrm{vrnn}}(d_t)$, (7) $p(x_t \mid z_{\le t}, x_{<t}) = \mathcal{D}(\rho_{x,t})$, where $\rho_{x,t} = \varphi^{\mathrm{dec}}_{\mathrm{vrnn}}(z_t, d_t)$. (8) The recurrent transition function $f$ is parameterized by a Gated Recurrent Unit (GRU) (Cho et al., 2014). At timestep zero, $d_0$ is chosen as the zero vector. $\mathcal{D}(\rho_{x,t})$ denotes any output distribution parameterized by a set of parameters $\rho_{x,t}$. We note that since the decoder depends on $d_t$, the transition function $f$ allows the VRNN to learn to ignore parts of, or the entirety of, the latent variable and establish a purely deterministic transition from $x_{t-1}$ to $d_t$ similar to a regular GRU (see figure 1), which is a well-known weakness of VAEs with powerful decoders (Bowman et al., 2016; Sønderby et al., 2016). 2.4 STOCHASTIC RECURRENT NEURAL NETWORK (SRNN). The SRNN (Fraccaro et al., 2016) is similar to the VRNN but differs by separating the stochastic latent variables from the deterministic representations entirely (see figure 1).
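The VRNN generative recurrence (Equations 3 and 6-8) can be sketched as a minimal scalar rollout. The tanh transition and the scalar coefficients below are our stand-ins for the GRU and MLPs, chosen only for illustration:

```python
import math
import random

def vrnn_generate(T, seed=0):
    """Toy scalar VRNN rollout: a deterministic state d_t is updated from
    [x_{t-1}, z_{t-1}] (Eq. 3's conditioning), the prior over z_t is a
    Gaussian whose parameters come from d_t (Eq. 7), and x_t is decoded
    from (z_t, d_t) (Eq. 8). All 'networks' are stand-in scalar maps."""
    rng = random.Random(seed)
    d, x, z = 0.0, 0.0, 0.0   # d_0 is the zero vector, as in the VRNN
    xs = []
    for _ in range(T):
        d = math.tanh(0.5 * x + 0.5 * z + 0.8 * d)       # f([x_{t-1}, z_{t-1}], d_{t-1})
        mu0, sigma0 = 0.3 * d, math.exp(-1.0 + 0.1 * d)  # prior params from d_t
        z = rng.gauss(mu0, sigma0)                       # z_t ~ p(z_t | x_{<t}, z_{<t})
        rho = 0.7 * z + 0.3 * d                          # decoder params rho_{x,t}
        x = rng.gauss(rho, 0.1)                          # x_t ~ D(rho_{x,t})
        xs.append(x)
    return xs

samples = vrnn_generate(T=5)
assert len(samples) == 5
```

Note how the deterministic path through `d` could carry all the information from `x` to the next step even if `z` were ignored, which is exactly the latent-collapse risk the text describes.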
In generation, this is done by having a GRU run forward in time over the observed variable to form a deterministic representation $d_t$ from $x_{<t}$. The latent variable is then sampled from the prior $p(z_t \mid x_{<t}, z_{<t})$, which is conditioned directly on the previous latent variable. The SRNN also uses a more intricate inference network, which essentially learns to solve a smoothing problem rather than a filtering problem by also running backwards in time. Specifically, in the smoothing configuration, the inference model $q(z_t \mid x_{\le t}, z_{<t})$ includes an additional deterministic variable computed from $d_t$ and $x_t$ by a GRU running backwards in time, i.e., $a_t = f^{\mathrm{rev}}([x_t, d_t], a_{t+1})$. In the filtering configuration, this is replaced with an MLP, $a_t = f^{\mathrm{rev}}([x_t, d_t])$. The encoding distribution $q(z_t \mid x_{\le t}, z_{<t})$ is then conditioned on $a_t$: $\varphi^{\mathrm{enc}}_{\mathrm{srnn}}(x_t, a_t)$. In our experiments we run the SRNN in the smoothing configuration. | This paper proposes to put various models under the same experimental setting and compare their rates at compressing speech. The models of choice are vanilla LSTMs, variational RNNs, stochastic RNNs, Clockwork VAEs, and WaveNets. The results are also compared against regular compression algorithms, for example, FLAC. | SP:0c68060a24ee60fd8bd25202ddf59300a58bb952 |
Towards Generative Latent Variable Models for Speech | 1 INTRODUCTION . With the introduction of the variational autoencoder ( VAE ) ( Kingma & Welling , 2014 ; Rezende et al. , 2014 ) came two rapid extensions for modeling speech data ( Chung et al. , 2015 ; Fraccaro et al. , 2016 ) . Since then , temporal LVMs have undergone little development and their autoregressive counterparts , such as the WaveNet ( Oord et al. , 2016a ) , remain state-of-the-art . In the image domain , generative LVMs have recently shown superior performance to the PixelCNN ( Oord et al. , 2016c ; b ; Salimans et al. , 2017 ) , the model that built the foundation for WaveNet . The improvements in performance have primarily been driven by altered inference models , including top-down ( Sønderby et al. , 2016 ) and bidirectional inference ( Maaløe et al. , 2019 ) , deeper latent hierarchies and skip connections ( Sønderby et al. , 2016 ; Maaløe et al. , 2019 ; Vahdat & Kautz , 2020 ; Child , 2021 ) . To innovate and compare LVMs we need good baselining , similar to the many reported benchmarks within the image domain . However , research in the speech domain has often omitted reporting a likelihood ( Oord et al. , 2016a ; Hsu et al. , 2017 ; Oord et al. , 2018b ) or has reported likelihoods that are incomparable due to subtle differences in the model parameterizations ( Chung et al. , 2015 ; Fraccaro et al. , 2016 ; Hsu et al. , 2017 ; Aksan & Hilliges , 2019 ) . Without a proper comparison standard , it is difficult for the field of explicit likelihood models on speech to evolve further . This research pushes forward the state of the LVM on speech by ( i ) properly benchmarking previous models , ( ii ) introducing a high-performing hierarchical temporal LVM architecture , and ( iii ) analyzing the representations of the latent variables . We find that : ( I ) Previous state-of-the-art LVMs achieve close to identical likelihood performance , still significantly inferior to the WaveNet . 
Interestingly, we also find that the WaveNet performs almost identically to a standard LSTM parameterization (Hochreiter & Schmidhuber, 1997) but surprisingly worse than the lossless compression algorithm FLAC. (II) Similar to conclusions within image modeling (Maaløe et al., 2019; Vahdat & Kautz, 2020; Child, 2021), the LVM expressiveness increases with a deeper hierarchy of stochastic latent variables. In direct comparisons, the introduced model outperforms its deterministic counterparts. However, due to computational cost, it remains infeasible to run the model in the same setting as a state-of-the-art WaveNet model. [Figure 1 (model factorizations): LSTM: $p(x_t \mid x_{<t})$; VRNN: $p(x_t \mid z_{\le t}, x_{<t})\, p(z_t \mid x_{<t}, z_{<t})$; SRNN: $p(x_t \mid z_{\le t}, x_{<t})\, p(z_t \mid x_{<t}, z_{<t})$; CW-VAE: $p(x_t \mid z_{\le t})\, p(z_t \mid z_{<t})$.] (III) The introduced model finds strongly clustered speech-related features in its hierarchy of latent variables, building a strong case for utilizing such models for other tasks such as semi-supervised learning. This shows that LVMs without powerful autoregressive decoders for the observed variable have potential as generative speech models when using expressive hierarchies of latent variables. 2 LATENT VARIABLE MODELS FOR SPEECH. 2.1 RELATED WORK. LVMs formulated in the context of the VAE framework continue to be of interest due to their ability to learn an approximation to the posterior distribution over the dataset. The posterior is usually of significantly reduced dimensionality compared to the input data and lies very close to a known prior distribution. Such an approximate posterior provides use cases for tasks beyond generation, such as semi-supervised learning (Kingma et al., 2014) and out-of-distribution detection (Havtorn et al., 2021). Furthermore, from image modeling research, we know that powerful LVMs can achieve state-of-the-art performance without costly autoregressive dependencies on the observed variable.
In recent years, there have been several complementary ways of improving the expressiveness of the VAE, such as building more expressive priors, e.g., through normalizing flows (Rezende & Mohamed, 2015), and building a deeper hierarchy of stochastic latent variables, as in the Ladder VAE (Sønderby et al., 2016). In this research, we choose to focus on the latter due to the recent breakthroughs resulting in state-of-the-art VAEs without costly autoregressive dependencies on the observed variable (Maaløe et al., 2019; Vahdat & Kautz, 2020; Child, 2021). To date, there are two widely cited, and to our knowledge state-of-the-art, explicit likelihood generative LVMs for speech: • The Variational Recurrent Neural Network (VRNN) (Chung et al., 2015). • The Stochastic Recurrent Neural Network (SRNN) (Fraccaro et al., 2016). Other recent LVM contributions also achieve impressive results. Among the most noteworthy are the FH-VAE (Hsu et al., 2017), which leverages an additional stochastic latent variable to capture global latent features in the speech, and the VQ-VAE (Oord et al., 2018b), which introduces a hybrid between an LVM with a quantized latent space and an autoregressive model to generate improved empirical samples. However, the FH-VAE, with its disjoint latent variables, and the VQ-VAE, with its autoregressive prior over the quantized latent space fitted after training the encoder/decoder, introduce significant changes to the original VAE framework in order to function. The Stochastic WaveNet (Lai et al., 2018) and STCN (Aksan & Hilliges, 2019) are fully convolutional models that resemble the VRNN. They are, however, only autoregressive in the observed space and utilize a hierarchy of latent variables.
Building on learnings from the LVM improvements in the image domain, we formulate a novel temporal LVM by introducing a hierarchy of stochastic latent variables through the adaptation of a model recently proposed for video prediction: • The Clockwork Variational Autoencoder (CW-VAE) (Saxena et al., 2021). 2.2 TEMPORAL VARIATIONAL AUTOENCODING. The VRNN, SRNN and CW-VAE are all autoencoders and take as input a variable-length sequence $x = (x_1, x_2, \ldots, x_{T_x})$ with $x_t \in \mathcal{X}^{D_x}$. In general, $x$ may refer to the original observed variable or a deterministic and temporally downsampled representation of the observed variable. First, $x$ is encoded to a temporal stochastic latent representation $z = (z_1, z_2, \ldots, z_{T_z})$ with $z_t \in \mathcal{Z}^{D_z}$ and length $T_z \le T_x$. This representation is then used to reconstruct the original input $x$. The latent variable is assumed to follow some prior distribution $p(z_t \mid \cdot)$. The prior distribution may depend on latent and observed variables at previous time steps, $z_{<t}$ and $x_{<t}$, but not $x_t$. Here we have introduced the shorthand notation $z_{<t} = (z_0, z_1, \ldots, z_{t-1})$. The models are trained to maximize a likelihood objective. The exact likelihood is given by $\log p_\theta(x) = \log \int p_\theta(x, z)\, dz$, (1) but is intractable to optimize due to the integration over the latent space. Instead, the true posterior is variationally approximated by $q_\phi(z \mid x)$, which yields the well-known evidence lower bound (ELBO) on the exact likelihood, $\log p_\theta(x) \ge \mathbb{E}_{z \sim q_\phi(z \mid x)}[\log p_\theta(x, z) - \log q_\phi(z \mid x)] = \mathcal{L}(\theta, \phi; x)$, (2) which is maximized with respect to $\{\theta, \phi\}$. We omit the $\theta$ and $\phi$ subscripts for the remainder of the paper. A graphical illustration of the models can be seen in figure 1. 2.3 VARIATIONAL RECURRENT NEURAL NETWORK (VRNN). The VRNN (Chung et al., 2015) is essentially a VAE per timestep $t$.
Each VAE is conditioned on the hidden state of an RNN, $d_{t-1} \in \mathbb{R}^{D_d}$, with state transition $d_t = f([x_{t-1}, z_{t-1}], d_{t-1})$, where $[\cdot, \cdot]$ denotes concatenation. The joint distribution over observed and latent variables factorizes over time, and the latent variables are autoregressive in both the observed and latent space: $p(x, z) = \prod_t p(x_t \mid z_{\le t}, x_{<t})\, p(z_t \mid x_{<t}, z_{<t})$. (3) The approximate posterior distribution similarly factorizes over time: $q(z \mid x) = \prod_t q(z_t \mid x_{\le t}, z_{<t})$. (4) The ELBO then becomes $\log p(x) \ge \mathbb{E}_{z \sim q(z \mid x)} \big[ \sum_t \log p(x_t \mid z_{\le t}, x_{<t}) - \mathrm{KL}\big( q(z_t \mid x_{\le t}, z_{<t}) \,\|\, p(z_t \mid x_{<t}, z_{<t}) \big) \big]$. (5) The VRNN uses isotropic Gaussian distributions for the prior and posterior. It uses multilayer perceptrons (MLPs), denoted by $\varphi$, to parameterize all distributions: $q(z_t \mid x_{\le t}, z_{<t}) = \mathcal{N}(\mu_{z,t}, \mathrm{diag}(\sigma^2_{z,t}))$, where $[\mu_{z,t}, \sigma^2_{z,t}] = \varphi^{\mathrm{enc}}_{\mathrm{vrnn}}(x_t, d_t)$, (6) $p(z_t \mid x_{<t}, z_{<t}) = \mathcal{N}(\mu_{0,t}, \mathrm{diag}(\sigma^2_{0,t}))$, where $[\mu_{0,t}, \sigma^2_{0,t}] = \varphi^{\mathrm{prior}}_{\mathrm{vrnn}}(d_t)$, (7) $p(x_t \mid z_{\le t}, x_{<t}) = \mathcal{D}(\rho_{x,t})$, where $\rho_{x,t} = \varphi^{\mathrm{dec}}_{\mathrm{vrnn}}(z_t, d_t)$. (8) The recurrent transition function $f$ is parameterized by a Gated Recurrent Unit (GRU) (Cho et al., 2014). At timestep zero, $d_0$ is chosen as the zero vector. $\mathcal{D}(\rho_{x,t})$ denotes any output distribution parameterized by a set of parameters $\rho_{x,t}$. We note that since the decoder depends on $d_t$, the transition function $f$ allows the VRNN to learn to ignore parts of, or the entirety of, the latent variable and establish a purely deterministic transition from $x_{t-1}$ to $d_t$ similar to a regular GRU (see figure 1), which is a well-known weakness of VAEs with powerful decoders (Bowman et al., 2016; Sønderby et al., 2016). 2.4 STOCHASTIC RECURRENT NEURAL NETWORK (SRNN). The SRNN (Fraccaro et al., 2016) is similar to the VRNN but differs by separating the stochastic latent variables from the deterministic representations entirely (see figure 1).
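The SRNN's forward deterministic state $d_t$ and backward smoothing state $a_t = f^{\mathrm{rev}}([x_t, d_t], a_{t+1})$ amount to two deterministic recurrences over the sequence. A toy sketch (the scalar update rules are our illustrative stand-ins for the GRUs):

```python
import math

def srnn_inference_states(xs):
    """Two deterministic recurrences as in the SRNN: a forward GRU-like
    pass building d_t from x_{<t}, and a backward pass building the
    smoothing state a_t from x_t, d_t and a_{t+1}."""
    T = len(xs)
    # Forward: d_t depends only on past observations x_{<t} (with d_0 = 0).
    d = [0.0] * T
    prev = 0.0
    for t in range(T):
        d[t] = math.tanh(0.6 * prev + 0.4 * (xs[t - 1] if t > 0 else 0.0))
        prev = d[t]
    # Backward: a_t sees x_t and everything after it, through a_{t+1}.
    a = [0.0] * T
    nxt = 0.0
    for t in reversed(range(T)):
        a[t] = math.tanh(0.5 * xs[t] + 0.3 * d[t] + 0.7 * nxt)
        nxt = a[t]
    return d, a

d, a = srnn_inference_states([0.2, -0.1, 0.4])
# a_0 depends on future observations, d_0 does not:
d2, a2 = srnn_inference_states([0.2, -0.1, 1.4])
assert d[0] == d2[0] and a[0] != a2[0]
```

The final assertion makes the filtering/smoothing distinction concrete: perturbing a future observation changes the backward state at the first step but leaves the forward state untouched.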
In generation, this is done by having a GRU run forward in time over the observed variable to form a deterministic representation $d_t$ from $x_{<t}$. The latent variable is then sampled from the prior $p(z_t \mid x_{<t}, z_{<t})$, which is conditioned directly on the previous latent variable. The SRNN also uses a more intricate inference network, which essentially learns to solve a smoothing problem rather than a filtering problem by also running backwards in time. Specifically, in the smoothing configuration, the inference model $q(z_t \mid x_{\le t}, z_{<t})$ includes an additional deterministic variable computed from $d_t$ and $x_t$ by a GRU running backwards in time, i.e., $a_t = f^{\mathrm{rev}}([x_t, d_t], a_{t+1})$. In the filtering configuration, this is replaced with an MLP, $a_t = f^{\mathrm{rev}}([x_t, d_t])$. The encoding distribution $q(z_t \mid x_{\le t}, z_{<t})$ is then conditioned on $a_t$: $\varphi^{\mathrm{enc}}_{\mathrm{srnn}}(x_t, a_t)$. In our experiments we run the SRNN in the smoothing configuration. | This paper presents an exploration of the use of latent variable models as generative models of speech. Noting that such models work well in the image space, but not so much in the speech space, the authors move on to adapt the Clockwork VAE (a video LVM) as a speech model. In the process the authors present a series of useful technical solutions to various issues that arise in this domain transition. This, and other generative models of speech, are later compared in the experiments section. The results show that this approach is potentially viable. The performance of the proposed speech LVM is good, albeit with increased computational complexity (hopefully something to solve in the future). In addition, it is shown that the resulting latent representation is correlated with phonetic structure, which is a pleasant bonus that other speech generative models (e.g. WaveNet) lack. | SP:0c68060a24ee60fd8bd25202ddf59300a58bb952 |
Learning Prototype-oriented Set Representations for Meta-Learning | 1 INTRODUCTION. Machine learning models, such as convolutional neural networks for images (He et al., 2016) and recurrent neural networks for sequential data (Sutskever et al., 2014), have achieved great success in taking advantage of the structure in the input space (Maron et al., 2020). However, extending them to handle unstructured input in the form of sets, where a set can be defined as an unordered collection of elements, is not trivial and has recently attracted increasing attention (Jurewicz & Strømberg-Derczynski, 2021). Set-input is relevant to a range of problems, such as understanding a scene formed of a set of objects (Eslami et al., 2016), classifying an object composed of a set of 3D points (Qi et al., 2017), and estimating summary statistics from a set of data points for implicit generative models (Chen et al., 2021). Moreover, many meta-learning problems, which process different but related tasks, may also be viewed as set-input tasks (Lee et al., 2019), where an input set corresponds to the training dataset of a single task. Therefore, we broaden the scope of set-related applications by including traditional set-structured input problems and most meta-learning problems. Both aim to improve the quick-adaptation ability for unseen sets, even though the latter is more difficult because of limited samples or the occurrence of new categories for classification problems. For a set input, the output of the model must not change if the elements of the input set are reordered, which entails permutation invariance of the model. To enforce this property, multiple researchers have recently focused on designing different network architectures, which can be referred to as summary networks, for compressing set-structured data into a fixed-size output. For example, the prominent works of Zaheer et al.
(2017) and Edwards & Storkey (2017) combined standard feed-forward neural networks with a set-pooling layer, which have been proven to be universal approximators of continuous permutation-invariant functions (Zaheer et al., 2017). Lee et al. (2019) further introduced the Set Transformer to encode and aggregate the features within a set using the multi-head attention mechanism. Maron et al. (2020) designed deep models and presented a principled approach to learning sets of symmetric elements. Despite the effectiveness and recent popularity of these works on set-input problems, existing summary networks have several shortcomings that could hinder their applicability and further extensions: 1) The parameters of the summary network are typically optimized by a task-specific loss function, which could limit the models' flexibility. 2) A desideratum of a summary network is to extract set features that are expressive enough to represent the summary statistics of the input set and thus benefit the corresponding set-specific task; but for many existing summary networks, there is no clear evidence or constraint that the outputs of the summary network describe the set's summary statistics well. These limits remain even with recent, more carefully designed summary networks, while sets with limited samples further exacerbate the problem. To address the above shortcomings, we present a novel and generic approach to improve summary networks for set-structured data and adapt them to meta-learning problems. Motivated by meta-learning, which aims to extract transferable patterns useful for all related tasks, we assume that there are K global prototypes (i.e., centers) among the collection of related sets, and each prototype or center is encouraged to capture the statistical information shared by those sets, similar to a "topic" in topic modeling or a "dictionary atom" in dictionary learning.
Specifically, for the jth set, we consider it as a discrete distribution $P_j$ over all the samples within the set (in data or feature space). At the same time, we also represent this set with another distribution $Q_j$ (in the same space as $P_j$), supported on K global prototypes with a K-dimensional set representation $h_j$. Since $h_j$ measures the importance of the global prototypes for set j, it can be treated as the prototype proportion summarizing the salient characteristics of set j. Moreover, existing summary networks can be adopted to encode set j as $h_j$, thanks to their desired property of permutation invariance. In this way, we can formulate the learning of summary networks as the process of learning $Q_j$ to be as close to $P_j$ as possible, a process facilitated by leveraging the optimal transport (OT) distance (Peyré & Cuturi, 2019). Therefore, the global prototypes and the summary network can be learned by jointly optimizing the task-specific loss and the OT distance between $P_j$ and $Q_j$ in an end-to-end manner. We refer to this method as a prototype-oriented framework for meta-learning, which is applicable to a range of unsupervised and supervised tasks, such as set-input problems solved by summary networks, meta generation (Hong et al., 2020c; Antoniou et al., 2017), metric-based few-shot classification (Snell et al., 2017), and learning statistics for approximate Bayesian computation (Chen et al., 2021). Since our plug-and-play framework can be applied to many meta-learning problems, this paper further instantiates it in the cases of metric-based few-shot classification and implicit meta generative modeling. We summarize our contributions as follows: (1) We formulate the learning of the summary network as a distribution approximation problem, minimizing the distance between a distribution over data points and another distribution over global prototypes.
(2) We leverage the optimal transport distance to measure the distance between the distributions for use in a joint learning algorithm. (3) We apply our method to metric-based few-shot classification and construct an implicit meta generative model, where the summary network is used to extract the summary statistics from a set and is optimized by the OT loss. Experiments on several meta-learning tasks demonstrate that introducing the OT loss into existing summary networks extracts more effective set representations for the corresponding tasks; the approach can also be integrated into existing few-shot classification and GAN frameworks, producing a new way to learn a set's summary statistics that is applicable to many applications. 2 BACKGROUND. 2.1 SUMMARY NETWORKS FOR SET-STRUCTURED INPUT. To deal with the set-structured input $D_j = \{x_{j, 1:N_j}\}$ and satisfy permutation invariance within the set, a remarkably simple but effective summary network performs pooling over embedding vectors extracted from the elements of a set. More formally, $S_\phi(D_j) = g_{\phi_2}(\mathrm{pool}(\{f_{\phi_1}(x_{j1}), \ldots, f_{\phi_1}(x_{jN_j})\}))$, (1) where $f_{\phi_1}(\cdot)$ acts on each element of a set, $g_{\phi_2}(\mathrm{pool}(\cdot))$ aggregates these encoded features and produces the desired output, and $\phi = \{\phi_1, \phi_2\}$ denotes the parameters of the summary network. Most network architectures for set-structured data follow this structure; see more details in previous works (Lee et al., 2019; Zaheer et al., 2017; Edwards & Storkey, 2017; Maron et al., 2020). 2.2 OPTIMAL TRANSPORT. Although OT has a rich theory, we limit our discussion to OT for discrete distributions and refer the reader to Peyré & Cuturi (2019) for more details. Let us consider $p$ and $q$ as two discrete probability distributions on an arbitrary space $\mathcal{X} \subseteq \mathbb{R}^d$, which can be formulated as $p = \sum_{i=1}^{n} a_i \delta_{x_i}$ and $q = \sum_{j=1}^{m} b_j \delta_{y_j}$. In this case, $a \in \Sigma^n$ and $b \in \Sigma^m$, where $\Sigma^m$ denotes the probability simplex of $\mathbb{R}^m$.
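The pooling construction in Equation 1 is permutation invariant by design: the pooled statistic ignores element order. A minimal sketch, where the per-element feature map, mean pooling, and output weights are arbitrary stand-ins for $f_{\phi_1}$, pool, and $g_{\phi_2}$:

```python
def summary_network(points):
    """Sketch of the pooling summary network of Equation 1:
    a per-element feature map f, a permutation-invariant mean pool,
    then an output map g. Weights here are illustrative stand-ins."""
    feats = [(x, x * x) for x in points]                 # f_phi1 per element
    n = len(feats)
    pooled = tuple(sum(col) / n for col in zip(*feats))  # symmetric pooling
    return 2.0 * pooled[0] - 0.5 * pooled[1]             # g_phi2 on pooled features

# Reordering the input set does not change the output.
assert summary_network([1.0, 2.0, 3.0]) == summary_network([3.0, 1.0, 2.0])
```

Any symmetric pooling (sum, mean, max) preserves this invariance, which is why the literature varies the pooling and the feature maps while keeping this overall shape.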
The OT distance between $a$ and $b$ is defined as $\mathrm{OT}(a, b) = \min_{T \in U(a, b)} \langle T, C \rangle$, (2) where $\langle \cdot, \cdot \rangle$ denotes the Frobenius dot-product; $C \in \mathbb{R}^{n \times m}_{\ge 0}$ is the transport cost matrix with elements $C_{ij} = C(x_i, y_j)$; and $T \in \mathbb{R}^{n \times m}_{> 0}$ denotes the doubly stochastic transport probability matrix such that $U(a, b) := \{T \mid \sum_{i}^{n} T_{ij} = b_j, \sum_{j}^{m} T_{ij} = a_i\}$. To relax the time-consuming optimization of the OT distance, Cuturi (2013) introduced the entropic regularization $H = -\sum_{ij} T_{ij} \ln T_{ij}$, leading to the widely used Sinkhorn algorithm for discrete OT problems. 3 PROPOSED FRAMEWORK. In meta-learning, given a meta-distribution $p_{\mathcal{M}}$ of tasks, the marginal distribution $p_j$ of task j is sampled from $p_{\mathcal{M}}$ for $j \in J$, where $J$ denotes a finite set of indices. For example, we can sample $p_j$ from $p_{\mathcal{M}}$ with probability $\frac{1}{J}$ when $p_{\mathcal{M}}$ is uniform over a finite number of marginals. During meta-training, direct access to the distribution of interest $p_j$ is usually not available. Instead, we observe a set of data points $D_j = \{x_{ji}\}_{i=1}^{N_j}$, which consists of $N_j$ i.i.d. samples from $p_j$ over $\mathbb{R}^d$. We can roughly treat meta-learning problems as set-input tasks, where the dataset $D_j$ from $p_j$ corresponds to an input set. To learn more representative features from related but unseen sets in meta-learning problems, we adopt the summary network as the encoder to extract set representations and improve it by introducing the OT loss and global prototypes, providing many applications. Besides, we also provide applications to metric-based few-shot classification and an implicit generative framework by assimilating the summary statistics. Below we describe our model in detail. 3.1 LEARNING GLOBAL PROTOTYPES AND SET REPRESENTATION VIA OPTIMAL TRANSPORT. Given J sets from the meta-distribution $p_{\mathcal{M}}$, we can represent each set $D_j$ as an empirical distribution over $N_j$ samples in the original data space, formulated as $P_j = \sum_{i=1}^{N_j} \frac{1}{N_j} \delta_{x_{ji}}$, $x_{ji} \in \mathbb{R}^d$.
( 3 ) Since all sets (distributions) drawn from the meta-distribution $p_M$ are closely related, it is reasonable to assume that these sets share some statistical information. Motivated by dictionary learning and topic modeling, we define the shared information as the learnable global prototype matrix $B = \{\beta_k\} \in \mathbb{R}^{d\times K}$, where K is the number of global prototypes and $\beta_k$ denotes the distributed representation of the k-th prototype in the same space as the observed data points (i.e., a “topic” in topic modeling). Given the global prototype matrix B, each set j can be represented with a K-dimensional weight vector $h_j \in \Sigma_K$ (e.g., the “topic proportion” in topic modeling), where $h_{jk}$ indicates the weight of the prototype $\beta_k$ for set j. Therefore, we can represent set $D_j$ with another distribution $Q_j$ on the global prototypes $\beta_{1:K}$, defined as $Q_j = \sum_{k=1}^{K} h_{jk}\,\delta_{\beta_k}, \; \beta_k \in \mathbb{R}^d$, (4) where $h_j$ can be viewed as the set representation describing set j. Since set j can be represented as both $Q_j$ and $P_j$, we propose to learn the set-specific representation $h_j$ and the K global prototypes B by pushing $Q_j$ towards $P_j$: $\min_{B, h_j} \mathrm{OT}(P_j, Q_j), \quad \mathrm{OT}(P_j, Q_j) = \min_{T\in\Pi(a,b)} \langle T, C\rangle \stackrel{\text{def}}{=} \sum_{i=1}^{N_j}\sum_{k=1}^{K} C_{ik} T_{ik}$, (5) where $C \in \mathbb{R}^{N_j\times K}_{\geq 0}$ is the transport cost matrix. In this paper, to measure the distance between data point $x_{ji}$ in set j and prototype $\beta_k$, unless specified otherwise, we construct C as $C_{ik} = 1 - \cos(x_{ji}, \beta_k)$, providing an upper-bounded positive similarity metric. Besides, the transport probability matrix $T \in \mathbb{R}^{N_j\times K}_{>0}$ should satisfy $\Pi(a,b) := \{T \mid T\mathbf{1}_K = a,\; T^{\top}\mathbf{1}_{N_j} = b\}$ with $T_{ik} = T(x_{ji}, \beta_k)$, where $a = [\frac{1}{N_j}] \in \Sigma_{N_j}$ and $b = [h_{jk}] \in \Sigma_K$ denote the respective probability vectors of the distribution $P_j$ in Equation 3 and $Q_j$ in Equation 4. Since $h_j$ should be invariant to permutations of the samples in set j, we adopt a summary network to encode the set of $N_j$ points.
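As a concrete illustration of Equations (2)–(5), the following NumPy sketch computes an entropy-regularized transport plan between a small set (playing the role of $P_j$) and a prototype distribution $Q_j$, using the cosine cost $C_{ik} = 1 - \cos(x_{ji}, \beta_k)$ and Sinkhorn iterations. This is not the authors' implementation; the shapes, random seed, and hyper-parameters (`eps`, `n_iters`) are illustrative assumptions.

```python
import numpy as np

def cosine_cost(X, B):
    """C_ik = 1 - cos(x_ji, beta_k): a cost bounded in [0, 2] between
    set samples X (N x d) and prototypes B (d x K)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Bn = B / np.linalg.norm(B, axis=0, keepdims=True)
    return 1.0 - Xn @ Bn

def sinkhorn(a, b, C, eps=0.5, n_iters=300):
    """Entropy-regularized OT via Sinkhorn: returns a plan T whose
    marginals approximate T 1 = a and T^T 1 = b."""
    G = np.exp(-C / eps)  # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (G.T @ u)
        u = a / (G @ v)
    return u[:, None] * G * v[None, :]

rng = np.random.default_rng(0)
N, d, K = 8, 5, 3
X = rng.normal(size=(N, d))      # one set D_j (support of P_j)
B = rng.normal(size=(d, K))      # global prototypes beta_1:K (support of Q_j)
a = np.full(N, 1.0 / N)          # uniform weights of P_j
h = np.array([0.2, 0.5, 0.3])    # set representation h_j (weights of Q_j)

C = cosine_cost(X, B)
T = sinkhorn(a, h, C)
ot_cost = float(np.sum(T * C))   # <T, C>, the transport cost of the plan
print(round(ot_cost, 4))
```

In the full framework this cost would be minimized with respect to both the prototypes B and the weights $h_j$ produced by the summary network; here both are fixed to show the inner OT computation only.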
For unsupervised tasks, taking the summary network in Equation 1 as the example, we can directly add a Softmax activation function to $S_\phi$ to enforce the simplex constraint on the set representation $h_j$, denoted as $h_j = \mathrm{Softmax}(S_\phi(D_j))$. As shown in Fig. 1, given J sets, to learn the global prototype matrix B and the summary network parameterized by $\phi$, we adopt the entropic constraint (Cuturi, 2013) and define the average OT loss over all training sets as $L_{\mathrm{OT}} = \min_{B,\phi} \frac{1}{J}\sum_{j=1}^{J}\Big(\sum_{i=1}^{N_j}\sum_{k=1}^{K} C_{ik}T_{ik} - \varepsilon \sum_{i=1}^{N_j}\sum_{k=1}^{K} (-T_{ik}\ln T_{ik})\Big) = \min_{B,\phi} \frac{1}{J}\sum_{j=1}^{J} \mathrm{OT}(P_j, Q_j)$, (6) where $\varepsilon$ is a hyper-parameter for the entropic constraint. Algorithm 1 describes the workflow of the OT distance for improving the summary network on unsupervised tasks. For supervised tasks, set j is denoted as $D_j = \{x_{j,1:N_j}, y_j\}$, where $y_j$ is the ground-truth output determined by the specific task. As $h_j$ is a normalized weight vector, directly using it to realize the corresponding task may be undesirable. Denoting $z_j = \mathrm{pool}(\{f_{\phi_1}(x_{j1}), \ldots, f_{\phi_1}(x_{jN_j})\})$, we project it to the following vectors: $h_j = f_e(z_j), \; \hat{y}_j = f_\lambda(z_j)$, (7) where $h_j$ and $\hat{y}_j$ are responsible for the OT and task-specific losses, respectively. Now the summary network parameters $\tilde{\phi} = \{e, \lambda, \phi_1\}$ and global prototypes B can be learned by jointly optimizing the task-specific loss (computed from $\hat{y}_j$ and $y_j$) and the OT loss in Equation 6. In summary, minimizing the OT distance between the prototype distribution $Q_j$ and the empirical distribution $P_j$ provides a principled and unsupervised way to encourage the summary network to capture the set's summary statistics, and a suite of summary networks can be integrated into this plug-and-play framework. Therefore, it realizes efficient learning from new sets for both unsupervised and supervised tasks. | This work proposes a method to improve set representation, and applies it in the context of meta-learning.
More precisely, the method consists in jointly learning a summary network and prototypes using an optimal transport loss: the prototypes and summary network output should minimize the sum of the optimal transport costs for all the sets of the meta dataset. Experiments are conducted in the context of few-shot learning and other tasks used to evaluate summary networks. | SP:9b1de124c06588bfe9b703a413a3ffa915729ce4 |
Learning Prototype-oriented Set Representations for Meta-Learning | 1 INTRODUCTION . Machine learning models, such as convolutional neural networks for images (He et al., 2016) and recurrent neural networks for sequential data (Sutskever et al., 2014), have achieved great success in exploiting the structure of the input space (Maron et al., 2020). However, extending them to handle unstructured input in the form of sets, where a set can be defined as an unordered collection of elements, is not trivial and has recently attracted increasing attention (Jurewicz & Strømberg-Derczynski, 2021). Set-input is relevant to a range of problems, such as understanding a scene formed of a set of objects (Eslami et al., 2016), classifying an object composed of a set of 3D points (Qi et al., 2017), and estimating summary statistics from a set of data points for implicit generative models (Chen et al., 2021). Moreover, many meta-learning problems, which process different but related tasks, may also be viewed as set-input tasks (Lee et al., 2019), where an input set corresponds to the training dataset of a single task. Therefore, we broaden the scope of set-related applications to include both traditional set-structured input problems and most meta-learning problems. Both aim to improve the quick adaptation ability for unseen sets, though the latter is more difficult because of limited samples or the occurrence of new categories in classification problems. For a set-input, the output of the model must not change if the elements of the input set are reordered, which entails permutation invariance of the model. To enforce this property, multiple researchers have recently focused on designing different network architectures, which can be referred to as summary networks, for compressing set-structured data into a fixed-size output. For example, the prominent works of Zaheer et al.
( 2017 ) and Edwards & Storkey (2017) combined standard feed-forward neural networks with a set-pooling layer, which has been proven to be a universal approximator of continuous permutation-invariant functions (Zaheer et al., 2017). Lee et al. (2019) further introduced the Set Transformer to encode and aggregate the features within a set using the multi-head attention mechanism. Maron et al. (2020) designed deep models and presented a principled approach to learning sets of symmetric elements. Despite the effectiveness and recent popularity of these works on set-input problems, existing summary networks have several shortcomings that could hinder their applicability and further extensions: 1) The parameters of the summary network are typically optimized by a task-specific loss function, which could limit the models' flexibility. 2) A desideratum of a summary network is to extract set features expressive enough to represent the summary statistics of the input set and thus benefit the corresponding set-specific task; but for many existing summary networks, there is no clear evidence or constraint that the outputs of the summary network describe the set's summary statistics well. These limitations remain even with recent, more carefully designed summary networks, and sets with limited samples further exacerbate the problem. To address the above shortcomings, we present a novel and generic approach to improve summary networks for set-structured data and adapt them to meta-learning problems. Motivated by meta-learning, which aims to extract transferable patterns useful for all related tasks, we assume that there are K global prototypes (i.e., centers) among the collection of related sets, and each prototype or center is encouraged to capture the statistical information shared by those sets, similar to a “topic” in topic modeling or a “dictionary atom” in dictionary learning.
Specifically, for the jth set, we consider it as a discrete distribution $P_j$ over all the samples within the set (in data or feature space). At the same time, we also represent this set with another distribution $Q_j$ (in the same space as $P_j$), supported on K global prototypes with a K-dimensional set representation $h_j$. Since $h_j$ measures the importance of the global prototypes for set j, it can be treated as the prototype proportion summarizing the salient characteristics of set j. Moreover, existing summary networks can be adopted to encode set j as $h_j$ for their desired property of permutation invariance. In this way, we can formulate the learning of summary networks as the process of learning $Q_j$ to be as close to $P_j$ as possible, a process facilitated by leveraging the optimal transport (OT) distance (Peyré & Cuturi, 2019). Therefore, the global prototypes and summary network can be learned by jointly optimizing the task-specific loss and the OT distance between $P_j$ and $Q_j$ in an end-to-end manner. We refer to this method as a prototype-oriented framework for meta-learning, which is applicable to a range of unsupervised and supervised tasks, such as set-input problems solved by summary networks, meta generation (Hong et al., 2020c; Antoniou et al., 2017), metric-based few-shot classification (Snell et al., 2017), and learning statistics for approximate Bayesian computation (Chen et al., 2021). Since our plug-and-play framework can be applied to many meta-learning problems, this paper further instantiates it for metric-based few-shot classification and implicit meta generative modeling. We summarize our contributions as follows: (1) We formulate the learning of the summary network as a distribution approximation problem, minimizing the distance between a distribution over data points and another distribution over global prototypes.
( 2 ) We leverage the optimal transport distance to measure the distance between the distributions for use in a joint learning algorithm. (3) We apply our method to metric-based few-shot classification and construct an implicit meta generative model, where the summary network is used to extract the summary statistics of a set and is optimized by the OT loss. Experiments on several meta-learning tasks demonstrate that introducing the OT loss into existing summary networks extracts more effective set representations for the corresponding tasks, and can also be integrated into existing few-shot classification and GAN frameworks, producing a new way to learn a set's summary statistics applicable to many applications. 2 BACKGROUND . 2.1 SUMMARY NETWORKS FOR SET-STRUCTURED INPUT . To deal with set-structured input $D_j = \{x_{j,1:N_j}\}$ and satisfy permutation invariance over the set, a remarkably simple but effective summary network performs pooling over embedding vectors extracted from the elements of the set. More formally, $S_\phi(D_j) = g_{\phi_2}(\mathrm{pool}(\{f_{\phi_1}(x_{j1}), \ldots, f_{\phi_1}(x_{jN_j})\}))$, (1) where $f_{\phi_1}(\cdot)$ acts on each element of a set, $g_{\phi_2}(\mathrm{pool}(\cdot))$ aggregates these encoded features and produces the desired output, and $\phi = \{\phi_1, \phi_2\}$ denotes the parameters of the summary network. Most network architectures for set-structured data follow this structure; see more details in previous works (Lee et al., 2019; Zaheer et al., 2017; Edwards & Storkey, 2017; Maron et al., 2020). 2.2 OPTIMAL TRANSPORT . Although OT has a rich theory, we limit our discussion to OT for discrete distributions and refer the reader to Peyré & Cuturi (2019) for more details. Let us consider p and q as two discrete probability distributions on an arbitrary space $X \subseteq \mathbb{R}^d$, which can be formulated as $p = \sum_{i=1}^{n} a_i\delta_{x_i}$ and $q = \sum_{j=1}^{m} b_j\delta_{y_j}$. In this case, $a \in \Sigma_n$ and $b \in \Sigma_m$, where $\Sigma_m$ denotes the probability simplex of $\mathbb{R}^m$.
The OT distance between a and b is defined as $\mathrm{OT}(a,b) = \min_{T\in U(a,b)} \langle T, C\rangle$, (2) where $\langle\cdot,\cdot\rangle$ denotes the Frobenius dot-product; $C \in \mathbb{R}^{n\times m}_{\geq 0}$ is the transport cost matrix with elements $C_{ij} = C(x_i, y_j)$; and $T \in \mathbb{R}^{n\times m}_{>0}$ denotes the transport probability matrix, constrained to $U(a,b) := \{T \mid \sum_{i=1}^{n} T_{ij} = b_j,\; \sum_{j=1}^{m} T_{ij} = a_i\}$. To alleviate the computational cost of optimizing the OT distance, Cuturi (2013) introduced the entropic regularization $H = -\sum_{ij} T_{ij}\ln T_{ij}$, leading to the widely-used Sinkhorn algorithm for discrete OT problems. 3 PROPOSED FRAMEWORK . In meta-learning, given a meta-distribution $p_M$ of tasks, the marginal distribution $p_j$ of task j is sampled from $p_M$ for $j \in J$, where J denotes a finite set of indices. For example, we can sample $p_j$ from $p_M$ with probability $1/J$ when $p_M$ is uniform over a finite number of marginals. During meta-training, direct access to the distribution of interest $p_j$ is usually not available. Instead, we observe a set of data points $D_j = \{x_{ji}\}_{i=1}^{N_j}$, which consists of $N_j$ i.i.d. samples from $p_j$ over $\mathbb{R}^d$. We can roughly treat meta-learning problems as set-input tasks, where the dataset $D_j$ from $p_j$ corresponds to an input set. To learn more representative features from related but unseen sets in meta-learning problems, we adopt the summary network as the encoder to extract set representations and improve it by introducing the OT loss and global prototypes, enabling many applications. Besides, we also provide applications to metric-based few-shot classification and an implicit generative framework by assimilating the summary statistics. Below we describe our model in detail. 3.1 LEARNING GLOBAL PROTOTYPES AND SET REPRESENTATION VIA OPTIMAL TRANSPORT . Given J sets from the meta-distribution $p_M$, we can represent each set $D_j$ as an empirical distribution over its $N_j$ samples in the original data space, formulated as $P_j = \sum_{i=1}^{N_j} \frac{1}{N_j}\,\delta_{x_{ji}}, \; x_{ji} \in \mathbb{R}^d$.
( 3 ) Since all sets (distributions) drawn from the meta-distribution $p_M$ are closely related, it is reasonable to assume that these sets share some statistical information. Motivated by dictionary learning and topic modeling, we define the shared information as the learnable global prototype matrix $B = \{\beta_k\} \in \mathbb{R}^{d\times K}$, where K is the number of global prototypes and $\beta_k$ denotes the distributed representation of the k-th prototype in the same space as the observed data points (i.e., a “topic” in topic modeling). Given the global prototype matrix B, each set j can be represented with a K-dimensional weight vector $h_j \in \Sigma_K$ (e.g., the “topic proportion” in topic modeling), where $h_{jk}$ indicates the weight of the prototype $\beta_k$ for set j. Therefore, we can represent set $D_j$ with another distribution $Q_j$ on the global prototypes $\beta_{1:K}$, defined as $Q_j = \sum_{k=1}^{K} h_{jk}\,\delta_{\beta_k}, \; \beta_k \in \mathbb{R}^d$, (4) where $h_j$ can be viewed as the set representation describing set j. Since set j can be represented as both $Q_j$ and $P_j$, we propose to learn the set-specific representation $h_j$ and the K global prototypes B by pushing $Q_j$ towards $P_j$: $\min_{B, h_j} \mathrm{OT}(P_j, Q_j), \quad \mathrm{OT}(P_j, Q_j) = \min_{T\in\Pi(a,b)} \langle T, C\rangle \stackrel{\text{def}}{=} \sum_{i=1}^{N_j}\sum_{k=1}^{K} C_{ik} T_{ik}$, (5) where $C \in \mathbb{R}^{N_j\times K}_{\geq 0}$ is the transport cost matrix. In this paper, to measure the distance between data point $x_{ji}$ in set j and prototype $\beta_k$, unless specified otherwise, we construct C as $C_{ik} = 1 - \cos(x_{ji}, \beta_k)$, providing an upper-bounded positive similarity metric. Besides, the transport probability matrix $T \in \mathbb{R}^{N_j\times K}_{>0}$ should satisfy $\Pi(a,b) := \{T \mid T\mathbf{1}_K = a,\; T^{\top}\mathbf{1}_{N_j} = b\}$ with $T_{ik} = T(x_{ji}, \beta_k)$, where $a = [\frac{1}{N_j}] \in \Sigma_{N_j}$ and $b = [h_{jk}] \in \Sigma_K$ denote the respective probability vectors of the distribution $P_j$ in Equation 3 and $Q_j$ in Equation 4. Since $h_j$ should be invariant to permutations of the samples in set j, we adopt a summary network to encode the set of $N_j$ points.
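A summary network of the pooling form in Equation 1, with a softmax head so that the set representation lies on the simplex, can be sketched as below. This is a hypothetical NumPy stand-in with random weights (tanh encoder, mean pooling, one linear output layer), not the paper's trained network; it only demonstrates permutation invariance and the simplex constraint.

```python
import numpy as np

rng = np.random.default_rng(1)
d, hdim, K = 5, 16, 3

# Random weights stand in for learned parameters (illustrative only):
# W1 plays the role of f_{phi1}, W2 the role of g_{phi2}.
W1 = rng.normal(size=(d, hdim))
W2 = rng.normal(size=(hdim, K))

def summary_network(Dj):
    """S_phi(D_j) = g_{phi2}(pool({f_{phi1}(x)})) with mean pooling,
    followed by a softmax so the output lies on the simplex."""
    feats = np.tanh(Dj @ W1)        # f_{phi1} applied element-wise
    z = feats.mean(axis=0)          # permutation-invariant pooling
    logits = z @ W2                 # g_{phi2}
    e = np.exp(logits - logits.max())
    return e / e.sum()

Dj = rng.normal(size=(10, d))       # a set of 10 elements
h = summary_network(Dj)
h_perm = summary_network(Dj[::-1])  # reordering the set leaves the output unchanged
print(h.round(3))
```

Because the pooling step is a mean, any permutation of the rows of `Dj` yields the same `h`, which is exactly the invariance the text requires of $h_j$.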
For unsupervised tasks , taking the summary network in Equation 1 as the example , we can directly add a Softmax activation function into Sφ to enforce the simplex constraint in set representation hj , denoted as hj = Softmax ( Sφ ( Dj ) ) . As shown in Fig . 1 , given J sets , to learn the global prototype matrix B and the summary network parameterized by φ , we adopt the entropic constraint ( Cuturi , 2013 ) and define the average OT loss for all training sets as LOT = min B , φ 1 J J∑ j=1 Nj∑ i K∑ k CikTik − Nj∑ i K∑ k −TiklnTik = min B , φ 1 J J∑ j=1 ( OT ( Pj , Qj ) ) , ( 6 ) where is a hyper-parameter for entropic constraint . Algorithm 1 describes the workflow of OT distance for improving summary network under unsupervised tasks . For supervised tasks , set j is denoted as Dj = { xj,1 : Nj , yj } , where yj is the ground-truth output determined by specific task . As hj is a normalized weight vector , directly using it to realize the corresponding task may be undesired . Denoting zj=pool ( { fφ1 ( xj1 ) , . . . , fφ1 ( xjNj ) } ) , we project it to the following vectors : hj = fe ( zj ) , ŷj = fλ ( zj ) , ( 7 ) where hj and ŷj are responsible for the OT and task-specific losses , respectively . Now the summary network parameters φ̃= { e , λ , φ1 } and global prototypes B can be learned by jointly optimizing the task-specific loss ( computed by ŷj and yj ) and OT loss in Equation F. In summary , minimizing the OT distance between the prototype distribution Qj and empirical distribution Pj provides a principled and unsupervised way to encourage the summary network to capture the set ’ s summary statistics , where a suite of summary networks can be integrated into this plug-and-play framework . Therefore , it realizes efficient learning from new sets for both unsupervised and supervised tasks . | The paper provides an optimal transport (OT) based algorithm for improving existing summary networks for learning from set-structured data. 
The proposed approach views each set as a distribution over a set of global prototypes. To learn the distribution over the global prototypes, the proposed approach minimizes the OT distance to the set's empirical distribution over data points. Empirical results demonstrate that the proposed framework improves upon the existing summary network approaches as well as metric-based few-shot classification and generative modeling applications. | SP:9b1de124c06588bfe9b703a413a3ffa915729ce4 |
Learning Prototype-oriented Set Representations for Meta-Learning | 1 INTRODUCTION . Machine learning models, such as convolutional neural networks for images (He et al., 2016) and recurrent neural networks for sequential data (Sutskever et al., 2014), have achieved great success in exploiting the structure of the input space (Maron et al., 2020). However, extending them to handle unstructured input in the form of sets, where a set can be defined as an unordered collection of elements, is not trivial and has recently attracted increasing attention (Jurewicz & Strømberg-Derczynski, 2021). Set-input is relevant to a range of problems, such as understanding a scene formed of a set of objects (Eslami et al., 2016), classifying an object composed of a set of 3D points (Qi et al., 2017), and estimating summary statistics from a set of data points for implicit generative models (Chen et al., 2021). Moreover, many meta-learning problems, which process different but related tasks, may also be viewed as set-input tasks (Lee et al., 2019), where an input set corresponds to the training dataset of a single task. Therefore, we broaden the scope of set-related applications to include both traditional set-structured input problems and most meta-learning problems. Both aim to improve the quick adaptation ability for unseen sets, though the latter is more difficult because of limited samples or the occurrence of new categories in classification problems. For a set-input, the output of the model must not change if the elements of the input set are reordered, which entails permutation invariance of the model. To enforce this property, multiple researchers have recently focused on designing different network architectures, which can be referred to as summary networks, for compressing set-structured data into a fixed-size output. For example, the prominent works of Zaheer et al.
( 2017 ) and Edwards & Storkey (2017) combined standard feed-forward neural networks with a set-pooling layer, which has been proven to be a universal approximator of continuous permutation-invariant functions (Zaheer et al., 2017). Lee et al. (2019) further introduced the Set Transformer to encode and aggregate the features within a set using the multi-head attention mechanism. Maron et al. (2020) designed deep models and presented a principled approach to learning sets of symmetric elements. Despite the effectiveness and recent popularity of these works on set-input problems, existing summary networks have several shortcomings that could hinder their applicability and further extensions: 1) The parameters of the summary network are typically optimized by a task-specific loss function, which could limit the models' flexibility. 2) A desideratum of a summary network is to extract set features expressive enough to represent the summary statistics of the input set and thus benefit the corresponding set-specific task; but for many existing summary networks, there is no clear evidence or constraint that the outputs of the summary network describe the set's summary statistics well. These limitations remain even with recent, more carefully designed summary networks, and sets with limited samples further exacerbate the problem. To address the above shortcomings, we present a novel and generic approach to improve summary networks for set-structured data and adapt them to meta-learning problems. Motivated by meta-learning, which aims to extract transferable patterns useful for all related tasks, we assume that there are K global prototypes (i.e., centers) among the collection of related sets, and each prototype or center is encouraged to capture the statistical information shared by those sets, similar to a “topic” in topic modeling or a “dictionary atom” in dictionary learning.
Specifically, for the jth set, we consider it as a discrete distribution $P_j$ over all the samples within the set (in data or feature space). At the same time, we also represent this set with another distribution $Q_j$ (in the same space as $P_j$), supported on K global prototypes with a K-dimensional set representation $h_j$. Since $h_j$ measures the importance of the global prototypes for set j, it can be treated as the prototype proportion summarizing the salient characteristics of set j. Moreover, existing summary networks can be adopted to encode set j as $h_j$ for their desired property of permutation invariance. In this way, we can formulate the learning of summary networks as the process of learning $Q_j$ to be as close to $P_j$ as possible, a process facilitated by leveraging the optimal transport (OT) distance (Peyré & Cuturi, 2019). Therefore, the global prototypes and summary network can be learned by jointly optimizing the task-specific loss and the OT distance between $P_j$ and $Q_j$ in an end-to-end manner. We refer to this method as a prototype-oriented framework for meta-learning, which is applicable to a range of unsupervised and supervised tasks, such as set-input problems solved by summary networks, meta generation (Hong et al., 2020c; Antoniou et al., 2017), metric-based few-shot classification (Snell et al., 2017), and learning statistics for approximate Bayesian computation (Chen et al., 2021). Since our plug-and-play framework can be applied to many meta-learning problems, this paper further instantiates it for metric-based few-shot classification and implicit meta generative modeling. We summarize our contributions as follows: (1) We formulate the learning of the summary network as a distribution approximation problem, minimizing the distance between a distribution over data points and another distribution over global prototypes.
( 2 ) We leverage the optimal transport distance to measure the distance between the distributions for use in a joint learning algorithm. (3) We apply our method to metric-based few-shot classification and construct an implicit meta generative model, where the summary network is used to extract the summary statistics of a set and is optimized by the OT loss. Experiments on several meta-learning tasks demonstrate that introducing the OT loss into existing summary networks extracts more effective set representations for the corresponding tasks, and can also be integrated into existing few-shot classification and GAN frameworks, producing a new way to learn a set's summary statistics applicable to many applications. 2 BACKGROUND . 2.1 SUMMARY NETWORKS FOR SET-STRUCTURED INPUT . To deal with set-structured input $D_j = \{x_{j,1:N_j}\}$ and satisfy permutation invariance over the set, a remarkably simple but effective summary network performs pooling over embedding vectors extracted from the elements of the set. More formally, $S_\phi(D_j) = g_{\phi_2}(\mathrm{pool}(\{f_{\phi_1}(x_{j1}), \ldots, f_{\phi_1}(x_{jN_j})\}))$, (1) where $f_{\phi_1}(\cdot)$ acts on each element of a set, $g_{\phi_2}(\mathrm{pool}(\cdot))$ aggregates these encoded features and produces the desired output, and $\phi = \{\phi_1, \phi_2\}$ denotes the parameters of the summary network. Most network architectures for set-structured data follow this structure; see more details in previous works (Lee et al., 2019; Zaheer et al., 2017; Edwards & Storkey, 2017; Maron et al., 2020). 2.2 OPTIMAL TRANSPORT . Although OT has a rich theory, we limit our discussion to OT for discrete distributions and refer the reader to Peyré & Cuturi (2019) for more details. Let us consider p and q as two discrete probability distributions on an arbitrary space $X \subseteq \mathbb{R}^d$, which can be formulated as $p = \sum_{i=1}^{n} a_i\delta_{x_i}$ and $q = \sum_{j=1}^{m} b_j\delta_{y_j}$. In this case, $a \in \Sigma_n$ and $b \in \Sigma_m$, where $\Sigma_m$ denotes the probability simplex of $\mathbb{R}^m$.
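The discrete-measure notation above maps directly to arrays: a measure $p = \sum_i a_i \delta_{x_i}$ is just its support points plus a weight vector on the probability simplex, and the cost $C_{ij} = C(x_i, y_j)$ is a pairwise matrix. A minimal sketch, with Euclidean distance chosen purely for illustration as the ground cost:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, d = 4, 3, 2

# p = sum_i a_i * delta_{x_i} and q = sum_j b_j * delta_{y_j}:
# support points in R^d plus weight vectors a in Sigma_n, b in Sigma_m.
x, y = rng.normal(size=(n, d)), rng.normal(size=(m, d))
a = np.full(n, 1.0 / n)          # uniform weights for p
b = rng.random(m); b /= b.sum()  # arbitrary weights for q, normalized

# Pairwise transport cost C_ij = C(x_i, y_j); here the Euclidean distance.
C = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
print(C.round(2))
```

These three arrays (`a`, `b`, `C`) are exactly the inputs to the OT problem in Equation 2 that follows.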
The OT distance between a and b is defined as $\mathrm{OT}(a,b) = \min_{T\in U(a,b)} \langle T, C\rangle$, (2) where $\langle\cdot,\cdot\rangle$ denotes the Frobenius dot-product; $C \in \mathbb{R}^{n\times m}_{\geq 0}$ is the transport cost matrix with elements $C_{ij} = C(x_i, y_j)$; and $T \in \mathbb{R}^{n\times m}_{>0}$ denotes the transport probability matrix, constrained to $U(a,b) := \{T \mid \sum_{i=1}^{n} T_{ij} = b_j,\; \sum_{j=1}^{m} T_{ij} = a_i\}$. To alleviate the computational cost of optimizing the OT distance, Cuturi (2013) introduced the entropic regularization $H = -\sum_{ij} T_{ij}\ln T_{ij}$, leading to the widely-used Sinkhorn algorithm for discrete OT problems. 3 PROPOSED FRAMEWORK . In meta-learning, given a meta-distribution $p_M$ of tasks, the marginal distribution $p_j$ of task j is sampled from $p_M$ for $j \in J$, where J denotes a finite set of indices. For example, we can sample $p_j$ from $p_M$ with probability $1/J$ when $p_M$ is uniform over a finite number of marginals. During meta-training, direct access to the distribution of interest $p_j$ is usually not available. Instead, we observe a set of data points $D_j = \{x_{ji}\}_{i=1}^{N_j}$, which consists of $N_j$ i.i.d. samples from $p_j$ over $\mathbb{R}^d$. We can roughly treat meta-learning problems as set-input tasks, where the dataset $D_j$ from $p_j$ corresponds to an input set. To learn more representative features from related but unseen sets in meta-learning problems, we adopt the summary network as the encoder to extract set representations and improve it by introducing the OT loss and global prototypes, enabling many applications. Besides, we also provide applications to metric-based few-shot classification and an implicit generative framework by assimilating the summary statistics. Below we describe our model in detail. 3.1 LEARNING GLOBAL PROTOTYPES AND SET REPRESENTATION VIA OPTIMAL TRANSPORT . Given J sets from the meta-distribution $p_M$, we can represent each set $D_j$ as an empirical distribution over its $N_j$ samples in the original data space, formulated as $P_j = \sum_{i=1}^{N_j} \frac{1}{N_j}\,\delta_{x_{ji}}, \; x_{ji} \in \mathbb{R}^d$.
( 3 ) Since all sets (distributions) drawn from the meta-distribution $p_M$ are closely related, it is reasonable to assume that these sets share some statistical information. Motivated by dictionary learning and topic modeling, we define the shared information as the learnable global prototype matrix $B = \{\beta_k\} \in \mathbb{R}^{d\times K}$, where K is the number of global prototypes and $\beta_k$ denotes the distributed representation of the k-th prototype in the same space as the observed data points (i.e., a “topic” in topic modeling). Given the global prototype matrix B, each set j can be represented with a K-dimensional weight vector $h_j \in \Sigma_K$ (e.g., the “topic proportion” in topic modeling), where $h_{jk}$ indicates the weight of the prototype $\beta_k$ for set j. Therefore, we can represent set $D_j$ with another distribution $Q_j$ on the global prototypes $\beta_{1:K}$, defined as $Q_j = \sum_{k=1}^{K} h_{jk}\,\delta_{\beta_k}, \; \beta_k \in \mathbb{R}^d$, (4) where $h_j$ can be viewed as the set representation describing set j. Since set j can be represented as both $Q_j$ and $P_j$, we propose to learn the set-specific representation $h_j$ and the K global prototypes B by pushing $Q_j$ towards $P_j$: $\min_{B, h_j} \mathrm{OT}(P_j, Q_j), \quad \mathrm{OT}(P_j, Q_j) = \min_{T\in\Pi(a,b)} \langle T, C\rangle \stackrel{\text{def}}{=} \sum_{i=1}^{N_j}\sum_{k=1}^{K} C_{ik} T_{ik}$, (5) where $C \in \mathbb{R}^{N_j\times K}_{\geq 0}$ is the transport cost matrix. In this paper, to measure the distance between data point $x_{ji}$ in set j and prototype $\beta_k$, unless specified otherwise, we construct C as $C_{ik} = 1 - \cos(x_{ji}, \beta_k)$, providing an upper-bounded positive similarity metric. Besides, the transport probability matrix $T \in \mathbb{R}^{N_j\times K}_{>0}$ should satisfy $\Pi(a,b) := \{T \mid T\mathbf{1}_K = a,\; T^{\top}\mathbf{1}_{N_j} = b\}$ with $T_{ik} = T(x_{ji}, \beta_k)$, where $a = [\frac{1}{N_j}] \in \Sigma_{N_j}$ and $b = [h_{jk}] \in \Sigma_K$ denote the respective probability vectors of the distribution $P_j$ in Equation 3 and $Q_j$ in Equation 4. Since $h_j$ should be invariant to permutations of the samples in set j, we adopt a summary network to encode the set of $N_j$ points.
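To make the constraint set $\Pi(a,b)$ concrete, the sketch below builds the cosine cost $C_{ik} = 1 - \cos(x_{ji}, \beta_k)$ and checks that the independent coupling $T = a h^\top$ satisfies both marginal constraints; being feasible but generally suboptimal, its cost $\langle T, C\rangle$ upper-bounds the OT distance. All shapes and values are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
Nj, d, K = 6, 4, 3

X = rng.normal(size=(Nj, d))      # samples of set j (support of P_j)
B = rng.normal(size=(d, K))       # prototypes beta_1:K (support of Q_j)
a = np.full(Nj, 1.0 / Nj)         # uniform weights of P_j
h = np.array([0.1, 0.6, 0.3])     # set representation h_j (weights of Q_j)

# Cosine cost C_ik = 1 - cos(x_ji, beta_k), bounded in [0, 2].
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
Bn = B / np.linalg.norm(B, axis=0, keepdims=True)
C = 1.0 - Xn @ Bn

# The independent coupling T = a h^T always lies in Pi(a, h): a valid
# (though generally suboptimal) transport plan.
T = np.outer(a, h)
assert np.allclose(T @ np.ones(K), a)      # T 1_K = a
assert np.allclose(T.T @ np.ones(Nj), h)   # T^T 1_{Nj} = h
print(round(float(np.sum(T * C)), 4))      # <T, C> >= OT(P_j, Q_j)
```

Minimizing $\langle T, C\rangle$ over all feasible T (e.g. with Sinkhorn under entropic regularization) then gives the OT distance that the framework optimizes with respect to B and $h_j$.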
For unsupervised tasks, taking the summary network in Equation 1 as the example, we can directly add a Softmax activation function to $S_\phi$ to enforce the simplex constraint on the set representation $h_j$, denoted as $h_j = \mathrm{Softmax}(S_\phi(D_j))$. As shown in Fig. 1, given J sets, to learn the global prototype matrix B and the summary network parameterized by $\phi$, we adopt the entropic constraint (Cuturi, 2013) and define the average OT loss over all training sets as $L_{\mathrm{OT}} = \min_{B,\phi} \frac{1}{J}\sum_{j=1}^{J}\Big(\sum_{i=1}^{N_j}\sum_{k=1}^{K} C_{ik}T_{ik} - \varepsilon \sum_{i=1}^{N_j}\sum_{k=1}^{K} (-T_{ik}\ln T_{ik})\Big) = \min_{B,\phi} \frac{1}{J}\sum_{j=1}^{J} \mathrm{OT}(P_j, Q_j)$, (6) where $\varepsilon$ is a hyper-parameter for the entropic constraint. Algorithm 1 describes the workflow of the OT distance for improving the summary network on unsupervised tasks. For supervised tasks, set j is denoted as $D_j = \{x_{j,1:N_j}, y_j\}$, where $y_j$ is the ground-truth output determined by the specific task. As $h_j$ is a normalized weight vector, directly using it to realize the corresponding task may be undesirable. Denoting $z_j = \mathrm{pool}(\{f_{\phi_1}(x_{j1}), \ldots, f_{\phi_1}(x_{jN_j})\})$, we project it to the following vectors: $h_j = f_e(z_j), \; \hat{y}_j = f_\lambda(z_j)$, (7) where $h_j$ and $\hat{y}_j$ are responsible for the OT and task-specific losses, respectively. Now the summary network parameters $\tilde{\phi} = \{e, \lambda, \phi_1\}$ and global prototypes B can be learned by jointly optimizing the task-specific loss (computed from $\hat{y}_j$ and $y_j$) and the OT loss in Equation 6. In summary, minimizing the OT distance between the prototype distribution $Q_j$ and the empirical distribution $P_j$ provides a principled and unsupervised way to encourage the summary network to capture the set's summary statistics, and a suite of summary networks can be integrated into this plug-and-play framework. Therefore, it realizes efficient learning from new sets for both unsupervised and supervised tasks.
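The supervised two-head projection of Equation 7 can be sketched as follows: a shared pooled feature $z_j$ feeds one head producing $h_j$ for the OT loss and another producing $\hat y_j$ for the task loss. This is an illustrative stand-in, not the authors' architecture: weights are random, and `f_e` and `f_lambda` are modeled as single linear layers with softmax outputs, which is an assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
d, hdim, K, n_classes = 5, 16, 3, 4

W_f = rng.normal(size=(d, hdim))          # f_{phi1}, per-element encoder
W_e = rng.normal(size=(hdim, K))          # f_e head -> h_j (feeds the OT loss)
W_l = rng.normal(size=(hdim, n_classes))  # f_lambda head -> y_hat_j (task loss)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

Dj = rng.normal(size=(12, d))             # one supervised set D_j
z = np.tanh(Dj @ W_f).mean(axis=0)        # z_j = pool({f_{phi1}(x)})
h = softmax(z @ W_e)                      # h_j: normalized, for the OT loss
y_hat = softmax(z @ W_l)                  # y_hat_j: for the task-specific loss
y = 2                                     # hypothetical ground-truth class label
task_loss = -np.log(y_hat[y])             # cross-entropy term of the joint objective
print(round(float(task_loss), 4))
```

In training, `task_loss` would be summed with the (weighted) OT loss of Equation 6 and both heads, the encoder, and the prototypes B would be updated jointly.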
| This work introduces a straightforward development for set representation learning in the meta-learning context based on the intuition that the sets encountered in real-world meta-learning tasks tend to have common attributes, as illustrated in Figure 1. The idea is to jointly learn these common attributes $\beta_{[1:K]}$, referred to as 'global centres' or 'global prototypes', and the parameters of a summary network using an Optimal Transport derived loss function that compares the empirical set distribution $P_j$ with a set summary distribution over the global centres/prototypes, $Q_j=\sum_{k=1}^K h_{jk}\delta_{\beta_k}$. The idea is elegant in its simplicity and effectiveness. | SP:9b1de124c06588bfe9b703a413a3ffa915729ce4 |
The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks | In this paper , we conjecture that if the permutation invariance of neural networks is taken into account , SGD solutions will likely have no barrier in the linear interpolation between them . Although it is a bold conjecture , we show how extensive empirical attempts fall short of refuting it . We further provide a preliminary theoretical result to support our conjecture . Our conjecture has implications for lottery ticket hypothesis , distributed training and ensemble methods . The source code is available at https : //github.com/AnonymousMLSubmission/PermutationInvariance . 1 INTRODUCTION . Understanding the loss landscape of deep neural networks has been the subject of many studies due to its close connections to optimization and generalization ( Li et al. , 2017 ; Mei et al. , 2018 ; Geiger et al. , 2019 ; Nguyen et al. , 2018 ; Fort et al. , 2019 ; Baldassi et al. , 2020 ) . Empirical observations suggest that loss landscape of deep networks has many minima ( Keskar et al. , 2017 ; Draxler et al. , 2018 ; Zhang et al. , 2017 ) . One reason behind the abundance of minima is over-parametrization . Over-parametrized networks have enough capacity to present different functions that behave similarly on the training data but vastly different on other inputs ( Neyshabur et al. , 2017 ; Nguyen et al. , 2018 ; Li et al. , 2018 ; Liu et al. , 2020 ) . Another contributing factor is the existence of scale and permutation invariances which allows the same function to be represented with many different parameter values of the same network and imposes a counter-intuitive geometry on the loss landscape ( Neyshabur et al. , 2015 ; Brea et al. , 2019a ) . Previous work study the relationship between different minima found by SGD and establish that they are connected by a path of non-increasing loss ; however , they are not connected by a linear path ( Freeman & Bruna , 2016 ; Draxler et al. 
, 2018 ; Garipov et al. , 2018 ) . This phenomenon is often referred to as mode connectivity ( Garipov et al. , 2018 ) and the loss increase on the path between two solutions is often referred to as ( energy ) barrier ( Draxler et al. , 2018 ) . Understanding linear mode connectivity ( LMC ) is highly motivated by several direct conceptual and practical implications from pruning and sparse training to distributed optimization and ensemble methods . The relationship between LMC and pruning was established by Frankle et al . ( 2020 ) where they showed the correspondence between LMC and the well-known lottery ticket hypothesis ( LTH ) ( Frankle & Carbin , 2019 ) . In short , LTH conjectures that neural networks contain sparse subnetworks that can be trained in isolation , from initialization , or early in training to achieve comparable test accuracy . Frankle et al . ( 2020 ) showed that solutions that are linearly connected with no barrier have the same lottery ticket . They further discuss how linear-connectivity is associated with stability of SGD . This view suggests that SGD solutions that are linearly connected with no barrier can be thought of as being in the same basin of the loss landscape and once SGD converges to a basin , it shows a stable behavior inside the basin1 . Because of the direct correspondence between LMC and LTH , any understanding of LMC , has implications for LTH , stability of SGD and pruning techniques . Linear mode connectivity has also direct implications for ensemble methods and distributed training . Ensemble methods highly depend on an understanding of the loss landscape and being able to sample from solutions . Better understanding of mode connectivity has been shown to be essential in devising better ensemble methods ( Garipov et al. , 2018 ) . 
Linear mode connectivity between solutions or checkpoints also allows weight averaging techniques for distributed optimization to be used as effectively in deep learning as in convex optimization (Scaman et al., 2019). [Footnote 1: This notion of basin is also consistent with the definition proposed by Neyshabur et al. (2020).] In this paper, we conjecture that by taking permutation invariance into account, the loss landscape can be simplified significantly, resulting in linear mode connectivity between SGD solutions. We investigate this conjecture both theoretically and empirically through extensive experiments. We show how our attempts fall short of refuting this hypothesis and end up as supporting evidence for it (see Figure 1). We believe our conjecture sheds light on the structure of the loss landscape and could lead to practical implications for the aforementioned areas. Contributions. This paper makes the following contributions: • We study linear mode connectivity (LMC) between solutions trained from different initializations and investigate how it is affected by choices such as width, depth and task difficulty for fully connected and convolutional networks (Section 2). • We introduce our main conjecture in Section 3: if invariances are taken into account, there will likely be no barrier on the linear interpolation of SGD solutions (see the left panel of Figure 1). • Investigating the conjecture theoretically, we prove that it holds for a wide enough fully-connected network with one hidden layer at random initialization (Section 3). • In Section 4, we provide strong empirical evidence in support of our conjecture. To overcome the computational challenge of directly evaluating the hypothesis empirically, which requires searching the space of all possible permutations, we propose an alternative approach. 
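The permutation invariance underlying the conjecture can be checked directly: permuting the hidden units of a one-hidden-layer network (rows of the first weight matrix and bias, together with the matching columns of the second weight matrix) leaves the computed function unchanged. A toy numpy sketch; the shapes, seed, and ReLU choice are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, o = 5, 8, 3                          # input, hidden, output widths
W1, b1 = rng.normal(size=(h, d)), rng.normal(size=h)
W2 = rng.normal(size=(o, h))

def forward(W1, b1, W2, x):
    # one-hidden-layer ReLU network f_theta(x)
    return W2 @ np.maximum(W1 @ x + b1, 0.0)

pi = rng.permutation(h)                    # a permutation of the hidden units
W1p, b1p, W2p = W1[pi], b1[pi], W2[:, pi]  # permute rows, bias and columns together

x = rng.normal(size=d)
# the permuted parameters represent exactly the same function
assert np.allclose(forward(W1, b1, W2, x), forward(W1p, b1p, W2p, x))
```

With $h$ hidden units there are $h!$ such parameter vectors per function, which is what makes searching over permutations computationally challenging.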
We consider a set of solutions corresponding to random permutations of a single fixed SGD solution (our model) and show several pieces of empirical evidence suggesting our model is a good approximation for all SGD solutions (real world) with different random seeds (see the middle and right panels of Figure 1). Further related work. Permutation symmetry of neurons in every layer results in multiple equivalent minima connected via saddle points. Few studies investigate the role of these symmetries in the context of connectivity of different basins. Given a network with $L$ layers of minimal widths $r^*_1, \dots, r^*_{L-1}$ that reaches zero-loss minima at $r_1! \cdots r_{L-1}!$ isolated points (permutations of one another), Şimşek et al. (2021) showed that adding one extra neuron to each layer is sufficient to connect all these previously discrete minima into a single manifold. Fukumizu & Amari (2000) prove that a point corresponding to the global minimum of a smaller model can be a local minimum or a saddle point of the larger model. Brea et al. (2019b) find smooth paths between equivalent global minima that lead through a permutation point, i.e., where the input and output weight vectors of two neurons in the same hidden layer interchange. They describe a method to permute all neuron indices in the same layer at the same cost. Tatro et al. (2020) showed that aligning the neurons in two different neural networks makes it easier to find second-order curves between them in the loss landscape where barriers are absent. 2 LOSS BARRIERS. In this section, we first give a formal definition of linear mode connectivity and study how it is affected by different factors such as network width, depth, and task difficulty for a variety of architectures. 2.1 DEFINITIONS. Let $f_\theta(\cdot)$ be a function represented by a neural network with parameter vector $\theta$ that includes all parameters, and let $\mathcal{L}(\theta)$ be any given loss (e.g., train or test error) of $f_\theta(\cdot)$. 
Let $E_\alpha(\theta_1, \theta_2) = \mathcal{L}(\alpha\theta_1 + (1-\alpha)\theta_2)$, for $\alpha \in [0, 1]$, be the loss of the network created by linearly interpolating between the parameters of two networks $f_{\theta_1}(\cdot)$ and $f_{\theta_2}(\cdot)$. The loss barrier $B(\theta_1, \theta_2)$ along the linear path between $\theta_1$ and $\theta_2$ is defined as the highest difference between the loss incurred when linearly connecting the two points $\theta_1, \theta_2$ and the linear interpolation of the loss values at each of them:
$$B(\theta_1, \theta_2) = \sup_{\alpha \in [0,1]} \Big\{ \big[\mathcal{L}(\alpha\theta_1 + (1-\alpha)\theta_2)\big] - \big[\alpha\mathcal{L}(\theta_1) + (1-\alpha)\mathcal{L}(\theta_2)\big] \Big\}. \qquad (1)$$
The above definition differs from what was proposed by Frankle et al. (2020) in that they used $0.5\mathcal{L}(\theta_1) + 0.5\mathcal{L}(\theta_2)$ instead of $\alpha\mathcal{L}(\theta_1) + (1-\alpha)\mathcal{L}(\theta_2)$ in our definition. These definitions are the same if $\mathcal{L}(\theta_1) = \mathcal{L}(\theta_2)$. But if $\mathcal{L}(\theta_1), \mathcal{L}(\theta_2)$ are different, we find our definition more appropriate because it assigns no barrier value to a loss that is changing linearly between $\theta_1$ and $\theta_2$. We say that two networks $\theta_1$ and $\theta_2$ are linear mode connected if the barrier between them along a linear path is $\approx 0$ (Frankle et al., 2020). It has been observed in the literature that any two minimizers of a deep network can be connected via a non-linear low-loss path (Garipov et al., 2018; Draxler et al., 2018; Fort & Jastrzebski, 2019). This work examines linear mode connectivity (LMC) between minima. Next, we empirically investigate the effect of task difficulty and choices such as architecture family, width and depth on the LMC of SGD solutions. 2.2 EMPIRICAL INVESTIGATION: BARRIERS. In this section, we look into barriers between different SGD solutions on all combinations of four architecture families (MLP (Rosenblatt, 1961), Shallow CNN (Neyshabur, 2020), ResNet (He et al., 2015) and VGG (Simonyan & Zisserman, 2015)) and four datasets (MNIST (LeCun & Cortes, 2010), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009) and CIFAR-100 (Krizhevsky et al., 2009)). 
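The barrier in Equation 1 can be estimated by sweeping $\alpha$ over a grid. A toy sketch, with a simple non-convex scalar function standing in for the training loss $\mathcal{L}$ (the loss, the parameter vectors, and the grid size are assumptions for illustration only):

```python
import numpy as np

def loss(theta):
    # toy non-convex stand-in for the training loss L(theta)
    return float(np.sum(theta ** 2 + np.sin(3.0 * theta)))

def barrier(theta1, theta2, n_grid=101):
    """B(theta1, theta2): sup over alpha of interpolated loss minus linear baseline."""
    gaps = []
    for a in np.linspace(0.0, 1.0, n_grid):
        mid = loss(a * theta1 + (1.0 - a) * theta2)
        base = a * loss(theta1) + (1.0 - a) * loss(theta2)
        gaps.append(mid - base)
    return max(gaps)

theta1 = np.array([1.0, -2.0])
theta2 = np.array([-1.5, 0.5])
b = barrier(theta1, theta2)
assert b >= 0.0                               # endpoints contribute a zero gap
assert abs(barrier(theta1, theta1)) < 1e-9    # no barrier from a point to itself
```

Note that a loss changing linearly between the endpoints yields zero gap at every $\alpha$, which is exactly the property the definition above is chosen for.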
The main motivation to use Shallow CNN is to move from fully connected layers (MLP) to convolutions. The main difference between Shallow CNN and VGG16 is depth, and the main difference between ResNet18 and VGG16 is the existence of residual connections. We empirically investigate how different factors such as architecture family, width, depth and task difficulty impact the barrier size.² We refer to the training loss barrier as the barrier. For loss barriers on a test set see E.4. For train and test errors see A.2. Width: We evaluate the impact of width on the barrier size in Figure 2. We note that for large values of width the barrier becomes small. This effect starts at a lower width for simpler datasets such as MNIST and SVHN compared to the CIFAR datasets. A closer look reveals that the barrier increases with width up to a point, and beyond that increasing width leads to a lower barrier size. This effect is reminiscent of the double descent phenomenon (Belkin et al., 2019; Nakkiran et al., 2019). Checking the test error (Figure 8) indicates that in our experiments the barrier peak happens at the same width that is needed to fit the training data. This phenomenon is observed for both fully-connected and convolutional architectures. MLP architectures hit their peak at a lower width compared to CNNs, and a decreasing trend starts earlier. [Footnote 2: In all plots the barrier is evaluated using training loss across 5 different random pairs (10 random SGD solutions). For easier comparison between all figures, we report the train accuracy barrier in all barrier plots. In our experiments, we observed that evaluating the barrier at $\alpha = \frac{1}{2}$ is a reasonable surrogate for taking the supremum over $\alpha$ (the difference is less than $10^{-4}$). Therefore, to save computation, we report the barrier value at $\alpha = \frac{1}{2}$.] For ResNets the barrier size is saturated at a high value and does not change. 
The barrier value for the VGG architecture on different datasets is also saturated at a high value and does not change when increasing the width. Such similar behavior for both ResNet and VGG architectures is due to the effect of depth, as discussed in the next paragraph. Depth: We vary network depth in Figure 3 to evaluate its impact on the barrier between solutions obtained from different initializations. For MLPs, we fix the layer width at $2^{10}$ while adding identical layers as shown along the x-axis. We observe a fast and significant barrier increase as more layers are added. For the VGG architecture family we observe significant barriers. This might be due to the effect of convolution or depth. In order to shed light on this observation, we use Shallow CNN (Neyshabur, 2020) with only two convolutional layers. As can be seen in Figure 3, when Shallow CNN has two layers the barrier size is low, while keeping the layer width fixed at $2^{10}$ and adding more layers increases the barrier size. For residual networks we also consider three ResNet architectures with 18, 34 and 50 layers and observe the same barrier sizes as VGG for all these depth values. The main overall observation from the depth experiments is that, for both fully-connected and convolutional architectures, increasing depth increases the barrier size significantly, so the effect of depth is not similar to that of width. This can also be attributed to the observation that deeper networks usually have a less smooth loss landscape (Li et al., 2017). Task difficulty and architecture choice: In Figure 4 we look into the impact of task difficulty given by the dataset choice (MNIST, SVHN, CIFAR-10, CIFAR-100, and ImageNet (Deng et al., 2009)) and the architecture type (a one-layer MLP with $2^{10}$ neurons, Shallow CNN with two convolutional layers and width $2^{10}$, VGG-16 with batch normalization, ResNet18 and ResNet50). 
Each row in Figure 4a and Figure 4b shows the effect of task difficulty , e.g. , fixing the task to SVHN and moving from MLP to Shallow CNN gives lower test error hence lower barrier size . Each column also represents the effect of architecture on a specific dataset , e.g. , fixing the architecture to Shallow CNN and moving from CIFAR10 to CIFAR100 presents an increase in test error , hence increase in the barrier size . Although deep architectures like VGG16 and ResNet18 present low test error , the discussed effect of depth saturates their barrier at a high level . Figure 4c aggregates the correlation between test error and size of the barrier . For MLP and Shallow CNN we observe a high positive correlation between test error and barrier size across different datasets . Deeper networks ( VGGs , ResNets ) form a cluster in the top-left , with low test error and high barrier size . | The authors present a bold and thought-provoking conjecture: neural networks trained with SGD converge to the same low-loss basin, up to permutations of their hidden neurons. They go on to provide some limited but intriguing evidence in support of this conjecture. First, they prove that it holds for sufficiently wide one-hidden-layer neural networks at random initialization. Second, they empirically show that the loss barriers between different SGD solutions are similar in magnitude to the loss barriers between different permutations of a single SGD solution, across a range of models, tasks, and hyperparameters (e.g. width, depth). The paper also includes a number of experiments investigating the effects of width, depth, task, and architecture on loss barrier size. | SP:e2fa0bdff3b64951e002e8a16b1ca747b0f4e1d6 |
The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks | In this paper , we conjecture that if the permutation invariance of neural networks is taken into account , SGD solutions will likely have no barrier in the linear interpolation between them . Although it is a bold conjecture , we show how extensive empirical attempts fall short of refuting it . We further provide a preliminary theoretical result to support our conjecture . Our conjecture has implications for lottery ticket hypothesis , distributed training and ensemble methods . The source code is available at https : //github.com/AnonymousMLSubmission/PermutationInvariance . 1 INTRODUCTION . Understanding the loss landscape of deep neural networks has been the subject of many studies due to its close connections to optimization and generalization ( Li et al. , 2017 ; Mei et al. , 2018 ; Geiger et al. , 2019 ; Nguyen et al. , 2018 ; Fort et al. , 2019 ; Baldassi et al. , 2020 ) . Empirical observations suggest that loss landscape of deep networks has many minima ( Keskar et al. , 2017 ; Draxler et al. , 2018 ; Zhang et al. , 2017 ) . One reason behind the abundance of minima is over-parametrization . Over-parametrized networks have enough capacity to present different functions that behave similarly on the training data but vastly different on other inputs ( Neyshabur et al. , 2017 ; Nguyen et al. , 2018 ; Li et al. , 2018 ; Liu et al. , 2020 ) . Another contributing factor is the existence of scale and permutation invariances which allows the same function to be represented with many different parameter values of the same network and imposes a counter-intuitive geometry on the loss landscape ( Neyshabur et al. , 2015 ; Brea et al. , 2019a ) . Previous work study the relationship between different minima found by SGD and establish that they are connected by a path of non-increasing loss ; however , they are not connected by a linear path ( Freeman & Bruna , 2016 ; Draxler et al. 
, 2018 ; Garipov et al. , 2018 ) . This phenomenon is often referred to as mode connectivity ( Garipov et al. , 2018 ) and the loss increase on the path between two solutions is often referred to as ( energy ) barrier ( Draxler et al. , 2018 ) . Understanding linear mode connectivity ( LMC ) is highly motivated by several direct conceptual and practical implications from pruning and sparse training to distributed optimization and ensemble methods . The relationship between LMC and pruning was established by Frankle et al . ( 2020 ) where they showed the correspondence between LMC and the well-known lottery ticket hypothesis ( LTH ) ( Frankle & Carbin , 2019 ) . In short , LTH conjectures that neural networks contain sparse subnetworks that can be trained in isolation , from initialization , or early in training to achieve comparable test accuracy . Frankle et al . ( 2020 ) showed that solutions that are linearly connected with no barrier have the same lottery ticket . They further discuss how linear-connectivity is associated with stability of SGD . This view suggests that SGD solutions that are linearly connected with no barrier can be thought of as being in the same basin of the loss landscape and once SGD converges to a basin , it shows a stable behavior inside the basin1 . Because of the direct correspondence between LMC and LTH , any understanding of LMC , has implications for LTH , stability of SGD and pruning techniques . Linear mode connectivity has also direct implications for ensemble methods and distributed training . Ensemble methods highly depend on an understanding of the loss landscape and being able to sample from solutions . Better understanding of mode connectivity has been shown to be essential in devising better ensemble methods ( Garipov et al. , 2018 ) . 
Linear mode connectivity between solutions or checkpoints also allows weight averaging techniques for distributed optimization to be used as effectively in deep learning as in convex optimization (Scaman et al., 2019). [Footnote 1: This notion of basin is also consistent with the definition proposed by Neyshabur et al. (2020).] In this paper, we conjecture that by taking permutation invariance into account, the loss landscape can be simplified significantly, resulting in linear mode connectivity between SGD solutions. We investigate this conjecture both theoretically and empirically through extensive experiments. We show how our attempts fall short of refuting this hypothesis and end up as supporting evidence for it (see Figure 1). We believe our conjecture sheds light on the structure of the loss landscape and could lead to practical implications for the aforementioned areas. Contributions. This paper makes the following contributions: • We study linear mode connectivity (LMC) between solutions trained from different initializations and investigate how it is affected by choices such as width, depth and task difficulty for fully connected and convolutional networks (Section 2). • We introduce our main conjecture in Section 3: if invariances are taken into account, there will likely be no barrier on the linear interpolation of SGD solutions (see the left panel of Figure 1). • Investigating the conjecture theoretically, we prove that it holds for a wide enough fully-connected network with one hidden layer at random initialization (Section 3). • In Section 4, we provide strong empirical evidence in support of our conjecture. To overcome the computational challenge of directly evaluating the hypothesis empirically, which requires searching the space of all possible permutations, we propose an alternative approach. 
We consider a set of solutions corresponding to random permutations of a single fixed SGD solution (our model) and show several pieces of empirical evidence suggesting our model is a good approximation for all SGD solutions (real world) with different random seeds (see the middle and right panels of Figure 1). Further related work. Permutation symmetry of neurons in every layer results in multiple equivalent minima connected via saddle points. Few studies investigate the role of these symmetries in the context of connectivity of different basins. Given a network with $L$ layers of minimal widths $r^*_1, \dots, r^*_{L-1}$ that reaches zero-loss minima at $r_1! \cdots r_{L-1}!$ isolated points (permutations of one another), Şimşek et al. (2021) showed that adding one extra neuron to each layer is sufficient to connect all these previously discrete minima into a single manifold. Fukumizu & Amari (2000) prove that a point corresponding to the global minimum of a smaller model can be a local minimum or a saddle point of the larger model. Brea et al. (2019b) find smooth paths between equivalent global minima that lead through a permutation point, i.e., where the input and output weight vectors of two neurons in the same hidden layer interchange. They describe a method to permute all neuron indices in the same layer at the same cost. Tatro et al. (2020) showed that aligning the neurons in two different neural networks makes it easier to find second-order curves between them in the loss landscape where barriers are absent. 2 LOSS BARRIERS. In this section, we first give a formal definition of linear mode connectivity and study how it is affected by different factors such as network width, depth, and task difficulty for a variety of architectures. 2.1 DEFINITIONS. Let $f_\theta(\cdot)$ be a function represented by a neural network with parameter vector $\theta$ that includes all parameters, and let $\mathcal{L}(\theta)$ be any given loss (e.g., train or test error) of $f_\theta(\cdot)$. 
Let $E_\alpha(\theta_1, \theta_2) = \mathcal{L}(\alpha\theta_1 + (1-\alpha)\theta_2)$, for $\alpha \in [0, 1]$, be the loss of the network created by linearly interpolating between the parameters of two networks $f_{\theta_1}(\cdot)$ and $f_{\theta_2}(\cdot)$. The loss barrier $B(\theta_1, \theta_2)$ along the linear path between $\theta_1$ and $\theta_2$ is defined as the highest difference between the loss incurred when linearly connecting the two points $\theta_1, \theta_2$ and the linear interpolation of the loss values at each of them:
$$B(\theta_1, \theta_2) = \sup_{\alpha \in [0,1]} \Big\{ \big[\mathcal{L}(\alpha\theta_1 + (1-\alpha)\theta_2)\big] - \big[\alpha\mathcal{L}(\theta_1) + (1-\alpha)\mathcal{L}(\theta_2)\big] \Big\}. \qquad (1)$$
The above definition differs from what was proposed by Frankle et al. (2020) in that they used $0.5\mathcal{L}(\theta_1) + 0.5\mathcal{L}(\theta_2)$ instead of $\alpha\mathcal{L}(\theta_1) + (1-\alpha)\mathcal{L}(\theta_2)$ in our definition. These definitions are the same if $\mathcal{L}(\theta_1) = \mathcal{L}(\theta_2)$. But if $\mathcal{L}(\theta_1), \mathcal{L}(\theta_2)$ are different, we find our definition more appropriate because it assigns no barrier value to a loss that is changing linearly between $\theta_1$ and $\theta_2$. We say that two networks $\theta_1$ and $\theta_2$ are linear mode connected if the barrier between them along a linear path is $\approx 0$ (Frankle et al., 2020). It has been observed in the literature that any two minimizers of a deep network can be connected via a non-linear low-loss path (Garipov et al., 2018; Draxler et al., 2018; Fort & Jastrzebski, 2019). This work examines linear mode connectivity (LMC) between minima. Next, we empirically investigate the effect of task difficulty and choices such as architecture family, width and depth on the LMC of SGD solutions. 2.2 EMPIRICAL INVESTIGATION: BARRIERS. In this section, we look into barriers between different SGD solutions on all combinations of four architecture families (MLP (Rosenblatt, 1961), Shallow CNN (Neyshabur, 2020), ResNet (He et al., 2015) and VGG (Simonyan & Zisserman, 2015)) and four datasets (MNIST (LeCun & Cortes, 2010), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009) and CIFAR-100 (Krizhevsky et al., 2009)). 
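As a complement to the definition above, the following sketch illustrates why permutations matter for linear interpolation: a network and a hidden-unit-permuted copy of it compute the same function (so their losses agree at the endpoints), yet their naive parameter-space midpoint generally computes a different function, which is how a barrier can arise between functionally identical solutions. All shapes, the seed, and the ReLU choice are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
d, h = 4, 6
W1, W2 = rng.normal(size=(h, d)), rng.normal(size=(1, h))

def f(W1, W2, x):
    # one-hidden-layer ReLU network
    return W2 @ np.maximum(W1 @ x, 0.0)

pi = np.roll(np.arange(h), 1)              # cyclic shift: a non-trivial permutation
W1p, W2p = W1[pi], W2[:, pi]               # functionally identical permuted copy
Wm1, Wm2 = 0.5 * (W1 + W1p), 0.5 * (W2 + W2p)  # naive midpoint in parameter space

xs = rng.normal(size=(10, d))
same = [np.allclose(f(W1, W2, x), f(W1p, W2p, x)) for x in xs]
gaps = [abs(f(Wm1, Wm2, x) - f(W1, W2, x)).item() for x in xs]
assert all(same)                 # the permuted copy computes the same function
assert max(gaps) > 1e-3          # but their naive midpoint generally does not
```

Aligning the two networks' hidden units before interpolating (i.e., undoing the permutation) would make the midpoint identical to the original network, which is the intuition behind accounting for permutation invariance in the conjecture.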
The main motivation to use Shallow CNN is to move from fully connected layers (MLP) to convolutions. The main difference between Shallow CNN and VGG16 is depth, and the main difference between ResNet18 and VGG16 is the existence of residual connections. We empirically investigate how different factors such as architecture family, width, depth and task difficulty impact the barrier size.² We refer to the training loss barrier as the barrier. For loss barriers on a test set see E.4. For train and test errors see A.2. Width: We evaluate the impact of width on the barrier size in Figure 2. We note that for large values of width the barrier becomes small. This effect starts at a lower width for simpler datasets such as MNIST and SVHN compared to the CIFAR datasets. A closer look reveals that the barrier increases with width up to a point, and beyond that increasing width leads to a lower barrier size. This effect is reminiscent of the double descent phenomenon (Belkin et al., 2019; Nakkiran et al., 2019). Checking the test error (Figure 8) indicates that in our experiments the barrier peak happens at the same width that is needed to fit the training data. This phenomenon is observed for both fully-connected and convolutional architectures. MLP architectures hit their peak at a lower width compared to CNNs, and a decreasing trend starts earlier. [Footnote 2: In all plots the barrier is evaluated using training loss across 5 different random pairs (10 random SGD solutions). For easier comparison between all figures, we report the train accuracy barrier in all barrier plots. In our experiments, we observed that evaluating the barrier at $\alpha = \frac{1}{2}$ is a reasonable surrogate for taking the supremum over $\alpha$ (the difference is less than $10^{-4}$). Therefore, to save computation, we report the barrier value at $\alpha = \frac{1}{2}$.] For ResNets the barrier size is saturated at a high value and does not change. 
The barrier value for the VGG architecture on different datasets is also saturated at a high value and does not change when increasing the width. Such similar behavior for both ResNet and VGG architectures is due to the effect of depth, as discussed in the next paragraph. Depth: We vary network depth in Figure 3 to evaluate its impact on the barrier between solutions obtained from different initializations. For MLPs, we fix the layer width at $2^{10}$ while adding identical layers as shown along the x-axis. We observe a fast and significant barrier increase as more layers are added. For the VGG architecture family we observe significant barriers. This might be due to the effect of convolution or depth. In order to shed light on this observation, we use Shallow CNN (Neyshabur, 2020) with only two convolutional layers. As can be seen in Figure 3, when Shallow CNN has two layers the barrier size is low, while keeping the layer width fixed at $2^{10}$ and adding more layers increases the barrier size. For residual networks we also consider three ResNet architectures with 18, 34 and 50 layers and observe the same barrier sizes as VGG for all these depth values. The main overall observation from the depth experiments is that, for both fully-connected and convolutional architectures, increasing depth increases the barrier size significantly, so the effect of depth is not similar to that of width. This can also be attributed to the observation that deeper networks usually have a less smooth loss landscape (Li et al., 2017). Task difficulty and architecture choice: In Figure 4 we look into the impact of task difficulty given by the dataset choice (MNIST, SVHN, CIFAR-10, CIFAR-100, and ImageNet (Deng et al., 2009)) and the architecture type (a one-layer MLP with $2^{10}$ neurons, Shallow CNN with two convolutional layers and width $2^{10}$, VGG-16 with batch normalization, ResNet18 and ResNet50). 
Each row in Figure 4a and Figure 4b shows the effect of task difficulty , e.g. , fixing the task to SVHN and moving from MLP to Shallow CNN gives lower test error hence lower barrier size . Each column also represents the effect of architecture on a specific dataset , e.g. , fixing the architecture to Shallow CNN and moving from CIFAR10 to CIFAR100 presents an increase in test error , hence increase in the barrier size . Although deep architectures like VGG16 and ResNet18 present low test error , the discussed effect of depth saturates their barrier at a high level . Figure 4c aggregates the correlation between test error and size of the barrier . For MLP and Shallow CNN we observe a high positive correlation between test error and barrier size across different datasets . Deeper networks ( VGGs , ResNets ) form a cluster in the top-left , with low test error and high barrier size . | This paper studies the optimization landscape of neural networks. In particular, this paper touches on the question of whether two local minima of the optimization landscape are connected by a “line” of linearly-interpolated neural networks. To probe this question, this paper evaluates the loss gap (so called “barrier”) between two local minima and their linear interpolation. The finding is that this barrier is non-zero. Next, the authors conjecture that if one takes into account permutation invariance of a local minimum, the barrier should be reduced to zero. To support this conjecture, the authors considered a simulated annealing algorithm to search for permutations of the weight matrices, to show that the barrier reduces after applying the simulated annealing algorithm | SP:e2fa0bdff3b64951e002e8a16b1ca747b0f4e1d6 |
The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks | In this paper , we conjecture that if the permutation invariance of neural networks is taken into account , SGD solutions will likely have no barrier in the linear interpolation between them . Although it is a bold conjecture , we show how extensive empirical attempts fall short of refuting it . We further provide a preliminary theoretical result to support our conjecture . Our conjecture has implications for lottery ticket hypothesis , distributed training and ensemble methods . The source code is available at https : //github.com/AnonymousMLSubmission/PermutationInvariance . 1 INTRODUCTION . Understanding the loss landscape of deep neural networks has been the subject of many studies due to its close connections to optimization and generalization ( Li et al. , 2017 ; Mei et al. , 2018 ; Geiger et al. , 2019 ; Nguyen et al. , 2018 ; Fort et al. , 2019 ; Baldassi et al. , 2020 ) . Empirical observations suggest that loss landscape of deep networks has many minima ( Keskar et al. , 2017 ; Draxler et al. , 2018 ; Zhang et al. , 2017 ) . One reason behind the abundance of minima is over-parametrization . Over-parametrized networks have enough capacity to present different functions that behave similarly on the training data but vastly different on other inputs ( Neyshabur et al. , 2017 ; Nguyen et al. , 2018 ; Li et al. , 2018 ; Liu et al. , 2020 ) . Another contributing factor is the existence of scale and permutation invariances which allows the same function to be represented with many different parameter values of the same network and imposes a counter-intuitive geometry on the loss landscape ( Neyshabur et al. , 2015 ; Brea et al. , 2019a ) . Previous work study the relationship between different minima found by SGD and establish that they are connected by a path of non-increasing loss ; however , they are not connected by a linear path ( Freeman & Bruna , 2016 ; Draxler et al. 
, 2018; Garipov et al., 2018). This phenomenon is often referred to as mode connectivity (Garipov et al., 2018), and the loss increase on the path between two solutions is often referred to as the (energy) barrier (Draxler et al., 2018). Understanding linear mode connectivity (LMC) is highly motivated by several direct conceptual and practical implications, from pruning and sparse training to distributed optimization and ensemble methods. The relationship between LMC and pruning was established by Frankle et al. (2020), who showed the correspondence between LMC and the well-known lottery ticket hypothesis (LTH) (Frankle & Carbin, 2019). In short, LTH conjectures that neural networks contain sparse subnetworks that can be trained in isolation, from initialization or early in training, to achieve comparable test accuracy. Frankle et al. (2020) showed that solutions that are linearly connected with no barrier have the same lottery ticket. They further discuss how linear connectivity is associated with the stability of SGD. This view suggests that SGD solutions that are linearly connected with no barrier can be thought of as being in the same basin of the loss landscape, and once SGD converges to a basin, it shows a stable behavior inside the basin1. Because of the direct correspondence between LMC and LTH, any understanding of LMC has implications for LTH, the stability of SGD, and pruning techniques. Linear mode connectivity also has direct implications for ensemble methods and distributed training. Ensemble methods highly depend on an understanding of the loss landscape and being able to sample from solutions. A better understanding of mode connectivity has been shown to be essential in devising better ensemble methods (Garipov et al., 2018).
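The permutation invariance mentioned above can be made concrete: permuting the hidden units of a one-hidden-layer network (rows of the first weight matrix together with the corresponding columns of the second) leaves the computed function unchanged. A minimal numpy sketch (names and shapes are illustrative, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer ReLU network: x -> W2 @ relu(W1 @ x).
d, h = 5, 8
W1 = rng.normal(size=(h, d))
W2 = rng.normal(size=(1, h))

def forward(W1, W2, x):
    return W2 @ np.maximum(W1 @ x, 0.0)

# Permute the hidden units: rows of W1 together with columns of W2.
perm = rng.permutation(h)
W1_p = W1[perm, :]
W2_p = W2[:, perm]

x = rng.normal(size=d)
# The permuted network computes exactly the same function.
assert np.allclose(forward(W1, W2, x), forward(W1_p, W2_p, x))
```

With h hidden units there are h! such equivalent parameter vectors per layer, which is why accounting for permutations can change the picture of the landscape so drastically.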
Linear mode connectivity between solutions or checkpoints also allows weight-averaging techniques for distributed optimization to be used as effectively in deep learning as in convex optimization (Scaman et al., 2019). 1This notion of basin is also consistent with the definition proposed by Neyshabur et al. (2020). In this paper, we conjecture that by taking permutation invariance into account, the loss landscape can be simplified significantly, resulting in linear mode connectivity between SGD solutions. We investigate this conjecture both theoretically and empirically through extensive experiments. We show how our attempts fall short of refuting this hypothesis and end up as supporting evidence for it (see Figure 1). We believe our conjecture sheds light on the structure of the loss landscape and could lead to practical implications for the aforementioned areas. Contributions. This paper makes the following contributions: • We study linear mode connectivity (LMC) between solutions trained from different initializations and investigate how it is affected by choices such as width, depth, and task difficulty for fully connected and convolutional networks (Section 2). • We introduce our main conjecture in Section 3: if invariances are taken into account, there will likely be no barrier on the linear interpolation of SGD solutions (see the left panel of Figure 1). • By investigating the conjecture theoretically, we prove that it holds for a wide enough fully-connected network with one hidden layer at random initialization (Section 3). • In Section 4, we provide strong empirical evidence in support of our conjecture. To overcome the computational challenge of directly evaluating the hypothesis empirically, which requires searching the space of all possible permutations, we propose an alternative approach.
We consider a set of solutions corresponding to random permutations of a single fixed SGD solution (our model) and show several pieces of empirical evidence suggesting our model is a good approximation for all SGD solutions (real world) with different random seeds (see the middle and right panels of Figure 1). Further related work. Permutation symmetry of neurons in every layer results in multiple equivalent minima connected via saddle points. A few studies investigate the role of these symmetries in the context of connectivity of different basins. Given a network with L layers of minimal widths r*_1, ..., r*_{L−1} that reaches zero-loss minima at r_1!, ..., r_{L−1}! isolated points (permutations of one another), Şimşek et al. (2021) showed that adding one extra neuron to each layer is sufficient to connect all these previously discrete minima into a single manifold. Fukumizu & Amari (2000) prove that a point corresponding to the global minimum of a smaller model can be a local minimum or a saddle point of the larger model. Brea et al. (2019b) find smooth paths between equivalent global minima that lead through a permutation point, i.e., where the input and output weight vectors of two neurons in the same hidden layer interchange. They describe a method to permute all neuron indices in the same layer at the same cost. Tatro et al. (2020) showed that aligning the neurons in two different neural networks makes it easier to find second-order curves between them in the loss landscape where barriers are absent. 2 LOSS BARRIERS. In this section, we first give a formal definition of linear mode connectivity and study how it is affected by different factors such as network width, depth, and task difficulty for a variety of architectures. 2.1 DEFINITIONS. Let fθ(·) be a function represented by a neural network with parameter vector θ that includes all parameters, and let L(θ) be any given loss (e.g., train or test error) of fθ(·).
Let Eα(θ1, θ2) = L(αθ1 + (1 − α)θ2), for α ∈ [0, 1], be the loss of the network created by linearly interpolating between the parameters of two networks fθ1(·) and fθ2(·). The loss barrier B(θ1, θ2) along the linear path between θ1 and θ2 is defined as the highest difference between the loss occurring when linearly connecting the two points θ1, θ2 and the linear interpolation of the loss values at each of them: B(θ1, θ2) = sup_α [L(αθ1 + (1 − α)θ2)] − [αL(θ1) + (1 − α)L(θ2)]. (1) The above definition differs from what was proposed by Frankle et al. (2020) in that they used 0.5L(θ1) + 0.5L(θ2) instead of the αL(θ1) + (1 − α)L(θ2) term in our definition. These definitions are the same if L(θ1) = L(θ2). But if L(θ1) and L(θ2) are different, we find our definition more appropriate because it assigns no barrier value to a loss that changes linearly between θ1 and θ2. We say that two networks θ1 and θ2 are linear mode connected if the barrier between them along a linear path is ≈ 0 (Frankle et al., 2020). It has been observed in the literature that any two minimizers of a deep network can be connected via a non-linear low-loss path (Garipov et al., 2018; Draxler et al., 2018; Fort & Jastrzebski, 2019). This work examines linear mode connectivity (LMC) between minima. Next, we empirically investigate the effect of task difficulty and choices such as architecture family, width, and depth on LMC of SGD solutions. 2.2 EMPIRICAL INVESTIGATION: BARRIERS. In this section, we look into barriers between different SGD solutions on all combinations of four architecture families (MLP (Rosenblatt, 1961), Shallow CNN (Neyshabur, 2020), ResNet (He et al., 2015) and VGG (Simonyan & Zisserman, 2015)) and four datasets (MNIST (LeCun & Cortes, 2010), SVHN (Netzer et al., 2011), CIFAR-10 (Krizhevsky et al., 2009) and CIFAR-100 (Krizhevsky et al., 2009)).
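The barrier of Eq. (1) can be estimated numerically by sweeping α over a grid. A sketch with a toy one-dimensional "loss" standing in for L (the paper evaluates the train/test error of the interpolated network instead; the function and grid size here are our illustrative choices):

```python
import numpy as np

def barrier(loss, theta1, theta2, num_alphas=25):
    """Estimate B(theta1, theta2) = sup_alpha L(a*t1 + (1-a)*t2)
    - [a*L(t1) + (1-a)*L(t2)] on a grid of alpha values."""
    alphas = np.linspace(0.0, 1.0, num_alphas)
    l1, l2 = loss(theta1), loss(theta2)
    return max(loss(a * theta1 + (1 - a) * theta2) - (a * l1 + (1 - a) * l2)
               for a in alphas)

# Toy "loss" with two zero-loss minima and a bump on the line between them.
loss = lambda t: np.sin(3 * t) ** 2
t1, t2 = np.pi / 3, np.pi   # both are zeros of the loss
b = barrier(loss, t1, t2)   # positive: the linear path crosses a bump
```

For neural networks, θ1 and θ2 would be flattened parameter vectors and `loss` would evaluate the interpolated network on data; the paper notes that α = 1/2 alone is often an adequate surrogate for the supremum.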
The main motivation for using Shallow CNN is to move from fully connected layers (MLP) to convolutions. The main difference between Shallow CNN and VGG16 is depth, and the main difference between ResNet18 and VGG16 is the existence of residual connections. We empirically investigate how different factors such as architecture family, width, depth, and task difficulty impact the barrier size2. We refer to the training loss barrier as the barrier. For loss barriers on a test set see Appendix E.4. For train and test errors see Appendix A.2. Width: We evaluate the impact of width on the barrier size in Figure 2. We note that for large values of width the barrier becomes small. This effect starts at lower width for simpler datasets such as MNIST and SVHN compared to the CIFAR datasets. A closer look reveals that the barrier increases with width up to a point, and beyond that increasing width leads to a lower barrier size. This effect is reminiscent of the double descent phenomenon (Belkin et al., 2019; Nakkiran et al., 2019). Checking the test error (Figure 8) indicates that in our experiments the barrier peak happens at the same width that is needed to fit the training data. This phenomenon is observed for both fully-connected and convolutional architectures. MLP architectures hit their peak at a lower width compared to CNNs, and a decreasing trend starts earlier. For ResNets the barrier size saturates at a high value and does not change. 2In all plots the barrier is evaluated using the training loss across 5 different random pairs (10 random SGD solutions). For easier comparison between all figures, we report the train accuracy barrier in all barrier plots. In our experiments, we observed that evaluating the barrier at α = 1/2 is a reasonable surrogate for taking the supremum over α (the difference is less than 10−4). Therefore, to save computation, we report the barrier value at α = 1/2.
The barrier value for the VGG architecture on different datasets is also saturated at a high value and does not change by increasing the width. This similar behavior observed for both ResNet and VGG architectures is due to the effect of depth, as discussed in the next paragraph. Depth: We vary network depth in Figure 3 to evaluate its impact on the barrier between optimal solutions obtained from different initializations. For MLPs, we fix the layer width at 2^10 while adding identical layers as shown along the x-axis. We observe a fast and significant barrier increase as more layers are added. For the VGG architecture family we observe significant barriers. This might be due to the effect of convolution or depth. In order to shed light on this observation, we use Shallow CNN (Neyshabur, 2020) with only two convolutional layers. As can be seen in Figure 3, when Shallow CNN has two layers the barrier size is low, while keeping the layer width fixed at 2^10 and adding more layers increases the barrier size. For residual networks we also consider three ResNet architectures with 18, 34 and 50 layers and observe the same barrier sizes as VGG for all these depth values. The main overall observation from the depth experiments is that for both fully-connected and convolutional architectures, increasing depth increases the barrier size significantly, so the effect of depth is not similar to that of width. This can also be attributed to the observation that deeper networks usually have a less smooth landscape (Li et al., 2017). Task difficulty and architecture choice: In Figure 4 we look into the impact of the task difficulty provided by the dataset choice (MNIST, SVHN, CIFAR-10, CIFAR-100, and ImageNet (Deng et al., 2009)) and the architecture type (one-layer MLP with 2^10 neurons, Shallow CNN with two convolutional layers and width of 2^10, VGG-16 with batch normalization, ResNet18 and ResNet50).
Each row in Figure 4a and Figure 4b shows the effect of architecture for a fixed task, e.g., fixing the task to SVHN and moving from MLP to Shallow CNN gives lower test error and hence a smaller barrier. Each column represents the effect of task difficulty for a specific architecture, e.g., fixing the architecture to Shallow CNN and moving from CIFAR-10 to CIFAR-100 increases test error and hence the barrier size. Although deep architectures like VGG16 and ResNet18 achieve low test error, the effect of depth discussed above saturates their barrier at a high level. Figure 4c aggregates the correlation between test error and barrier size. For MLP and Shallow CNN we observe a high positive correlation between test error and barrier size across different datasets. Deeper networks (VGGs, ResNets) form a cluster in the top-left, with low test error and high barrier size. | The paper makes a bold conjecture that all the solutions that SGD reaches on deep feedforward networks are linearly interpolatable if they are permuted correctly, and if the network is wide enough. They conduct fairly extensive experiments that are unable to refute this conjecture, but do not prove it with certainty. They prove a theorem that shows that such a result holds at random initialization for a fully connected 1-hidden-layer ReLU network. | SP:e2fa0bdff3b64951e002e8a16b1ca747b0f4e1d6
LEARNING GUARANTEES FOR GRAPH CONVOLUTIONAL NETWORKS ON THE STOCHASTIC BLOCK MODEL | 1 INTRODUCTION There is presently a large gap between what can be accomplished in practice using deep learning and what can be satisfactorily explained and predicted by the theory of deep learning. Nevertheless, the past several years have seen substantial developments in the theory of deep learning (Ge et al., 2017; Brutzkus & Globerson, 2017; Zhang et al., 2019a; Goel et al., 2020; Chen et al., 2020a). One factor contributing to the gap between the theory and practice of traditional NNs is that real-world data sets tend to have complex structure that is difficult to capture with formal definitions. For example, popular image classification models are capable of memorizing arbitrary data (Zhang et al., 2016), and yet they exhibit astonishing generalization performance on accurately-labeled natural images. Hence, any rigorous proof of the observed generalization performance of deep learning models on image classification tasks will necessarily require assumptions about the data that are sharp enough to separate random inputs from natural images. Because of the difficulty of giving an adequate characterization of real-world data, much of the recent progress in deep learning theory has instead focused on proving results using very simple (e.g., Gaussian) input distributions or in distribution-free settings (Ge et al., 2017; Brutzkus & Globerson, 2017; Zhang et al., 2019a; Vempala & Wilmes, 2019). Compared to traditional feed-forward (dense, convolutional, etc.) NNs, the theory of graph neural networks (GNNs) is still in its infancy. On the other hand, it appears substantially easier to give plausible descriptions of the combinatorial structure of real-world graph data sets than, e.g., to characterize the distribution of natural images (Drobyshevskiy & Turdakov, 2019).
We therefore believe that GNNs offer a natural setting for developing provable guarantees that are able to capture the power of deep learning on real-world datasets. In this paper, we contribute to that goal by giving the first rigorous guarantees of efficient semi-supervised learning of stochastic block models via a GNN. 1.1 GRAPH NEURAL NETWORKS Many natural datasets for diverse machine learning problems have a graph structure, including social networks, molecular structures, and transit networks. In order to efficiently exploit such combinatorial structure, a variety of GNN models have been proposed, tuned for different kinds of tasks. A number of taxonomies of GNN models have been proposed (Zhou et al., 2018; Wu et al., 2021); one of the most essential differences between GNN models is whether they are meant to label the graph as a whole or to label individual components of the graph, particularly vertices. From a theoretical perspective, the best-understood tasks for GNNs concern labeling the graph as a whole, for example the task of classifying a graph by its isomorphism type (Sato, 2020). In particular, it has been established that many GNN models are of comparable power to various versions of the Weisfeiler-Leman hierarchy1 (Xu et al., 2018; Morris et al., 2019). Some progress has also been made on the theory of GNNs for vertex-labeling tasks. Recent works by Sato et al. describe the representational power of certain GNN models for tasks such as computing minimum vertex covers (Sato et al., 2019). Garg et al. also give bounds on the representational power of GNN models and use Rademacher bounds to estimate the generalization ability of GNNs (Garg et al., 2020). Our results concern the task of semi-supervised community detection. In this problem, each vertex belongs to one community, and some subset of the vertices is labeled according to their community membership.
The task is to classify the community membership of the remaining vertices. This task has been one of the most intensively studied problems in the GNN literature, but there have not yet been any provable guarantees on the performance of proposed models. We study (spatial-based) graph convolutional models similar to the GCN model proposed in Kipf & Welling (2017). A single layer of such a model computes weights at each node by aggregating the weights at neighboring nodes and applying an activation function with learned parameters, e.g., a linear map followed by a ReLU. Many variations on this theme, including various sophisticated training regimes, have been proposed (Chen et al., 2017; Gao et al., 2018; Li et al., 2018; Zhang et al., 2019b; Chen et al., 2018), but no provable guarantees had been available for the performance of such models on natural data distributions until the present work. 2 MAIN RESULTS One motivation for GNNs as a target for progress in deep learning theory is that there are well-studied graph distributions that plausibly capture some of the structure of real-world data (Drobyshevskiy & Turdakov, 2019). For example, even fairly simple preferential attachment models plausibly capture some of the essential structure of the web (Kumar et al., 2000). Other graph models naturally capture community structures, the simplest of which is the Stochastic Block Model (SBM) (Holland et al., 1983). A graph is sampled from an SBM by first partitioning vertices into communities (with fixed or random sizes). Two vertices are connected with probability p if they belong to the same community and with probability q if they belong to different communities. In this paper, we consider the case of an SBM with two equal-sized communities, in which vertices have label 0 and 1 respectively. We denote the label of vertex x by ℓ(x) ∈ {0, 1}.
The graphs are parameterized as SBM(n, p, q), where n is the number of vertices, p is the probability of an intra-community connection, and q is the probability of a cross-community connection. We allow n to vary (but will require it to be sufficiently large), while p and q are of the form p = a log³n / n and q = b log³n / n for some fixed constants a > b. In the semi-supervised setting, the community labels of some portion of the vertices are revealed. We assume the label of each vertex is revealed independently with probability λ. The input-layer feature at a vertex x is (0, 0) if its label is not revealed, (1, 0) if its label is revealed to be 0, and (0, 1) if its label is revealed to be 1. Assumption 2.1 (Sparse Stochastic Block Model). The probabilities of intra- and cross-community connections are p = a log³n / n and q = b log³n / n, where a > b are constants. We study the problem of recovering the communities from such graphs using GNN models. Of course, recovering the communities of an SBM graph has been well studied and its computational complexity is fully understood in most cases (Abbe & Sandon, 2015; Kawamoto et al., 2019). SBM models are therefore a natural test case for understanding the power of GNN models for learning 1The Weisfeiler-Leman hierarchy is a family of polynomial-time iterative algorithms which provides a necessary but insufficient condition for graph isomorphism. community structure, and experimental studies have been done in this setting (Chen et al., 2020b; Yadav et al., 2019). Abbe et al. (2014) show a sharp threshold for the task of community recovery: (√p − √q)·√(n / log n) > √2. This threshold clearly holds in our case (at sufficiently large values of n), since p = a log³n / n, q = b log³n / n and a > b. The contribution here is not to learn the community models.
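A two-community SBM(n, p, q) with partially revealed labels, as described above, can be sampled in a few lines. This is a hedged sketch following our reading of the setup (variable names and the specific a, b, λ values are ours; n must be large enough that p, q < 1):

```python
import numpy as np

def sample_sbm(n, a, b, lam, rng):
    """Sample SBM(n, p, q) with p = a*log(n)**3/n, q = b*log(n)**3/n,
    two equal-sized communities, and each label revealed with prob. lam."""
    p = a * np.log(n) ** 3 / n
    q = b * np.log(n) ** 3 / n
    labels = np.repeat([0, 1], n // 2)           # first half community 0
    probs = np.where(labels[:, None] == labels[None, :], p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)
    A = (upper | upper.T).astype(float)          # symmetric, no self-loops
    # Input features: (0,0) hidden label, (1,0) revealed 0, (0,1) revealed 1.
    revealed = rng.random(n) < lam
    X = np.zeros((n, 2))
    X[revealed, labels[revealed]] = 1.0
    return A, X, labels

rng = np.random.default_rng(0)
A, X, labels = sample_sbm(2000, a=1.5, b=0.3, lam=0.3, rng=rng)
```

At n = 2000 with these constants, p ≈ 0.33 and q ≈ 0.066, so intra-community edges are visibly denser than cross-community ones.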
Rather, it is showing that (multi-layer) GCNs solve the classification problem, which is very much not trivial (it is non-convex, and the training loss curve is empirically non-monotonic). Our GNN models will be trained on a graph or several graphs generated by the SBM(n, p, q) model, and we seek to understand their accuracy on arbitrary SBM(n, p, q) graphs not necessarily in the training set but with the same parameters a, b determining p and q (with n allowed to vary). In particular, we study spatial-based graph convolutional models along the lines of the Graph Convolutional Networks (GCN) introduced in Kipf & Welling (2017). Each layer of the model computes a feature vector at every vertex of an input graph based on features of nearby vertices in the previous layer. A typical layer-wise update rule is of the form X^(k+1) = φ(Â X^(k) W^(k)), where • Â is a suitably-normalized adjacency matrix of shape n × n, where n is the number of vertices. Usually Â includes self-loops. • X^(k) gives the feature vector in the k-th layer at each vertex as a matrix of shape n × m_k, where m_k is the number of features in layer k. • φ is an activation function, such as the ReLU. • W^(k) are the trainable weights in the k-th layer, a matrix of shape m_k × m_{k+1}. In our version of this model, we define Â = (2 / (n(p + q))) Ã, where Ã = A + I, A is the adjacency matrix of a given graph, and I is the identity matrix. For the given SBM(n, p, q), a randomly selected vertex has n(p + q)/2 neighbors in expectation, so Â is obtained by normalizing each row of A + I by the average size of a neighborhood. Since very deep GCN models seem to provide little empirical benefit (Li et al., 2018), we use a single hidden layer with a softmax output layer. Furthermore, we introduce a bias term B at the second layer.
So the model has the following form: f(X, A) = softmax(Â φ(Â X W^(0)) W^(1) + B) = softmax((4 / (n²(p + q)²)) Ã φ(Ã X W^(0)) W^(1) + B), (1) where X is the input feature matrix of the graph and W^(0), W^(1) and B are trainable parameters. Let h denote the number of hidden features, which equals the number of columns of W^(0) and the number of rows of W^(1). We define the accuracy of the model as the probability of correctly predicting the label of a single vertex in a randomly generated SBM(n, p, q) graph where the label of each vertex is revealed with probability λ. We can now state our main result. Theorem 2.2. For any ϵ > 0 and δ > 0, given a GCN model with 1/δ ≤ h ≤ n hidden features and with parameters initialized independently from N(0, 1), if training graphs are sampled from SBM(n, p, q) with n ≥ max(Ω(1/ϵ)², Ω(1/δ)) and the label of each vertex revealed with probability λ, and if the model is trained by coordinate descent for k = O(log log(1/ϵ)) epochs, then with probability ≥ 1 − δ, the model achieves accuracy ≥ 1 − 4ϵ. Remark. We treat λ as a constant, so it is omitted in the big-O and Ω notation in the sampling and training complexity. We emphasize that the novelty of this theorem is not in learning two-class SBM models as such; this is a long-solved problem. Instead, this is the first proof of efficient learning for a GCN on semi-supervised community detection tasks using a natural family of random graph models. 3 PRELIMINARIES In this section, we first introduce notation (a table of notation is also given in the appendix for the reader's convenience) and some interpretations. Then we introduce the structure of the paper. Given a vertex y, denote the row of ÃX corresponding to y as (t^y_0, t^y_1), so t^y_0 and t^y_1 give the numbers of neighbors of y (including perhaps y itself) with revealed labels in class 0 and class 1 respectively.
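The two-layer model of Eq. (1) can be sketched directly: normalize Ã = A + I by the expected neighborhood size (our reading of the normalization in the text is Â = (2/(n(p+q))) Ã), apply a ReLU hidden layer, and finish with a softmax over the two classes plus a bias row. The weights below are random rather than trained, purely to show the shapes:

```python
import numpy as np

def gcn_forward(A, X, W0, W1, B_row, p, q):
    """f(X, A) = softmax(Ahat @ relu(Ahat @ X @ W0) @ W1 + B),
    with Ahat = (2 / (n (p + q))) * (A + I)."""
    n = A.shape[0]
    Ahat = (2.0 / (n * (p + q))) * (A + np.eye(n))
    H = np.maximum(Ahat @ X @ W0, 0.0)        # hidden features, shape (n, h)
    logits = Ahat @ H @ W1 + B_row            # B repeats one row (b0, b1)
    logits -= logits.max(axis=1, keepdims=True)   # stable softmax
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
n, h = 50, 8
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1); A = A + A.T                # symmetric, no self-loops
X = np.zeros((n, 2)); X[rng.random(n) < 0.5, 0] = 1.0   # toy revealed labels
W0 = rng.normal(size=(2, h))
W1 = rng.normal(size=(h, 2))
B_row = rng.normal(size=(1, 2))               # broadcasts to all n rows
probs = gcn_forward(A, X, W0, W1, B_row, p=0.2, q=0.2)
```

Each row of `probs` is the predicted class distribution for one vertex; the model predicts class 0 for vertex x exactly when ∆(x) = g_0(x) − g_1(x) > 0.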
Let W^(0) be the 2 × h matrix whose i-th column is (α_i, α′_i)^⊤, let W^(1) be the h × 2 matrix whose i-th row is (β_i, β′_i), and let B be the matrix each of whose rows is (b_0, b_1). Then α_i t^y_0 + α′_i t^y_1, 1 ≤ i ≤ h, gives the h features of vertex y in the hidden layer. The inner product of the y-th row of φ(ÃXW^(0)) with the columns of W^(1) gives weighted sums of the features of y: Σ_{i=1}^h β_i φ(α_i t^y_0 + α′_i t^y_1) and Σ_{i=1}^h β′_i φ(α_i t^y_0 + α′_i t^y_1), where φ represents the ReLU function. Given a vertex x, the row of Â φ(ÂXW^(0)) W^(1) corresponding to x is denoted by (f_0(x), f_1(x)) and is of the form ((4 / (n²(p+q)²)) Σ_{y∈G} 1[y ∼ x] Σ_{i=1}^h β_i φ(α_i t^y_0 + α′_i t^y_1), (4 / (n²(p+q)²)) Σ_{y∈G} 1[y ∼ x] Σ_{i=1}^h β′_i φ(α_i t^y_0 + α′_i t^y_1)), (2) where 1[y ∼ x] is equal to 1 if y and x are connected, and 0 otherwise. Denote f^i_0(x) := (4β_i / (n²(p+q)²)) Σ_{y∈G} 1[y ∼ x] φ(α_i t^y_0 + α′_i t^y_1) and f^i_1(x) := (4β′_i / (n²(p+q)²)) Σ_{y∈G} 1[y ∼ x] φ(α_i t^y_0 + α′_i t^y_1), so f_0(x) = Σ_{i=1}^h f^i_0(x) and f_1(x) = Σ_{i=1}^h f^i_1(x). Denote g_j(x) := f_j(x) + b_j, j = 0, 1, where (g_0(x), g_1(x)) represents the logit of the model corresponding to x. Denote ∆(x) := g_0(x) − g_1(x). In order to make correct predictions, we need ∆(x) > 0 when ℓ(x) = 0 and ∆(x) < 0 when ℓ(x) = 1. The bias term B is useful in our analysis because its derivative controls how imbalanced the current loss is between the classes. In training we consider the cross-entropy loss, denoted L, and have E[∂L/∂b_0] = −E[∂L/∂b_1] = −(1/2)(E[Z | ℓ(x) = 0] − E[Z | ℓ(x) = 1]), where Z = exp(g_{1−ℓ(x)}(x)) / (exp(g_0(x)) + exp(g_1(x))). Z can be regarded as a measure of wrong prediction: the numerator is the exponential of the output corresponding to the wrong label and the denominator is a normalizer. It is easy to see that Z > 1/2 if the prediction is wrong, and Z < 1/2 if the prediction is correct.
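The quantity Z defined above is simply the softmax probability assigned to the wrong class, so Z > 1/2 exactly when the prediction is wrong. A quick numerical check (function name is ours):

```python
import numpy as np

def wrongness(g0, g1, label):
    """Z = exp(g_{1-label}) / (exp(g0) + exp(g1)):
    the softmax mass placed on the wrong class."""
    g = np.array([g0, g1])
    wrong = 1 - label
    return np.exp(g[wrong]) / np.exp(g).sum()

# Correct prediction (g0 > g1 and true label 0): Z < 1/2.
assert wrongness(2.0, -1.0, label=0) < 0.5
# Wrong prediction (g0 > g1 but true label 1): Z > 1/2.
assert wrongness(2.0, -1.0, label=1) > 0.5
```

Averaging Z conditioned on each class is what the bias-gradient identity above measures, which is why |E[∂L/∂b_0]| ≈ 0 corresponds to class-balanced loss.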
When |E[∂L/∂b_0]| ≈ 0, the model's loss is balanced in the sense that |E[Z | ℓ(x) = 0] − E[Z | ℓ(x) = 1]| ≈ 0. In order to have balanced performance in every epoch, we train the model through coordinate descent instead of conventional gradient descent. Specifically, in each epoch we first update b_0 and b_1 until |E[∂L/∂b_0]| is smaller than some threshold. Then we update the other parameters. The performance of the model depends on the concentration and separation of ∆(x) for ℓ(x) = 0 and ℓ(x) = 1 respectively. In Section 4 we show that ∆(x) is concentrated at one of two values, denoted by µ_0 and µ_1, depending only on whether the label ℓ(x) is 0 or 1. The proof depends on the different parameter regimes of the hidden neurons. In Section 5, we analyze the dynamics of the hidden neurons throughout training to show that the concentration and separation improve at a controlled rate. Based on this information, in Section 6 we prove the main theorem. Section 7 shows some experimental results to verify our theory. The paper ends with future directions in Section 8. 4 CONCENTRATION AND SEPARATION OF OUTPUT The difference of the logits is ∆(x) = g_0(x) − g_1(x) = f_0(x) − f_1(x) + b_0 − b_1 = Σ_{i=1}^h ∆_i(x) + b_0 − b_1, where ∆_i(x) = f^i_0(x) − f^i_1(x) = (4(β_i − β′_i) / (n²(p+q)²)) Σ_{y∈G} 1[y ∼ x] φ(α_i t^y_0 + α′_i t^y_1). For brevity, we write ∆(x) as ∆ and ∆_i(x) as ∆_i. In order to estimate ∆, we need to estimate each ∆_i, 1 ≤ i ≤ h. Our fine-grained analysis of the dynamics of coordinate descent on GCNs relies on a classification of neurons into three families based on the sign and scale of the parameters: “good type”, “bad type” and “harmless type”. The names also indicate whether the neuron has a positive contribution to the value of µ_0 − µ_1, where µ_0 and µ_1 are high-probability estimates of ∆(x) for ℓ(x) = 0 and 1 respectively.
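The coordinate-descent scheme described above — drive |∂L/∂b| below a threshold first, then take a step on the remaining parameters — can be sketched on a toy differentiable loss. The paper applies this to the GCN's cross-entropy; everything here (the toy loss, step sizes, thresholds) is illustrative:

```python
def coordinate_descent(grad_b, grad_w, b0, w0, lr=0.1, tol=1e-3, epochs=20):
    """Each epoch: first update b until |dL/db| < tol (balancing step),
    then take one gradient step on the other parameters w."""
    b, w = b0, w0
    for _ in range(epochs):
        while abs(grad_b(b, w)) >= tol:       # inner loop: balance the bias
            b -= lr * grad_b(b, w)
        w -= lr * grad_w(b, w)                # then update the other parameters
    return b, w

# Toy loss L(b, w) = (b - w)**2 + (w - 3)**2, minimized at b = w = 3.
grad_b = lambda b, w: 2 * (b - w)
grad_w = lambda b, w: -2 * (b - w) + 2 * (w - 3)
b, w = coordinate_descent(grad_b, grad_w, b0=0.0, w0=0.0)
```

The inner loop plays the role of the bias-balancing step in the paper; only after the bias gradient is small does the epoch touch the weight parameters.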
We show that a “good type” neuron makes a positive contribution; the contribution of a “bad type” neuron is negative but lower bounded; a “harmless type” neuron's contribution is non-negative (see Corollary A.4 and the remark following it). We specifically describe the parameter regime of each type in the following subsections. We analyze the dynamics of these types throughout coordinate descent in the next section. First we give some definitions. Definition 1. For 1 ≤ i ≤ h, we call (α_i, α′_i, β_i, β′_i) the i-th neuron of the model, where (α_i, α′_i)^⊤ is the i-th column of W^(0) and (β_i, β′_i) is the i-th row of W^(1). Definition 2. We say that the i-th neuron is order-aligned if (α_i − α′_i)(β_i − β′_i) > 0; otherwise we say it is order-misaligned. 4.1 CLASSIFICATION OF NEURON PARAMETER REGIMES We say the i-th neuron is of “good type” if it satisfies either (G1) or (G2) below. (There is also the symmetric case obtained by switching α_i with α′_i and β_i with β′_i. For brevity, we only consider the cases where α_i > α′_i. This applies to the “bad” and “harmless” types below as well.) Neurons of this type are order-aligned, and either both α_i and α′_i are positive or the ratio between α_i and α′_i is large enough: α_i > α′_i > 0 and β_i > β′_i (G1); α_i > 0 > α′_i, |α_i/α′_i| > 1 and β_i > β′_i (G2). We say the i-th neuron is of “bad type” if it satisfies either (B1), (B2) or (B3). Neurons of this type are order-misaligned, and α_i, α′_i are either both positive or have opposite signs: α_i > α′_i > 0 and β_i < β′_i (B1); α_i > 0 > α′_i, |α_i/α′_i| > (q/p)(1 + log^{−1/3} n) and β_i < β′_i (B2); α_i > 0 > α′_i, |α_i/α′_i| ≤ (q/p)(1 + log^{−1/3} n) (B3). We say that the i-th neuron is of “harmless type” if it satisfies either (H1) or (H2): α_i > 0 > α′_i, |α_i/α′_i| ∈ ((q/p)(1 + log^{−1/3} n), 1] and β_i > β′_i (H1); α_i ≤ 0 and α′_i ≤ 0 (H2). 4.2 CONCENTRATION AND SEPARATION Theorem 4.1.
If the i-th neuron is of “good type” satisfying (G1) or of “bad type” satisfying (B1), then for ℓ(x) = 0: P[ |∆_i − (λ(β_i − β′_i)/(p+q)²)[(p² + q²)α_i + 2pq α′_i]| ≤ (α_i − α′_i)(β_i − β′_i) O(log^{−1/2} n) | ℓ(x) = 0 ] ≥ 1 − O(1/n²), and for ℓ(x) = 1: P[ |∆_i − (λ(β_i − β′_i)/(p+q)²)[2pq α_i + (p² + q²)α′_i]| ≤ (α_i − α′_i)(β_i − β′_i) O(log^{−1/2} n) | ℓ(x) = 1 ] ≥ 1 − O(1/n²). Similar concentrations hold for neurons satisfying (G2), (B2) and (B3), and for neurons of “harmless type.” We apply the method of bounded differences to show the concentration. The details are shown in the appendix. Given the concentration of ∆_i for each type of neuron, we estimate the concentration of the output ∆ = Σ_{i=1}^h ∆_i + b_0 − b_1. For the i-th neuron, we denote the high-probability estimate of ∆_i given in the statement of Theorem 4.1 as m^i_0 when ℓ(x) = 0 and m^i_1 when ℓ(x) = 1. By a union bound, we have the following corollary. Corollary 4.2. Given a vertex x ∈ G with label unrevealed, we have P[ |∆ − µ_j| ≤ δ | ℓ(x) = j ] ≥ 1 − O(1/n), (3) where µ_j = (Σ_{i=1}^h m^i_j) + b_0 − b_1, j = 0, 1, and δ = Σ_{i=1}^h |α_i − α′_i||β_i − β′_i| O(log^{−1/2} n). For any ϵ > 0, we require the probability of concentration in (3) to be at least 1 − ϵ̃, where ϵ̃ = o(ϵ). If we choose ϵ̃ = ϵ², then we set 1 − O(1/n) ≥ 1 − ϵ², i.e., n ≥ Ω(1/ϵ)². Our following analysis will be based on this condition. From Theorem 4.1, we have the following result about the value of m^i_0 − m^i_1. Corollary 4.3. • If the i-th neuron is of “good type” and satisfies (G1), then m^i_0 − m^i_1 = λ|α_i − α′_i||β_i − β′_i| ((p − q)/(p + q))². • If the i-th neuron is of “bad type” and satisfies (B1), then m^i_0 − m^i_1 = −λ|α_i − α′_i||β_i − β′_i| ((p − q)/(p + q))². • If the i-th neuron is of “harmless type” and satisfies (H1), then m^i_0 − m^i_1 = λ|β_i − β′_i||p α_i + q α′_i| (p − q)/(p + q)².
Similar results for neurons satisfying (G2), (B2), (B3) and (H1) are stated in the appendix, along with the proofs.

Remark.

• As we can see from Corollary 4.3, the value of $m^i_0 - m^i_1$ is positive for "good type" neurons, non-negative for "harmless type" neurons, and may be negative (but bounded below) for "bad type" neurons. Since positive values of $m^i_0 - m^i_1$ decrease the loss of the model, this explains the names of the types.

• $m^i_0 - m^i_1$ is proportional to $|\alpha_i - \alpha'_i|\,|\beta_i - \beta'_i|$. In the next section, we analyze the dynamics of the parameters $\alpha_i, \alpha'_i, \beta_i, \beta'_i$. Using our understanding of these dynamics, in Theorem 6.2 we present a refined result about the separation of the output which depends only on the initialization of the parameters.

• Let $c := \mu_0 - \mu_1 = \sum_{i=1}^{h} (m^i_0 - m^i_1)$. By the two corollaries above, we have $\delta = o(|c|)$. The balanced loss guaranteed by the bias terms and the coordinate descent scheme ensures that $\mu_0 = \Omega(c)$ and $|\mu_1| = \Omega(c)$. It then follows that if the loss is sufficiently small, both $\mu_0$ and $\mu_1$ have the correct sign, i.e., $\mu_0 > 0 > \mu_1$. (Otherwise, due to the concentration of the output, the model makes a wrong prediction and the loss is large.) So we will eventually have $\delta = o(\mu_0)$ and $\delta = o(|\mu_1|)$.

5 DYNAMICS OF PARAMETERS

In this section, we describe the dynamics of each type of neuron through coordinate descent, which can be visualized in the following figure, in which the arrows indicate movements between types that can happen with non-negligible probability. There are two noteworthy points in this figure. First, "good type" parameters are preserved under coordinate descent. Second, there are no arrows coming into "bad type" except from itself. These dynamics are proved by estimating the gradient of the loss function for each type of neuron.
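For concreteness, the type boundaries of Section 4.1 that these dynamics move between can be transcribed into a small checker. This is our own illustrative helper (the name, the tie-handling, and taking the hidden constant in the threshold $(q/p)(1 + \log^{-1/3} n)$ to be 1 are all assumptions, not the paper's):

```python
import math

def classify_neuron(alpha, alpha_p, beta, beta_p, p, q, n):
    """Classify a neuron as 'good', 'bad' or 'harmless' per Section 4.1,
    in the case alpha > alpha' (the symmetric case is obtained by swapping
    the primed and unprimed parameters). Ties beta == beta' fall outside
    the paper's strict inequalities and raise an error here."""
    assert alpha > alpha_p, "only the alpha > alpha' case is handled"
    tau = (q / p) * (1 + math.log(n) ** (-1.0 / 3.0))
    if alpha <= 0:                        # then alpha' < 0 as well: (H2)
        return "harmless"
    if beta == beta_p or alpha_p == 0:
        raise ValueError("tie falls outside the case analysis")
    if alpha_p > 0:                       # both first-layer weights positive
        return "good" if beta > beta_p else "bad"          # (G1) / (B1)
    ratio = abs(alpha / alpha_p)          # remaining case: alpha > 0 > alpha'
    if ratio <= tau:
        return "bad"                                       # (B3)
    if beta < beta_p:
        return "bad"                                       # (B2)
    return "good" if ratio > 1 else "harmless"             # (G2) / (H1)
```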
Because of the non-linearity of the activation, we rely heavily on the concentration results proved above to obtain tight estimates. Without these concentration results, even estimating the sign of the gradient seems difficult. The proofs and experiments concerning the dynamics of the hidden neurons are deferred to the appendix.

6 LEARNING GUARANTEE

In this section, we prove our main result, which states that with high probability a trained GCN can detect communities in the SBM with any desired accuracy. The proof is based on the following theorem, which shows that if $\mu_0$ and $\mu_1$ are separated enough, then the model achieves high accuracy.

Theorem 6.1. For all $\epsilon > 0$, provided that the difference between $\mu_0$ and $\mu_1$ is large enough that $\sigma\!\left(-\frac{\mu_0 - \mu_1}{2}\right) < \frac{\epsilon}{2}$, if $\left|E\!\left[\frac{\partial L}{\partial b_0}\right]\right| < \frac{\epsilon}{4}$, then
$$P[\Delta < 0 \mid \ell(x) = 0] < 4\epsilon, \qquad P[\Delta > 0 \mid \ell(x) = 1] < 4\epsilon,$$
where $\sigma(x) := \frac{1}{1 + \exp(-x)}$ denotes the sigmoid function.

Next we show that the model can achieve such separation between $\mu_0$ and $\mu_1$ through coordinate descent. In order to make constant-size updates of the parameters at every epoch, we set an adaptive learning rate $\eta_k = \frac{1}{E[Z^{(k)}]}$, where $Z^{(k)}$ is the value of $Z$ at the $k$-th epoch. We first refine Corollary 4.3 about the separation of the output for each type of neuron ($m^i_0 - m^i_1$) using the dynamics of the parameters.

Theorem 6.2 (separation of output). Let $m^i_0$ and $m^i_1$ be defined as in Section 4, and train the model for $k$ epochs by the defined coordinate descent with adaptive learning rate $\eta_k = \frac{1}{E[Z^{(k)}]}$. Then:

• if the $i$-th neuron is of "good type", then
$$m^i_0 - m^i_1 \ge A^{(0)}_i B^{(0)}_i\,\frac{\lambda}{2}\left(\frac{p-q}{p+q}\right)^2\left(1 + \frac{\sqrt{2}\,\lambda}{8}\left(\frac{p-q}{p+q}\right)^2\right)^{2k};$$

• if the $i$-th neuron is of "bad type", then
$$m^i_0 - m^i_1 \ge -k\left((A^{(0)}_i)^2 + (B^{(0)}_i)^2\right)\frac{\lambda p (p-q)}{(p+q)^2};$$

• if the $i$-th neuron is of "harmless type", then $m^i_0 - m^i_1 \ge 0$;

where $A^{(0)}_i = \alpha^{(0)}_i - \alpha'^{(0)}_i$ and $B^{(0)}_i = \beta^{(0)}_i - \beta'^{(0)}_i$.
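Combining Theorems 6.1 and 6.2 for a single good-type neuron gives a crude estimate of how many epochs are needed before the required separation $2\log\frac{2}{\epsilon}$ is reached. The sketch below is illustrative only: it reads the growth factor as $1 + \frac{\sqrt{2}\lambda}{8}\big(\frac{p-q}{p+q}\big)^2$, takes all constants hidden in the $O(\cdot)$ notation to be 1, and ignores bad and harmless neurons.

```python
import math

def epochs_needed(p, q, lam, A0, B0, eps):
    """Smallest k for which the good-type lower bound of Theorem 6.2,
    A0*B0*(lam/2)*r * (1 + sqrt(2)*lam/8 * r)^(2k) with r = ((p-q)/(p+q))^2,
    reaches the separation 2*log(2/eps) that Theorem 6.1 asks for.
    A0 = alpha - alpha' and B0 = beta - beta' at initialization."""
    r = ((p - q) / (p + q)) ** 2
    growth = 1.0 + math.sqrt(2.0) * lam / 8.0 * r
    base = A0 * B0 * lam / 2.0 * r
    target = 2.0 * math.log(2.0 / eps)
    k = 0
    while base * growth ** (2 * k) < target:
        k += 1
    return k
```

The doubly-logarithmic dependence $k = O(\log\log\frac{1}{\epsilon})$ claimed in the next paragraph of the paper is visible here: the exponent $2k$ only needs to reach the logarithm of a target that is itself logarithmic in $1/\epsilon$.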
Next we present a result about initialization , which shows that with high probability , there are enough “ good type ” neurons and parameters have appropriate scale . Lemma 6.3 . Suppose all parameters in W ( 0 ) and W ( 1 ) are initialized independently following standard normal distribution . Then the number hg of neurons initialized as “ good type ” satisfies P [ hg ≥ h8 ] ≥ 1− exp ( − h 64 ) . Furthermore , P [ h∑ i=1 ( αi−α′i ) 2+ ( βi−β′i ) 2 ≤ 5h ] ≥ 1−O ( 1 h ) , P [ ∑ the i-th neuron initialized as “ good type ” |αi−α′i||βi−β′i| ≥ h 80 ] ≥ 1−O ( 1 h ) . Now we can prove the final result . Proof of Theorem 2.2 . First we show that if the loss E [ Z ] is small enough , the model achieves desired accuracy . Indeed , if E [ Z ] < 2ϵ , since E [ Z ] = E [ Z|pred is wrong ] P [ pred is wrong ] +E [ Z|pred is correct ] P [ pred is correct ] ≥ 1 2 P [ pred is wrong ] , we have P [ pred is wrong ] ≤ 4ϵ , i.e. , P [ pred is correct ] > 1− 4ϵ . Otherwise , E [ Z ] ≥ 2ϵ , since E [ Z ] = 12 ( E [ Z|ℓ ( z ) = 0 ] + E [ Z|ℓ ( z ) = 1 ] ) , we have E [ Z|ℓ ( z ) = 0 ] +E [ Z|ℓ ( z ) = 1 ] ≥ 4ϵ . On the other hand , ∣∣E [ ∂L∂b0 ] ∣∣ < ϵ implies that ∣∣E [ Z|ℓ ( z ) = 0 ] −E [ Z|ℓ ( z ) = 1 ] ∣∣ < 2ϵ . By Theorem 6.2 , µ0 − µ1 = h∑ i=1 ( mi0 −mi1 ) = ∑ i∈ “ good ” ( mi0 −mi1 ) + ∑ i∈ “ bad ” ( mi0 −mi1 ) + ∑ i∈ “ harmless ” ( mi0 −mi1 ) ≥ λ 2 ( p− q p+ q ) 2 ( 1 + √ 2λ 8 ( p− q p+ q ) 2 ) 2k ∑ i∈ “ good ” A ( 0 ) i B ( 0 ) i − k λp ( p− q ) ( p+ q ) 2 ∑ i∈ “ bad ” ( ( A ( 0 ) i ) 2 + ( B ( 0 ) i ) 2 ) . By Lemma 6.3 , with probability ≥ 1−O ( 1h ) , ∑ i∈ “ good ” A ( 0 ) i B ( 0 ) i ≥ h 80 , ∑ i∈ “ bad ” ( ( A ( 0 ) i ) 2 + ( B ( 0 ) i ) 2 ) ≤ 5h . Since h ≥ 1δ , then with probability ≥ 1− δ , µ0 − µ1 ≥ h ( λ 160 ( p− q p+ q ) 2 ( 1 + √ 2λ 8 ( p− q p+ q ) 2 ) 2k − k 5λp ( p− q ) ( p+ q ) 2 ) ≥ h ( C1 ( 1 + C2 ) 2k − C3k ) , ( 4 ) where C1 , C2 and C3 are constants determined by p , q and λ . 
By Theorem 6.1, if $(4) \ge 2\log\frac{2}{\epsilon}$ (so that $\sigma\!\left(-\frac{\mu_0 - \mu_1}{2}\right) \le \frac{\epsilon}{2}$), then the model achieves accuracy $\ge 1 - 4\epsilon$. It is sufficient to have $C_1(1 + C_2)^{2k} - C_3 k \ge 2\log\frac{2}{\epsilon}$, i.e., $k = O(\log\log\frac{1}{\epsilon})$.

7 EXPERIMENTS

We present experiments verifying Theorem 2.2. In particular, our experiments demonstrate that accuracy increases with $n$, that the probability of obtaining a high-accuracy model increases with $h$, and that coordinate descent is able to recover high-accuracy models in the sparse regime of Assumption 2.1. Additional plots demonstrating the dynamics of the hidden neurons, with their ratios and differences, can be found in the appendix.

Experiment 1. In this experiment, we plot an estimate of the accuracy versus the epoch for varying $n$. The parameters $p, q$ of the SBM follow Assumption 2.1, where we choose $a = 1.0$ and $b = 0.7$. We set $h = 20$, $\lambda = 0.3$ and run 40 independent experiments for $n = 250$, $500$ and $1000$ respectively. In each experiment we train the model for 100 epochs. The training set consists of 40 randomly generated graphs from $\mathrm{SBM}(n, p, q)$. We validate the performance by the percentage of correct predictions on 200 random vertices, each from a randomly generated graph. The result is shown in Figure 2. The shaded region for each $n$ is obtained from the max, min and mean percentages over the 40 experiments. The result verifies Theorem 2.2, which shows that the accuracy of the model increases with $n$.

Experiment 2. In this experiment, we show the effect of the number of hidden neurons $h$. The parameters of the SBM are the same as in Experiment 1. We set $h = 2, 5, 20$. For each pair $(n, h)$ we run 40 independent experiments and show the distribution of the validation accuracy in Figure 3. From the top row to the bottom, $n$ increases from 250 to 1000; from the left column to the right, $h$ increases from 2 to 20. In each plot, the x-axis represents the accuracy, while the y-axis represents the count of experiments.
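A minimal sketch of the data generation underlying these experiments is a plain $\mathrm{SBM}(n, p, q)$ sampler with two balanced communities. The exact dependence of $p$ and $q$ on $a$, $b$ and $n$ under Assumption 2.1 is not restated here, so concrete probabilities are passed in directly; the function below is our own illustration, not the authors' code.

```python
import random

def sample_sbm(n, p, q, seed=0):
    """Sample SBM(n, p, q) as an adjacency list: vertices 0..n//2-1 form
    community 0, the rest community 1. Each same-community pair is joined
    with probability p, each cross-community pair with probability q."""
    rng = random.Random(seed)
    label = [0] * (n // 2) + [1] * (n - n // 2)
    adj = [[] for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            prob = p if label[u] == label[v] else q
            if rng.random() < prob:
                adj[u].append(v)
                adj[v].append(u)
    return adj, label
```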
According to Theorem 2.2, the probability of achieving high accuracy is $1 - O(1/h)$ and the accuracy increases with $n$. Indeed, in each row of Figure 3, as $h$ increases, the probability of achieving high accuracy grows; in each column, as $n$ increases, the model achieves higher accuracy. These results corroborate the theory developed in the paper.

8 FUTURE DIRECTIONS

Graph neural networks offer a promising setting for progress on the more general theory of deep learning, because random graph models more plausibly capture the structure of real-world data compared to, e.g., the Gaussian inputs often used to prove deep learning guarantees for traditional feed-forward neural networks. This paper has initiated the project of proving training guarantees for semi-supervised learning using GCNs on SBM models, but much more work remains to be done. Arguably, the sparsest SBM models (expected constant degree) are the most compelling from the perspective of modeling real-world communities, so it would be interesting to extend these results to that setting. Models with more than two blocks, or with overlapping communities (Petti & Vempala, 2018), would be even closer to real-world structure. We hope this initial step spurs further interest in provable guarantees for training neural networks using plausible models of real-world data as the input distribution.

REFERENCES

Emmanuel Abbe and Colin Sandon. Recovering communities in the general stochastic block model without knowing the parameters. arXiv preprint arXiv:1506.03729, 2015.

Emmanuel Abbe, Afonso S. Bandeira, and Georgina Hall. Exact recovery in the stochastic block model, 2014.

Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with Gaussian inputs. In International Conference on Machine Learning, pp. 605–614. PMLR, 2017.

Jianfei Chen, Jun Zhu, and Le Song. Stochastic training of graph convolutional networks with variance reduction.
arXiv preprint arXiv:1710.10568, 2017.

Jie Chen, Tengfei Ma, and Cao Xiao. FastGCN: fast learning with graph convolutional networks via importance sampling. arXiv preprint arXiv:1801.10247, 2018.

Sitan Chen, Adam R. Klivans, and Raghu Meka. Learning deep ReLU networks is fixed-parameter tractable, 2020a.

Zhengdao Chen, Xiang Li, and Joan Bruna. Supervised community detection with line graph neural networks, 2020b.

Mikhail Drobyshevskiy and Denis Turdakov. Random graph modeling: A survey of the concepts. ACM Comput. Surv., 52(6):1–36, December 2019.

Hongyang Gao, Zhengyang Wang, and Shuiwang Ji. Large-scale learnable graph convolutional networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1416–1424, 2018.

Vikas Garg, Stefanie Jegelka, and Tommi Jaakkola. Generalization and representational limits of graph neural networks. In International Conference on Machine Learning, pp. 3419–3430. PMLR, 2020.

Rong Ge, Jason D. Lee, and Tengyu Ma. Learning one-hidden-layer neural networks with landscape design. CoRR, abs/1711.00501, 2017.

Surbhi Goel, Aravind Gollakota, Zhihan Jin, Sushrut Karmalkar, and Adam Klivans. Superpolynomial lower bounds for learning one-layer neural networks using gradient descent. In International Conference on Machine Learning, pp. 3587–3596. PMLR, 2020.

Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.

Tatsuro Kawamoto, Masashi Tsubaki, and Tomoyuki Obuchi. Mean-field theory of graph neural networks in graph partitioning. Journal of Statistical Mechanics: Theory and Experiment, 2019(12):124007, December 2019. doi: 10.1088/1742-5468/ab3456. URL https://doi.org/10.1088/1742-5468/ab3456.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks, 2017.

R. Kumar, P. Raghavan, S. Rajagopalan, D. Sivakumar, A. Tomkins, and E. Upfal. Stochastic models for the web graph. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pp. 57–65, 2000. doi: 10.1109/SFCS.2000.892065.

Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 4602–4609, 2019.

Samantha Petti and Santosh S. Vempala. Approximating sparse graphs: The random overlapping communities model, 2018.

Ryoma Sato. A survey on the expressive power of graph neural networks, 2020.

Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Approximation ratios of graph neural networks for combinatorial problems. arXiv preprint arXiv:1905.10261, 2019.

Santosh Vempala and John Wilmes. Gradient descent for one-hidden-layer neural networks: Polynomial convergence and SQ lower bounds. In Conference on Learning Theory, pp. 3115–3117. PMLR, 2019.

Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4–24, 2021.

Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.

Prateek Yadav, Madhav Nimishakavi, Naganand Yadati, Shikhar Vashishth, Arun Rajkumar, and Partha Talukdar. Lovasz convolutional networks. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1978–1987. PMLR, 2019.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals.
Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.

Xiao Zhang, Yaodong Yu, Lingxiao Wang, and Quanquan Gu. Learning one-hidden-layer ReLU networks via gradient descent. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1524–1534. PMLR, 2019a.

Yingxue Zhang, Soumyasundar Pal, Mark Coates, and Deniz Ustebay. Bayesian graph convolutional neural networks for semi-supervised classification. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):5829–5836, 2019b.

Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434, 2018.

A CONCENTRATION AND SEPARATION OF OUTPUT

Let $N(x, 0)$, $N(x, 1)$ denote the neighborhoods of vertex $x$ with label 0 and 1 respectively, i.e.,
$$N(x, 0) = \{y \in G : y \sim x,\ \ell(y) = 0\}, \qquad N(x, 1) = \{y \in G : y \sim x,\ \ell(y) = 1\}.$$
By the definition of the SBM, both $|N(x, 0)|$ and $|N(x, 1)|$ are binomial random variables: for $\ell(x) = 0$, $|N(x, 0)| \sim B(\frac{n}{2}, p)$ and $|N(x, 1)| \sim B(\frac{n}{2}, q)$, while for $\ell(x) = 1$, $|N(x, 0)| \sim B(\frac{n}{2}, q)$ and $|N(x, 1)| \sim B(\frac{n}{2}, p)$. Moreover, $t^y_0$ and $t^y_1$ are also binomial random variables: for $\ell(y) = 0$, $t^y_0 \sim B(\frac{n\lambda}{2}, p)$ and $t^y_1 \sim B(\frac{n\lambda}{2}, q)$, and similarly for $\ell(y) = 1$. Our following analysis is based on the condition that $|N(x, 0)|$, $|N(x, 1)|$, $t^x_0$ and $t^x_1$ are in their high-probability ranges for all $x \in G$. Specifically, we require that for all $x$ with $\ell(x) = 0$ (the analogous conditions for $\ell(x) = 1$ are omitted):
$$\left|\,|N(x, 0)| - \frac{np}{2}\right| \le O\big((np)^{\frac{5}{6}}\big), \qquad \left|\,|N(x, 1)| - \frac{nq}{2}\right| \le O\big((nq)^{\frac{5}{6}}\big); \qquad \text{(Cond)}$$
$$\left|t^x_0 - \frac{n\lambda p}{2}\right| \le O\big((np)^{\frac{5}{6}}\big), \qquad \left|t^x_1 - \frac{n\lambda q}{2}\right| \le O\big((nq)^{\frac{5}{6}}\big).$$
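Condition (Cond) can be sanity-checked empirically on a per-vertex basis. The sketch below draws the four binomial counts directly and treats the hidden $O(\cdot)$ constants as 1; it is an illustration of how loose the bounds are for moderate $n$, not part of the proof (which uses binomial tail bounds plus a union bound over all vertices).

```python
import random

def cond_holds(n, p, q, lam, trials=200, seed=0):
    """Fraction of trials in which a label-0 vertex satisfies (Cond):
    |N(x,0)| ~ Bin(n/2, p), |N(x,1)| ~ Bin(n/2, q), t0 ~ Bin(n*lam/2, p),
    t1 ~ Bin(n*lam/2, q) all stay within (np)^(5/6), resp. (nq)^(5/6),
    of their means (O(.) constants taken to be 1)."""
    rng = random.Random(seed)

    def binom(m, pr):
        return sum(rng.random() < pr for _ in range(m))

    bp, bq = (n * p) ** (5 / 6), (n * q) ** (5 / 6)
    ok = 0
    for _ in range(trials):
        N0, N1 = binom(n // 2, p), binom(n // 2, q)
        t0, t1 = binom(int(n * lam / 2), p), binom(int(n * lam / 2), q)
        ok += (abs(N0 - n * p / 2) <= bp and abs(N1 - n * q / 2) <= bq
               and abs(t0 - n * lam * p / 2) <= bp
               and abs(t1 - n * lam * q / 2) <= bq)
    return ok / trials
```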
By tail bound of binomial random variables and union bound , we have P [ ( Cond ) ] ≥ 1− 1 n2 . Under this condition , we show the concentration of ∆i for each type . A.1 “ GOOD TYPE ” NEURONS For convenience , according to the activation pattern of ϕ ( αit y 0 + α ′ it y 1 ) , we further divide ( G2 ) into subcases ( G2,1 ) , ( G2,2 ) and ( G2,3 ) by to the ratio of ∣∣αi α′i ∣∣ . For example , in ( G1 ) and ( G2,1 ) , ϕ ( αit y 0 + α ′ it y 1 ) is active for both ℓ ( y ) = 0 and ℓ ( y ) = 1 ; in ( G2,1 ) , it is only active for ℓ ( y ) = 0.∣∣∣∣αiα′i ∣∣∣∣ > pq ( 1 + log− 13 n ) ( G2,1 ) 1 < ∣∣∣∣αiα′i ∣∣∣∣ < pq ( 1− log− 13 n ) ( G2,2 ) p q ( 1− log− 1 3 n ) ≤ ∣∣∣∣αiα′i ∣∣∣∣ ≤ pq ( 1 + log− 13 n ) ( G2,3 ) We have the following estimation of ∆i in “ good type ” . Theorem A.1 ( concentration of output from “ good type ” neurons ) . If the i-th neuron is of “ good type ” , then • in both ( G1 ) and ( G2,1 ) : P [ ∣∣∆i− λ ( βi − β′i ) ( p+ q ) 2 [ ( p2+ q2 ) αi+2pqα ′ i ] ∣∣ ≤ ( αi−α′i ) ( βi−β′i ) O ( log− 12 n ) |ℓ ( x ) = 0 ] ≥ 1−O ( 1 n2 ) P [ ∣∣∆i− λ ( βi − β′i ) ( p+ q ) 2 [ 2pqαi+ ( p 2+ q2 ) α′i ] ∣∣ ≤ ( αi−α′i ) ( βi−β′i ) O ( log− 12 n ) |ℓ ( x ) = 1 ] ≥ 1−O ( 1 n2 ) • in ( G2,2 ) : P [ ∣∣∆i − λ ( βi − β′i ) p ( p+ q ) 2 ( pαi + qα ′ i ) ∣∣ ≤ ( αi − α′i ) ( βi − β′i ) O ( log− 12 n ) |ℓ ( x ) = 0 ] ≥ 1−O ( 1 n2 ) P [ ∣∣∆i − λ ( βi − β′i ) q ( p+ q ) 2 ( pαi + qα ′ i ) ∣∣ ≤ ( αi − α′i ) ( βi − β′i ) O ( log− 12 n ) |ℓ ( x ) = 1 ] ≥ 1−O ( 1 n2 ) • in ( G2,3 ) : P [ ∣∣∆i − λ ( βi − β′i ) p ( p+ q ) 2 ( pαi + qα ′ i ) ∣∣ ≤ ( αi − α′i ) ( βi − β′i ) O ( log− 12 n ) |ℓ ( x ) = 0 ] ≥ 1−O ( 1 n2 ) P [ ∣∣∆i − λ ( βi − β′i ) q ( p+ q ) 2 ( pαi + qα ′ i ) ∣∣ ≤ ( αi − α′i ) ( βi − β′i ) O ( log− 12 n ) |ℓ ( x ) = 1 ] ≥ 1−O ( 1 n2 ) . Proof . We have ∆i = ( βi − β′i ) ∑ y∈G 1 [ y ∼ x ] 4ϕ ( αit y 0+α ′ it y 1 ) n2 ( p+q ) 2 . We apply the method of averaged bounded difference [ book ] to estimate ∆i . 
In different parameter regimes , ϕ ( αit y 0 + α ′ it y 1 ) has different activation patterns . In ( G1 ) and ( G2,1 ) , ϕ ( αit y 0 + α ′ it y 1 ) is active with probability 1 − O ( 1n2 ) for both ℓ ( y ) = 0 and ℓ ( y ) = 1 . For ℓ ( x ) = 0 , at first we estimate E [ ∆i ] . By condition ( Cond ) : ∣∣∣∣E [ ∆i ] − λ ( βi − β′i ) ( p+ q ) 2 [ ( p2 + q2 ) αi + 2pqα′i ] ∣∣∣∣ ≤ ( αi − α′i ) ( βi − β′i ) O ( log−1 n ) . Let Yj = 4ϕ ( αit yj 0 +α ′ it yj 1 ) n2 ( p+q ) 2 , then ∆i = ( βi − β ′ i ) ∑ j Yj . Based on condition ( Cond ) , ∣∣Yj − 2λ ( pαi+qα ′ i ) n ( p+q ) 2 ∣∣ ≤ ( αi − α′i ) O ( log− 72 n ) for ℓ ( yj ) = 0 . For any ak , a′k , ∣∣∣∣E [ Yk|Y1 , · · · , Yk−1 , Yk = ak ] − E [ Yk|Y1 , · · · , Yk−1 , Yk = a′k ] ∣∣∣∣ ≤ ( αi − α′i ) O ( log− 72 n ) . Moreover , when the number of vertices with revealed labels are fixed , ∣∣∣∣E [ Yj |Y1 , · · · , Yk−1 , Yk = ak ] − E [ Yj |Y1 , · · · , Yk−1 , Yk = a′k ] ∣∣∣∣ ≤ ( αi − α′i ) O ( log−6 n ) , for j ≥ k. By condition ( Cond ) , there are at most O ( log3 n ) non-zero terms for Yk , 1 ≤ k ≤ n. So∣∣∣∣E [ ∑ j Yj |Y1 , · · · , Yk−1 , Yk = ak ] − E [ ∑ j Yj |Y1 , · · · , Yk−1 , Yk = a′k ] ∣∣∣∣ ≤ ( αi − α′i ) O ( log− 72 n ) , for 1 ≤ k ≤ n. By the method of averaged bounded difference , we have P [ ∣∣∆i−λ ( βi − β′i ) ( p+ q ) 2 [ ( p2+q2 ) αi+2pqα ′ i ] ∣∣ ≤ ( αi−α′i ) ( βi−β′i ) O ( log n ) − 12 |ℓ ( x ) = 0 ] ≥ 1−O ( 1n2 ) . Other regimes can be proved similarly . A.2 “ BAD TYPE ” NEURONS For convenience of our analysis , we further divide ( B2 ) into subcases ( B2,1 ) , ( B2,2 ) and ( B2,3 ) according to the ratio of ∣∣αi α′i ∣∣ ∣∣∣∣αiα′i ∣∣∣∣ > pq ( 1 + log− 13 n ) ( B2,1 ) ∣∣∣∣αiα′i ∣∣∣∣ ∈ ( qp ( 1 + log− 13 n ) , pq ( 1− log− 13 n ) ] ( B2,2 ) ∣∣∣∣αiα′i ∣∣∣∣ ∈ ( pq ( 1− log− 13 n ) , pq ( 1 + log− 13 n ) ] ( B2,3 ) We have the following estimation of ∆i in “ bad type ” . Theorem A.2 ( concentration of output from “ bad type ” neurons ) . 
If the i-th neuron is of “ bad type ” , we have : • in ( B1 ) and ( B2,1 ) : P [ ∣∣∆i − λ ( βi − β′i ) ( p+ q ) 2 [ ( p2 + q2 ) αi + 2pqα ′ i ] ∣∣ ≤ |αi − α′i||βi − β′i|O ( log− 12 n ) |ℓ ( x ) = 0 ] ≥ 1−O ( 1 n2 ) P [ ∣∣∆i − λ ( βi − β′i ) ( p+ q ) 2 [ 2pqαi + ( p 2 + q2 ) α′i ] ∣∣ ≤ |αi − α′i||βi − β′i|O ( log− 12 n ) |ℓ ( x ) = 1 ] ≥ 1−O ( 1 n2 ) • in ( B2,2 ) : P [ ∣∣∆i − λ ( βi − β′i ) p ( p+ q ) 2 ( pαi + qα ′ i ) ∣∣ ≤ |αi − α′i||βi − β′i|O ( log− 12 n ) |ℓ ( x ) = 0 ] ≥ 1−O ( 1 n2 ) P [ ∣∣∆i − λ ( βi − β′i ) q ( p+ q ) 2 ( pαi + qα ′ i ) ∣∣ ≤ |αi − α′i||βi − β′i|O ( log− 12 n ) |ℓ ( x ) = 1 ] ≥ 1−O ( 1 n2 ) • in ( B2,3 ) : P [ ∣∣∆i − λ ( βi − β′i ) p ( p+ q ) 2 ( pαi + qα ′ i ) ∣∣ ≤ |αi − α′i||βi − β′i|O ( log− 12 n ) |ℓ ( x ) = 0 ] ≥ 1−O ( 1 n2 ) P [ ∣∣∆i − λ ( βi − β′i ) q ( p+ q ) 2 ( pαi + qα ′ i ) ∣∣ ≤ |αi − α′i||βi − β′i|O ( log− 12 n ) |ℓ ( x ) = 1 ] ≥ 1−O ( 1 n2 ) . • in ( B3 ) : P [ ∣∣∆i∣∣ ≤ |αi − α′i||βi − β′i|O ( log− 12 n ) |ℓ ( x ) = 0 or 1 ] ≥ 1−O ( 1n2 ) . Proof . The proof is similar as Theorem A.1 A.3 “ HARMLESS TYPE ” NEURONS We have the following estimation of ∆i in “ harmless type ” . Theorem A.3 ( concentration of output from “ harmless type ” neurons ) . If the i-th neuron is of “ harmless type ” , we have : • in ( H1 ) : P [ ∣∣∆i − λ ( βi − β′i ) p ( p+ q ) 2 ( pαi + qα ′ i ) ∣∣ ≤ ( αi − α′i ) ( βi − β′i ) O ( log− 12 n ) |ℓ ( x ) = 0 ] ≥ 1−O ( 1 n2 ) P [ ∣∣∆i − λ ( βi − β′i ) q ( p+ q ) 2 ( pαi + qα ′ i ) ∣∣ ≤ ( αi − α′i ) ( βi − β′i ) O ( log− 12 n ) |ℓ ( x ) = 1 ] ≥ 1−O ( 1 n2 ) • in ( H2 ) : ∆i = 0 for both ℓ ( x ) = 0 and 1 . A.4 SEPARATION OF OUTPUT Previous subsections have shown the concentration of ∆i for each type of neurons . For the i-th neuron , we write the concentrated value as mi0 if ℓ ( x ) = 0 and m i 1 if ℓ ( x ) = 1 . From Theorem A.1 , A.2 and A.3 , we have the following result about the value of mi0 − mi1 by straightforward computation . Corollary A.4 . 
We have the following result about mi0 −mi1 for 1 ≤ i ≤ h : • if the i-the neuron is of “ good type ” : in ( G1 ) and ( G2,1 ) : mi0 −mi1 = λ|αi − α′i||βi − β′i| ( p− q p+ q ) 2 in ( G2,2 ) : mi0 −mi1 = λ|βi − β′i||pαi + qα′i| p− q ( p+ q ) 2 ≥ λ 2 |αi − α′i||βi − β′i| ( p− q p+ q ) 2 in ( G2,3 ) : mi0 −mi1 = λ|βi − β′i||pαi + qα′i| p− q ( p+ q ) 2 ≥ λ|αi − α′i||βi − β′i| ( p− q ) Λ3 ( p+ q ) 2 , where Λ3 = ( 1−log− 1 3 n ) p2−q2 ( 1−log− 1 3 n ) p+q . • if the i-the neuron is of “ bad type ” : in ( B1 ) and ( B2,1 ) : mi0 −mi1 = −λ|αi − α′i||βi − β′i| ( p− q p+ q ) 2 in ( B2,2 ) : mi0 −mi1 = −λ|βi − β′i||pαi + qα′i| p− q ( p+ q ) 2 ≥ −λ|αi − α′i||βi − β′i| ( p− q ) Λ3 ( p+ q ) 2 in ( B2,3 ) : mi0 −mi1 = −λ|βi − β′i||pαi + qα′i| p− q ( p+ q ) 2 ≥ −λ|αi − α′i||βi − β′i| ( p− q ) Λ1 ( p+ q ) 2 in ( B3 ) : mi0 −mi1 = 0 , where Λ1 = ( 1+log− 1 3 n ) p2−q2 ( 1+log− 1 3 n ) p+q , Λ3 = ( 1−log− 1 3 n ) p2−q2 ( 1−log− 1 3 n ) p+q . • if the i-the neuron is of “ harmless type ” : in ( H1 ) : mi0 −mi1 = λ|βi − β′i||pαi + qα′i| p− q ( p+ q ) 2 ≥ λ|αi − α′i||βi − β′i| ( p− q ) Λ5 ( p+ q ) 2 in ( H2 ) : mi0 −mi1 = 0 , where Λ5 = pq log − 1 3 p+ ( 1+log− 1 3 ) q . B DYNAMICS OF PARAMETERS We consider the cross-entropy loss in training . The loss on a particular vertex x is L ( x ) = − logOℓ ( x ) ( x ) , where O0 ( x ) and O1 ( x ) are the first and second component of the output respectively , i.e . O0 ( x ) = exp ( g0 ( x ) ) exp ( g0 ( x ) ) + exp ( g1 ( x ) ) , O1 ( x ) = exp ( g1 ( x ) ) exp ( g0 ( x ) ) + exp ( g1 ( x ) ) . For a given graph G generated by SBM , we set the objective function L ( G ) as the average loss over all the vertices with revealed labels2 , i.e . L ( G ) = 1 # { x ∈ G : ℓ ( x ) is revealed } ∑ x : ℓ ( x ) revealed L ( x ) . We first show the partial derivatives of parameters . Theorem B.1 ( derivatives of parameters ) . 
For 1 ≤ i ≤ h , let x be a vertex , ℓ ( x ) its true label , L ( x ) = − logOℓ ( x ) ( x ) , then ∂L ∂αi = 4 n2 ( p+ q ) 2 ( βi − β′i ) Z ( −1 ) 1−ℓ ( x ) ∑ y 1 [ y ∼ x ] 1 [ αity0 + α′it y 1 ≥ 0 ] t y 0 ∂L ∂α′i = 4 n2 ( p+ q ) 2 ( βi − β′i ) Z ( −1 ) 1−ℓ ( x ) ∑ y 1 [ y ∼ x ] 1 [ αity0 + α′it y 1 ≥ 0 ] t y 1 ∂L ∂βi = 4 n2 ( p+ q ) 2 Z ( −1 ) 1−ℓ ( x ) ∑ y 1 [ y ∼ x ] ϕ ( αity0 + α′it y 1 ) ∂L ∂β′i = − 4 n2 ( p+ q ) 2 Z ( −1 ) 1−ℓ ( x ) ∑ y 1 [ y ∼ x ] ϕ ( αity0 + α′it y 1 ) ∂L ∂b0 = ( −1 ) 1−ℓ ( x ) Z ∂L ∂b1 = ( −1 ) ℓ ( x ) Z , where Z = exp ( g1−ℓ ( x ) ( x ) ) exp ( g0 ( x ) ) +exp ( g1 ( x ) ) , t y 0 and t y 1 are the numbers of neighbors of y ( including perhaps y itself ) with revealed labels in class 0 and class 1 respectively . Proof . We compute ∂L∂αi , ∂L ∂βi and ∂L∂b0 , others can be computed symmetrically . We have L ( x ) = − logOℓ ( x ) ( x ) = log ( exp ( g0 ( x ) ) + exp ( g1 ( x ) ) ) − gℓ ( x ) ( x ) , since Oj ( x ) = exp ( gj ( x ) ) exp ( g0 ( x ) ) +exp ( g1 ( x ) ) , j = 0 , 1 . So ∂L ∂αi = eg0 ( x ) ∂g0 ( x ) ∂αi + e g1 ( x ) ∂g1 ( x ) ∂αi eg0 ( x ) + eg1 ( x ) − ∂gℓ ( x ) ( x ) ∂αi = ( −1 ) 1−ℓ ( x ) Z ( ∂g0 ( x ) ∂αi − ∂g1 ( x ) ∂αi ) . Since gj ( x ) = fj ( x ) + bj , ∂gj ( x ) ∂αi = ∂fj ( x ) ∂αi , j = 0 , 1 . By ( 2 ) ∂f0 ( x ) ∂αi = 4 n2 ( p+ q ) 2 ∑ y 1 [ y ∼ x ] βi1 [ αity0 + α′it y 1 ≥ 0 ] t y 0 ∂f1 ( x ) ∂αi = 4 n2 ( p+ q ) 2 ∑ y 1 [ y ∼ x ] β′i1 [ αit y 0 + α ′ it y 1 ≥ 0 ] t y 0 . Therefore ∂L ∂αi = 4 n2 ( p+ q ) 2 ( −1 ) 1−ℓ ( x ) Z ( βi − β′i ) ∑ y 1 [ y ∼ x ] 1 [ αity0 + α′it y 1 ≥ 0 ] t y 0 . 2We abuse the notation L for L ( x ) and L ( G ) , but the meaning is clear from the context . Next we compute ∂L∂βi . Similar as above , ∂L ∂βi = ( −1 ) 1−ℓ ( x ) Z ( ∂f0 ( x ) ∂βi − ∂f1 ( x ) ∂βi ) . By ( 2 ) ∂f0 ( x ) ∂βi = 4 n2 ( p+ q ) 2 ∑ y 1 [ y ∼ x ] ϕ ( αity0 + α′it y 1 ) ∂f1 ( x ) ∂αi = 0 . So ∂L ∂βi = 4 n2 ( p+ q ) 2 ( −1 ) 1−ℓ ( x ) Z ∑ y 1 [ y ∼ x ] ϕ ( αity0 + α′it y 1 ) . 
Lastly , ∂L ∂b0 = ( −1 ) 1−ℓ ( x ) Z ( ∂g0 ( x ) ∂b0 − ∂g1 ( x ) ∂b0 ) = ( −1 ) 1−ℓ ( x ) Z , since ∂g0 ( x ) ∂b0 = 1 , ∂g1 ( x ) ∂b0 = 0 . In the following , we will use Theorem B.1 to analyze the dynamics of neurons of each type . As we can see , all of ∂L∂αi , ∂L ∂α′i , ∂L∂βi and ∂L ∂β′i have the form Y Z . In order to estimate these derivatives , we show the concentration of Y and Z respectively . To estimate the concentration of Z , we need the concentration of output obtained in Section 4 . For any ϵ > 0 , we require the probability of concentration in ( 3 ) to be at least 1− ϵ̃ , where ϵ̃ = o ( ϵ ) . In particular , if we choose ϵ̃ = ϵ2 , then we set 1−O ( 1n ) ≥ 1− ϵ 2 , i.e . n ≥ Ω ( 1 ϵ ) 2 . ( 5 ) Our following analysis will be based on this condition . Meanwhile in order to have balanced performance in each epoch of coordinate descent , we require∣∣E [ ∂L∂b0 ] ∣∣ < ϵ̃2 . Since E [ ∂L∂b0 ] = 12 ( −E [ Z|ℓ ( x ) = 0 ] + E [ Z|ℓ ( x ) = 1 ] ) ) , we have∣∣E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ∣∣ < ϵ̃ . ( 6 ) We have the following relation between µ0 and µ1 . In the following , σ represents the sigmoid function : σ ( x ) = 11+e−x . Proposition B.2 . If ∣∣E [ Z|ℓ ( x ) = 0 ] −E [ Z|ℓ ( x ) = 1 ] ∣∣ < ϵ̃ , then |σ ( −µ0 ) −σ ( µ1 ) | ≤ σ′ ( µ0− δ ) δ+ σ′ ( µ1 + δ ) δ + 3ϵ̃ , where δ is as shown in Corollary 4.2 . Proof . We have Z = { σ ( −∆ ) , ℓ ( x ) = 0 σ ( ∆ ) , ℓ ( x ) = 1 For ℓ ( x ) = 0 , by Lagrange mean value theorem , |σ ( −µ0 ) −Z| = |σ ( −µ0 ) − σ ( −∆ ) | = σ′ ( ξ ) ( ∆− µ0 ) , where ξ is between −µ0 and −∆ . By Corollary 4.2 and the condition of n , |∆− µ0| ≤ δ with probability ≥ 1 − ϵ̃ . From the remark following Corollary A.4 , we have σ′ ( ξ ) ≤ σ′ ( −µ0 + δ ) = σ′ ( µ0 − δ ) ,3 with probability ≥ 1− ϵ̃ . Then we have P [ |σ ( −µ0 ) − Z| ≤ σ′ ( µ0 − δ ) δ|ℓ ( x ) = 0 ] ≥ 1− ϵ̃ . 
Since E [ Z|ℓ ( x ) = 0 ] = E [ Z||σ ( −µ0 ) − Z| ≤ σ′ ( µ0 − δ ) δ ] P [ |σ ( −µ0 ) − Z| ≤ σ′ ( µ0 − δ ) δ ] + E [ Z||σ ( −µ0 ) − Z| > σ′ ( µ0 − δ ) δ ] P [ |σ ( −µ0 ) − Z| > σ′ ( µ0 − δ ) δ ] , 3σ′ ( x ) is even . then ( note that 0 < Z < 1 ) E [ Z|ℓ ( x ) = 0 ] ≤ σ ( −µ0 ) + σ′ ( µ0 − δ ) δ + ϵ̃ and E [ Z|ℓ ( x ) = 0 ] ≥ ( σ ( −µ0 ) − σ′ ( µ0 − δ ) δ ) ( 1− ϵ̃ ) , i.e . E [ Z|ℓ ( x ) = 0 ] − σ ( −µ0 ) ≤ σ′ ( µ0 − δ ) δ + ϵ̃ and E [ Z|ℓ ( x ) = 0 ] − σ ( −µ0 ) ≥ −σ′ ( µ0 − δ ) δ − ϵ̃ ( σ ( −µ0 ) − σ′ ( µ0 − δ ) δ ) = −σ′ ( µ0 − δ ) δ ( 1− ϵ̃ ) − ϵ̃σ ( −µ0 ) ≥ −σ′ ( µ0 − δ ) δ − ϵ̃ . So ∣∣E [ Z|ℓ ( x ) = 0 ] − σ ( −µ0 ) ∣∣ ≤ σ′ ( µ0 − δ ) δ + ϵ̃ . Similarly ∣∣E [ Z|ℓ ( x ) = 1 ] − σ ( µ1 ) ∣∣ ≤ σ′ ( µ1 + δ ) δ + ϵ̃ . By triangle inequality , ∣∣σ ( −µ0 ) − σ ( µ1 ) ∣∣ ≤ ∣∣σ ( −µ0 ) − E [ Z|ℓ ( x ) = 0 ] ∣∣+ ∣∣E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ∣∣ + ∣∣E [ Z|ℓ ( x ) = 1 ] − σ ( µ1 ) ∣∣ ≤ σ′ ( µ0 − δ ) δ + σ′ ( µ1 + δ ) δ + 3ϵ̃ . From the proof above , we can directly obtain the following corollary about Z. Corollary B.3 . P [ ∣∣Z − E [ Z|ℓ ( x ) = 0 ] ∣∣ ≤ 2σ′ ( µ0 − δ ) δ + ϵ̃|ℓ ( x ) = 0 ] ≥ 1− ϵ̃ P [ ∣∣Z − E [ Z|ℓ ( x ) = 1 ] ∣∣ ≤ 2σ′ ( µ1 + δ ) δ + ϵ̃|ℓ ( x ) = 1 ] ≥ 1− ϵ̃ . In order to obtain the concentration of Z , we need to estimate σ′ ( µ0 − δ ) δ and σ′ ( µ1 + δ ) δ . The following proposition is based on the condition that |µ0 + µ1| ≥ 4δ . If |µ0 + µ1| < 4δ , set c : = µ0 − µ1 , we have µ0 > c2 − 2δ and µ1 < − c 2 + 2δ . Then the concentration of output shown in Corollary 4.2 can guarantee 1− ϵ accuracy of the model for any ϵ > 0 . In fact , from |∆− µ0| < δ , we have ∆ > µ0 − δ > c2 − 3δ > 0 , due to δ = o ( c ) . So P [ ∆ > 0|ℓ ( x ) = 0 ] ≥ P [ |∆− µ0| < δ|ℓ ( x ) = 0 ] ≥ 1− ϵ̃ . Similarly , P [ ∆ < 0|ℓ ( x ) = 1 ] ≥ P [ |∆− µ0| < δ|ℓ ( x ) = 1 ] ≥ 1− ϵ̃ . Since ϵ̃ = o ( ϵ ) , the model achieves overall accuracy ≥ 1− ϵ . Proposition B.4 . If |µ0 + µ1| ≥ 4δ , then σ′ ( µ0 − δ ) δ = O ( ϵ̃ ) , σ′ ( µ1 + δ ) δ = O ( ϵ̃ ) . Proof . 
First , we estimate the lower bound of |σ ( −µ0 ) − σ ( µ1 ) | via the Fundamental Theorem of Calculus . We have |σ ( −µ0 ) − σ ( µ1 ) | = ∣∣ ∫ µ1 −µ0 σ ′ ( t ) dt ∣∣ . If −µ0 < µ1 < 0 , since µ0 + µ1 ≥ 4δ , we divide the interval [ −µ0 , µ1 ] into [ −µ0 , −µ0 + 2δ ] ∪ [ −µ0 + 2δ , µ1 − 2δ ] ∪ [ µ1 − 2δ , µ1 ] and estimate the lower bound of the integral . Since σ′ ( x ) is increasing on ( −∞ , 0 ] , we have∫ µ1 −µ0 σ′ ( t ) dt ≥ σ′ ( −µ0 ) · 2δ + I1 + σ′ ( µ1 − 2δ ) · 2δ , ( 7 ) where I1 = ∫ µ1−2δ −µ0+2δ σ ′ ( t ) dt . If µ1 < −µ0 < 0 , similarly we have∫ −µ0 µ1 σ′ ( t ) dt ≥ σ′ ( µ1 ) · 2δ + I2 + σ′ ( −µ0 − 2δ ) · 2δ , ( 8 ) where I2 = ∫ −µ0−2δ µ1+2δ σ′ ( t ) dt . We have a uniform lower bound from ( 7 ) and ( 8 ) : ∣∣∣∣ ∫ µ1 −µ0 σ′ ( t ) dt ∣∣∣∣ ≥ σ′ ( −µ0 − 2δ ) · 2δ + σ′ ( µ1 − 2δ ) · 2δ + I , ( 9 ) where I = min { I1 , I2 } . Furthermore , by Proposition B.2 , |σ ( −µ0 ) − σ ( µ1 ) | ≤ σ′ ( µ0 − δ ) δ + σ′ ( µ1 + δ ) δ + 3ϵ̃ . ( 10 ) Combine ( 9 ) and ( 10 ) : 2σ′ ( −µ0 − 2δ ) δ + 2σ′ ( µ1 − 2δ ) δ ≤ σ′ ( −µ0 + δ ) δ + σ′ ( µ1 + δ ) δ + 3ϵ̃ . ( 11 ) By Lagrange mean value theorem , σ′ ( −µ0 − 2δ ) = σ′ ( −µ0 + δ ) − 3σ′′ ( ξ0 ) δ σ′ ( µ1 − 2δ ) = σ′ ( µ1 + δ ) − 3σ′′ ( ξ1 ) δ , where ξ0 ∈ ( −µ0 − 2δ , −µ0 + δ ) , ξ1 ∈ ( µ1 − 2δ , µ1 + δ ) . Plug these into ( 11 ) : σ′ ( µ0 − δ ) δ + σ′ ( µ1 + δ ) δ − 6δ2 ( σ′′ ( ξ0 ) + σ′′ ( ξ1 ) ) ≤ 3ϵ̃ . Since δ2 ( σ′′ ( ξ0 ) + σ′′ ( ξ1 ) ) = o ( σ′ ( µ0 − δ ) δ ) and o ( σ′ ( µ1 + δ ) δ ) , we have σ′ ( µ0 − δ ) δ = O ( ϵ̃ ) σ′ ( µ1 + δ ) δ = O ( ϵ̃ ) . Combine Proposition B.4 and Corollary B.3 , we have the following concentration of Z . Proposition B.5 . P [ ∣∣Z − E [ Z|ℓ ( x ) = 0 ] ∣∣ ≤ O ( ϵ̃ ) |ℓ ( x ) = 0 ] ≥ 1− ϵ̃ P [ ∣∣Z − E [ Z|ℓ ( x ) = 1 ] ∣∣ ≤ O ( ϵ̃ ) |ℓ ( x ) = 1 ] ≥ 1− ϵ̃ . Under the condition of balanced performance , we have the following corollary about the concentration of Z independent of the label of x. Corollary B.6 . If ∣∣E [ ∂L∂b0 ] ∣∣ ≤ ϵ̃2 , then P [ ∣∣Z − E [ Z ] ∣∣ ≤ O ( ϵ̃ ) ] ≥ 1− ϵ̃ . 
Proof . Since E [ ∂L∂b0 ] = 1 2 ( E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ) , we have∣∣E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ∣∣ ≤ ϵ̃ . On the other hand , E [ Z ] = 1 2 ( E [ Z|ℓ ( x ) = 0 ] + E [ Z|ℓ ( x ) = 1 ] ) . So we have ∣∣E [ Z ] − E [ Z|ℓ ( x ) = 0 ] ∣∣ ≤ ϵ̃2 . By Proposition B.5 , P [ |Z − E [ Z ] | ≤ O ( ϵ̃ ) ] ≥ 1− ϵ̃ . Now we can derive the estimation of the derivatives . Theorem B.7 ( concentration of derivatives ) . For loss on the whole graph L = L ( G ) , with probability ≥ 1−O ( 1n ) , we have 4 1 . If αi > α′i > 0 or αi > 0 > α ′ i , |αiα′i | ≥ p q ( 1 + log − 13 n ) , then∣∣∣∣ ∂L∂αi + ( βi − β′i ) λ2 ( p− q p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |βi − β′i|E [ Z ] O ( log− 12 n ) ( 12 ) ∣∣∣∣ ∂L∂α′i − ( βi − β′i ) λ2 ( p− q p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |βi − β′i|E [ Z ] O ( log− 12 n ) ( 13 ) ∣∣∣∣ ∂L∂βi + ( αi − α′i ) λ2 ( p− q p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |αi − α′i|E [ Z ] O ( log− 12 n ) . ( 14 ) 4Since ∂L ∂β′i = − ∂L ∂βi ( see Theorem B.1 ) , we only need to estimate ∂L ∂βi . 2 . If αi > 0 > α′i , |αiα′i | ∈ [ q p ( 1 + γ ) , p q ( 1 − log − 13 n ) ] , where γ ∈ [ log− 1 3 n , ( pq ) 2 ( 1 − log− 1 3 n ) − 1 ] , then∣∣∣∣ ∂L∂αi + ( βi − β′i ) λp ( p− q ) 2 ( p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |βi − β′i|E [ Z ] O ( log− 12 n ) ( 15 ) ∣∣∣∣ ∂L∂α′i + ( βi − β′i ) λq ( p− q ) 2 ( p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |βi − β′i|E [ Z ] O ( log− 12 n ) ( 16 ) ∣∣∣∣ ∂L∂βi + λ ( p− q ) ( pαi + qα ′ i ) 2 ( p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |αi − α′i|E [ Z ] O ( log− 12 n ) . ( 17 ) 3 . 
If αi > 0 > α′i , |αiα′i | ∈ ( p q ( 1− log − 13 n ) , pq ( 1 + log − 13 n ) ) , then ∂L ∂βi ∈ [ − ( αi − α′i ) E [ Z ] ( λ ( p− q ) Λ1 2 ( p+ q ) 2 +O ( log− 1 2 n ) ) , − ( αi − α′i ) E [ Z ] ( λ ( p− q ) ( Λ3 − Λ2 ) 2 ( p+ q ) 2 −O ( log− 1 2 n ) ) ] , ( 18 ) where Λ1 = ( 1+log− 1 3 n ) p2−q2 ( 1+log− 1 3 n ) p+q , Λ2 = pq log− 1 3 n ( 1+log− 1 3 n ) p+q and Λ3 = ( 1−log− 1 3 n ) p2−q2 ( 1−log− 1 3 n ) p+q ; • if βi > β′i , ∂L ∂αi ∈ [ − ( βi − β′i ) E [ Z ] ( λp ( p− q ) 2 ( p+ q ) 2 +O ( log− 1 2 n ) ) , − ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 −O ( log− 1 2 n ) ) ] ( 19 ) ∂L ∂α′i ∈ [ − ( βi − β′i ) E [ Z ] ( λq ( p− q ) 2 ( p+ q ) 2 +O ( log− 1 2 n ) ) , ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 +O ( log− 1 2 n ) ) ] ( 20 ) ∂L ∂αi − ∂L ∂α′i ∈ [ − ( βi − β′i ) E [ Z ] ( λ ( p− q p+ q ) 2 +O ( log− 1 2 n ) ) , − ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 −O ( log− 1 2 n ) ) ] , ( 21 ) • if βi ≤ β′i , ∂L ∂αi ∈ [ − ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 −O ( log− 1 2 n ) ) , − ( βi − β′i ) E [ Z ] ( λp ( p− q ) 2 ( p+ q ) 2 +O ( log− 1 2 n ) ) ] ( 22 ) ∂L ∂α′i ∈ [ − ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 +O ( log− 1 2 n ) ) , ( βi − β′i ) E [ Z ] ( λq ( p− q ) 2 ( p+ q ) 2 +O ( log− 1 2 n ) ) ] ( 23 ) ∂L ∂αi − ∂L ∂α′i ∈ [ − ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 −O ( log− 1 2 n ) ) , − ( βi − β′i ) E [ Z ] ( λ ( p− q p+ q ) 2 +O ( log− 1 2 n ) ) ] ( 24 ) 4 . If αi > 0 > α′i , ∣∣αi α′i ∣∣ ≤ qp ( 1 + log− 13 n ) , βi < β′i , then ∂L ∂αi − ∂L ∂α′i ≥ −|βi − β′i|O ( ϵ̃ ) ( 25 ) ∂L ∂βi ≤ O ( ϵ̃ ) . ( 26 ) Proof . We show the proof for item 1 , other items can be proved similarly . Since L ( G ) is the average of the losses over revealed vertices , we first show the concentration of ∂L ( x ) ∂αi , then we show the concentration of ∂L ( G ) ∂αi using union bound . 
Since ∂L ( x ) ∂αi = ( −1 ) 1−ℓ ( x ) 4 ( βi − β′i ) Z ∑ y 1 [ y ∼ x ] 1 [ αity0 + α′it y 1 ≥ 0 ] t y 0 n2 ( p+ q ) 2 , we first show the concentration of Y : = ( −1 ) 1−ℓ ( x ) ∑ y∼x 41 [ αit y 0+α ′ it y 1≥0 ] t y 0 n2 ( p+q ) 2 using the method of averaged bounded difference . Similar as the proof of Theorem A.1 , let Yj = ( −1 ) 1−ℓ ( x ) 41 [ αit yj 0 +α ′ it yj 1 ≥0 ] t yj 0 n2 ( p+q ) 2 . Based on Condition ( Cond ) , for ℓ ( x ) = 0 , |Yj + 2λp n ( p+q ) 2 | ≤ O ( log− 7 2 n ) for ℓ ( yj ) = 0 . Similar results hold for ℓ ( yj ) = 1 , ℓ ( x ) = 1 . So for any ak , a′k , ∣∣∣∣E [ ∑ j Yj |Y1 , · · · , Yk−1 , Yk = ak ] − E [ ∑ j Yj |Y1 , · · · , Yk−1 , Yk = a′k ] ∣∣∣∣ ≤ ( αi − α′i ) O ( log− 72 n ) . By method of averaged bounded difference , for ℓ ( x ) = 0 , P [ ∣∣∣∣ ∑ ℓ ( yj ) =0 Yj + λ ( p p+ q ) 2∣∣∣∣ ≤ O ( log− 12 n ) ] ≥ 1− exp ( −2 log3 n ) ≥ 1− 1n2 . Similarly P [ ∣∣∣∣ ∑ ℓ ( yj ) =1 Yj + λ ( q p+ q ) 2∣∣∣∣ ≤ O ( log− 12 n ) ] ≥ 1− 1n2 . Hence P [ ∣∣∣∣Y + λ ( p2 + q2 ) ( p+ q ) 2 ∣∣∣∣ ≤ O ( log− 12 n ) ] ≥ 1− 1n2 . By Corollary B.6 , P [ |Z − E [ Z ] | ≤ O ( ϵ̃ ) ] ≥ 1− ϵ̃ , so we have P [ ∣∣∣∣∂L ( x ) ∂αi + ( βi − β′i ) λ p 2 + q2 ( p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |βi − β′i|E [ Z ] O ( log− 12 n ) |ℓ ( x ) = 0 ] ≥ 1−O ( 1n2 ) . For ℓ ( x ) = 1 , similarly we have P [ ∣∣∣∣∂L ( x ) ∂αi − ( βi − β′i ) λ 2pq ( p+ q ) 2E [ Z ] ∣∣∣∣ ≤ |βi − β′i|E [ Z ] O ( log− 12 n ) |ℓ ( x ) = 0 ] ≥ 1−O ( 1n2 ) . By union bound , we have ( 12 ) . ( 13 ) and ( 14 ) can be proved similarly . Using Theorem B.7 , we can analyze dynamics of neurons of each type . First , we introduce some notations . Let ηk denote the learning rate at the k-th epoch , Z ( k ) be the value of Z at the k-th epoch , α ( k ) i be the value of αi at the k-th epoch , similar for α ′ ( k ) i , β ( k ) i and β ′ ( k ) i . In particular , α ( 0 ) i , α ′ ( 0 ) i , β ( 0 ) i and β ′ ( 0 ) i represent the values at initialization . 
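The sign structure in item 1 of Theorem B.7 already determines the qualitative per-epoch dynamics. The following is an illustrative simulation (our own sketch, not the paper's actual training loop): we iterate the leading-order updates from item 1 for a single "good type" (G1) neuron, with an assumed constant c = (λ/2)((p−q)/(p+q))²·E[Z] and assumed values of λ, p, q, E[Z] and the learning rate.

```python
# Illustrative values (assumptions for this sketch, not taken from the paper).
lam, p, q, EZ = 0.3, 0.5, 0.2, 0.25
c = lam / 2 * ((p - q) / (p + q)) ** 2 * EZ   # leading-order derivative scale
eta = 0.5                                     # assumed constant learning rate

# G1 initialization: alpha > alpha' > 0 and beta > beta'.
alpha, alpha_p, beta, beta_p = 1.0, 0.5, 1.0, 0.5
gaps = []
for k in range(20):
    # Leading-order derivatives from (12)-(14): dL/dalpha ~ -(beta - beta')c,
    # dL/dalpha' ~ +(beta - beta')c, dL/dbeta ~ -(alpha - alpha')c,
    # and dL/dbeta' = -dL/dbeta (Theorem B.1).
    da = -(beta - beta_p) * c
    db = -(alpha - alpha_p) * c
    alpha, alpha_p = alpha - eta * da, alpha_p + eta * da
    beta, beta_p = beta - eta * db, beta_p + eta * db
    gaps.append((alpha - alpha_p, beta - beta_p))

# Both gaps grow monotonically, so the neuron never leaves the order-aligned regime.
assert all(gaps[k][0] < gaps[k + 1][0] for k in range(len(gaps) - 1))
assert all(gaps[k][1] < gaps[k + 1][1] for k in range(len(gaps) - 1))
```

The simulation only tracks the signs and leading-order magnitudes of the derivatives; the actual proofs below control the error terms as well.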
B.1 “ GOOD TYPE ” NEURONS In this section , we show that “ good type ” neurons stay in the “ good type ” regime throughout coordinate descent ( Theorem B.8 ) using Theorem B.7 . Theorem B.8 . “ Good type ” neurons are preserved in the “ good type ” throughout coordinate descent with probability ≥ 1−O ( 1n2 ) over the SBM randomness . Proof . As shown in Section 4 , “ good type ” regime is composed of ( G1 ) and ( G2 ) , we show the dynamics of neurons in ( G1 ) and ( G2 ) respectively . Assume that neuron ( α ( k ) i , α ′ ( k ) i , β ( k ) i , β ′ ( k ) i ) is in ( G1 ) , we show that it either stays in ( G1 ) or moves into ( G2 ) throughout coordinate descent . In fact , by ( 14 ) , with probability ≥ 1− O ( 1n2 ) , ∂L ∂β ( k ) i < 0 < ∂L ∂β ′ ( k ) i , so β ( k+1 ) i > β ( k ) i , β ′ ( k+1 ) i < β ′ ( k ) i and hence β ( k+1 ) i − β ′ ( k+1 ) i > β ( k ) i − β ′ ( k ) i > 0 . By ( 12 ) and ( 13 ) , ∂L ∂α ( k ) i < 0 < ∂L ∂α ′ ( k ) i , so α ( k+1 ) i > α ( k ) i , α ′ ( k+1 ) i < α ′ ( k ) i . If α ′ ( k+1 ) i > 0 , this neuron stays in ( G1 ) . If α ′ ( k+1 ) i < 0 , since∣∣∣∣ α ( k+1 ) i α ′ ( k+1 ) i ∣∣∣∣ = ∣∣∣∣∣ α ( k ) i − ηk ∂L∂α ( k ) i α ′ ( k ) i − ηk ∂L∂α′ ( k ) i ∣∣∣∣∣ > 1 , the neuron moves into ( G2 ) . Assume that neuron is in ( G2 ) , we also show that it either moves into ( G1 ) or stays in ( G2 ) . As shown in section 3.2 , ( G2 ) = ( G2,1 ) ∪ ( G2,2 ) ∪ ( G2,3 ) . If the neuron is in ( G2,1 ) , again by ( 12 ) , ( 13 ) and ( 14 ) , α ( k+1 ) i > α ( k ) i > 0 > α ′ ( k ) i > α ′ ( k+1 ) i , ∣∣ α ( k+1 ) i α ′ ( k+1 ) i ∣∣ > 1 , β ( k+1 ) i > β ( k ) i > β′ ( k ) i > β′ ( k+1 ) i , so the neuron stays in G2,2 . If the neuron is in G2,2 , by ( 15 ) , ( 16 ) and ( 17 ) , ∂L ∂β ( k ) i < 0 < ∂L ∂β ′ ( k ) i , so β ( k+1 ) i > β ′ ( k+1 ) i . Also , ∂L ∂α ( k ) i < ∂L ∂α ′ ( k ) i < 0 , so α ( k+1 ) i > α ′ ( k+1 ) i , ∣∣ α ( k+1 ) i α ′ ( k+1 ) i ∣∣ > ∣∣ α ( k ) i α ′ ( k ) i ∣∣ > 1 . If α′ ( k+1 ) i < 0 , the neuron stays in G2 . 
If α_i′^(k+1) > 0, it moves into (G1). If the neuron is in (G2,3), by (18) and (21), ∂L/∂β_i^(k) < 0 < ∂L/∂β_i′^(k) and ∂L/∂α_i^(k) − ∂L/∂α_i′^(k) < 0, so β_i^(k+1) > β_i^(k) > β_i′^(k) > β_i′^(k+1) and α_i^(k+1) − α_i′^(k+1) > α_i^(k) − α_i′^(k) > 0. By (19) and (20),

∂L/∂α_i ≤ −(β_i − β_i′) E[Z] ( (λ/2)((p−q)/(p+q))² − O(log^(−1/2) n) ),
∂L/∂α_i′ ≤ (β_i − β_i′) E[Z] ( (λ/2)((p−q)/(p+q))² + O(log^(−1/2) n) ).

As in (G2,2), if α_i′^(k+1) < 0 the neuron stays in (G2), and if α_i′^(k+1) > 0 it moves into (G1).

B.2 "BAD TYPE" NEURONS

As shown in Section 4, neurons of "bad type" consist of two cases, B1 and B2, where B2 = B2,1 ∪ B2,2 ∪ B2,3 ∪ B3. Since the output in B3 is concentrated at 0 (see Theorem A.2), we do not need to worry about neurons moving into this region. Neurons in B1 ∪ B2,1 ∪ B2,2 ∪ B2,3 might exit the "bad type" regime and become "harmless" or "good" (if the neuron becomes order-aligned), which does no harm to the performance of the model. If they stay in B1 ∪ B2,1 ∪ B2,2 ∪ B2,3, the following theorem shows that the separation m_0^i − m_1^i can be upper bounded in terms of the initialization. In fact, Theorem A.4 shows that m_0^i − m_1^i is proportional to |α_i − α_i′||β_i − β_i′|. The next theorem shows that both |α_i − α_i′| and |β_i − β_i′| shrink throughout coordinate descent. The worst situation is that the magnitudes |α_i − α_i′| and |β_i − β_i′| of neurons in B3 increase and the neurons move into B1 or B2 at a certain epoch. From Theorem B.7 we see that these magnitudes can only increase at a limited rate (we see this more explicitly in Theorem 6.2).

Theorem B.9. If (α_i^(k), α_i′^(k), β_i^(k), β_i′^(k)) is in B1 ∪ B2,1 ∪ B2,2 ∪ B2,3, then with probability ≥ 1 − O(1/n²) over the SBM randomness, |α_i^(k+1) − α_i′^(k+1)| ≤ |α_i^(k) − α_i′^(k)| and |β_i^(k+1) − β_i′^(k+1)| ≤ |β_i^(k) − β_i′^(k)|.

Proof.
In B1 and B2,1, by (12) and (13), ∂L/∂α_i^(k) > 0 > ∂L/∂α_i′^(k), so α_i^(k+1) < α_i^(k) and α_i′^(k+1) > α_i′^(k), hence |α_i^(k+1) − α_i′^(k+1)| ≤ |α_i^(k) − α_i′^(k)|. Similarly, by (14), ∂L/∂β_i^(k) = −∂L/∂β_i′^(k) < 0, so |β_i^(k+1) − β_i′^(k+1)| ≤ |β_i^(k) − β_i′^(k)| (note that α_i^(k) > α_i′^(k) and β_i^(k) < β_i′^(k)).

In B2,2, from (15) and (16), we have ∂L/∂α_i^(k) > ∂L/∂α_i′^(k) > 0, so |α_i^(k+1) − α_i′^(k+1)| ≤ |α_i^(k) − α_i′^(k)|. On the other hand, ∂L/∂β_i^(k) < 0 < ∂L/∂β_i′^(k), so β_i^(k+1) > β_i^(k), β_i′^(k+1) < β_i′^(k), and |β_i^(k+1) − β_i′^(k+1)| ≤ |β_i^(k) − β_i′^(k)|.

In B2,3, by (24), ∂L/∂α_i^(k) − ∂L/∂α_i′^(k) > 0, so |α_i^(k+1) − α_i′^(k+1)| ≤ |α_i^(k) − α_i′^(k)|. By (18), ∂L/∂β_i^(k) < 0 < ∂L/∂β_i′^(k), so |β_i^(k+1) − β_i′^(k+1)| ≤ |β_i^(k) − β_i′^(k)|.

B.3 "HARMLESS TYPE" NEURONS

Section 4 shows that there are two cases of "harmless type": H1 and H2. For neurons in H1, the derivatives of the parameters are estimated in (15), (16) and (17) (the same as in G2,2). An analysis similar to that for G2,2 shows that the inequalities α_i > 0 > α_i′ and β_i > β_i′ are preserved. Moreover, |α_i/α_i′| increases, so these neurons either stay in H1 or become "good type" once |α_i/α_i′| > 1. In particular, neurons in H1 do no harm to the performance of the model. For neurons in H2, 1[α_i t_0^y + α_i′ t_1^y ≥ 0] = 0, so the derivatives are all equal to 0 and these neurons are never updated. Meanwhile, they do not affect the performance of the model, since ϕ(α_i t_0^y + α_i′ t_1^y) = 0 and ∆_i = 0.

C LEARNING GUARANTEE

In this section, we prove Theorems 6.1 and 6.2 and Lemma 6.3.

Proof of Theorem 6.1. We prove by contradiction.
Suppose P [ ∆ < 0|ℓ ( x ) = 0 ] ≥ 4ϵ , ( 27 ) then E [ Z|ℓ ( x ) = 0 ] = E [ Z|ℓ ( x ) = 0 , ∆ < 0 ] P [ ∆ < 0|ℓ ( x ) = 0 ] + E [ Z|ℓ ( x ) = 0 , ∆ ≥ 0 ] P [ ∆ ≥ 0|ℓ ( x ) = 0 ] ≥ 1 2 · 4ϵ = 2ϵ . ( 28 ) Furthermore , we claim that µ0 < δ . In fact , if µ0 ≥ δ , since P [ |∆− µ0| ≤ δ|ℓ ( x ) = 0 ] ≥ 1− ϵ by Corollary 4.2 , and ∆ ≥ µ0 − δ ≥ 0 , we have P [ ∆ ≥ 0|ℓ ( x ) = 0 ] ≥ P [ |∆− µ0| ≤ δ|ℓ ( x ) = 0 ] ≥ 1− ϵ , i.e . P [ ∆ < 0|ℓ ( x ) = 0 ] ≤ ϵ , which contradicts ( 27 ) . Let c : = µ0 − µ1 , then µ1 = µ0 − c < δ − c. Again , by Corollary 4.2 , for ℓ ( x ) = 1 , ∆ < µ1 + δ with probability ≥ 1− ϵ , we have Z = σ ( ∆ ) < σ ( µ1 + δ ) < σ ( −c+ 2δ ) . Then E [ Z|ℓ ( x ) = 1 ] = E [ Z|ℓ ( x ) = 1 , |∆− µ1| < δ ] P [ |∆− µ1| < δ|ℓ ( x ) = 1 ] + E [ Z|ℓ ( x ) = 1 , |∆− µ1| ≥ δ ] P [ |∆− µ1| ≥ δ|ℓ ( x ) = 1 ] < σ ( −c+ 2δ ) · 1 + 1 · ϵ < σ ( − c 2 ) + ϵ . The last step is due to δ = o ( c ) . Since σ ( − c2 ) < ϵ 2 , E [ Z|ℓ ( x ) = 1 ] < 3ϵ 2 . Combine with ( 28 ) , ∣∣E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ∣∣ > ϵ 2 . On the other hand , ∣∣E [ ∂L∂b0 ] ∣∣ < ϵ4 implies∣∣E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ∣∣ < ϵ 2 , which is a contradiction . So P [ ∆ < 0|ℓ ( x ) = 0 ] < 4ϵ . Similarly , P [ ∆ > 0|ℓ ( x ) = 1 ] < 4ϵ . Proof of Theorem 6.2 . If the i-th neuron is of “ good type ” , from Corollary A.4 , we find a uniform lower bound of mi0 −mi1 in “ good type ” regimes . We have min { p−q 2 , Λ3 } = p−q 2 . Next we estimate αi − α′i and βi − β′i . Let A ( k ) i : = α ( k ) i − α ′ ( k ) i , B ( k ) i : = β ( k ) i − β ′ ( k ) i . 
We have A ( k ) i = α ( k ) i − α ′ ( k ) i = α ( k−1 ) i − α ′ ( k−1 ) i − ηk ( ∂L ∂α ( k−1 ) i − ∂L ∂α ′ ( k−1 ) i ) B ( k ) i = β ( k ) i − β ′ ( k ) i = β ( k−1 ) i − β ′ ( k−1 ) i − ηk ( ∂L ∂β ( k−1 ) i − ∂L ∂β ′ ( k−1 ) i ) , By Theorem B.7 , in G1 and G2,1 , with probability ≥ 1−O ( 1n ) , ∂L ∂αi − ∂L ∂α′i ≤ − ( βi − β′i ) E [ Z ] ( ( p− q p+ q ) 2 λ−O ( log− 1 2 n ) ) ) ≤ − ( βi − β′i ) E [ Z ] λ 2 ( p− q p+ q ) 2 ∂L ∂βi − ∂L ∂β′i ≤ − ( αi − α′i ) E [ Z ] ( ( p− q p+ q ) 2 λ−O ( log− 1 2 n ) ) ≤ − ( αi − α′i ) E [ Z ] λ 2 ( p− q p+ q ) 2 so A ( k ) i = A ( k−1 ) i − ηk ( ∂L ∂α ( k−1 ) i − ∂L ∂α ′ ( k−1 ) i ) ≥ A ( k−1 ) i + ηkE [ Z ( k ) ] λ 2 ( p− q p+ q ) 2 B ( k−1 ) i = A ( k−1 ) i + λ 2 ( p− q p+ q ) 2 B ( k−1 ) i B ( k ) i = B ( k−1 ) i − ηk ( ∂L ∂β ( k−1 ) i − ∂L ∂β ′ ( k−1 ) i ) ≥ B ( k−1 ) i + ηkE [ Z ( k ) ] λ 2 ( p− q p+ q ) 2 A ( k−1 ) i = B ( k−1 ) i + λ 2 ( p− q p+ q ) 2 A ( k−1 ) i . In matrix form : ( A ( k ) i B ( k ) i ) ⪰ ( 1 λ2 ( p−q p+q ) 2 λ 2 ( p−q p+q ) 2 1 ) ( A ( k−1 ) i B ( k−1 ) i ) ( 29 ) Similarly , in G2,2 : ( A ( k ) i B ( k ) i ) ⪰ ( 1 λ4 ( p−q p+q ) 2 λ 8 ( p−q p+q ) 2 1 ) ( A ( k−1 ) i B ( k−1 ) i ) ( 30 ) in G2,3 : ( A ( k ) i B ( k ) i ) ⪰ ( 1 λ4 ( p−q p+q ) 2 λ ( Λ3−Λ2 ) 2 ( p−q ) ( p−q p+q ) 2 1 ) ( A ( k−1 ) i B ( k−1 ) i ) ( 31 ) where Λ2 = pq log − 1 3 n ( 1+log− 1 3 n ) p+q , Λ3 = ( 1−log− 1 3 n ) p2−q2 ( 1−log− 1 3 n ) p+q . A uniform relation among ( 29 ) , ( 30 ) and ( 31 ) can be given by ( 30 ) . By eigenvalue decomposition , we have A ( k ) i B ( k ) i ≥ 1 4 ( 1 + √ 2λ 8 ( p− q p+ q ) 2 ) 2k ( 2− 1 8A ( 0 ) i + 2 1 8B ( 0 ) i ) 2 ≥ A ( 0 ) i B ( 0 ) i ( 1 + √ 2λ 8 ( p− q p+ q ) 2 ) 2k . Therefore we have a uniform lower bound of mi0 −mi1 at the k-th epoch in “ good type ” regime : mi0 −mi1 ≥ A ( 0 ) i B ( 0 ) i λ 2 ( p− q p+ q ) 2 ( 1 + √ 2λ 8 ( p− q p+ q ) 2 ) 2k . Next we consider the “ bad type ” regime . By Corollary A.4 , we have lower bound of mi0 −mi1 in B1 , B2 and B3 respectively . 
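As a numeric sanity check of the eigenvalue-decomposition step above for the "good type" recursion (30), with assumed illustrative values of λ, p, q and unit initial gaps A_i^(0) = B_i^(0) = 1, iterating the 2×2 update keeps the product A_i^(k) B_i^(k) above (1 + (√2·λ/8)((p−q)/(p+q))²)^(2k):

```python
import math

# Assumed illustrative parameters (not the paper's trained values).
lam, p, q = 0.3, 0.5, 0.2
c = ((p - q) / (p + q)) ** 2
A, B = 1.0, 1.0                              # A_0 and B_0, unit initial gaps
rate = 1 + math.sqrt(2) * lam / 8 * c        # top eigenvalue of the 2x2 matrix

for k in range(1, 50):
    # One step of recursion (30): A += (lam/4) c B, B += (lam/8) c A.
    A, B = A + lam / 4 * c * B, B + lam / 8 * c * A
    assert A * B >= rate ** (2 * k)          # claimed geometric lower bound
```

This matches the claimed lower bound because the top eigenvalue of the matrix [[1, (λ/4)c], [(λ/8)c, 1]] is 1 + √((λ/4)c · (λ/8)c) = 1 + (√2·λ/8)c.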
By Theorem B.9, in B1 and B2, |α_i − α_i′| and |β_i − β_i′| shrink. Moreover, since Λ1 > Λ3 (the function x ↦ (xp² − q²)/(xp + q) is monotonically increasing), we have a uniform lower bound on m_0^i − m_1^i in B1 and B2:

m_0^i − m_1^i ≥ −|A_i^(0) B_i^(0)| λ(p − q)Λ1/(p + q)² ≥ −|A_i^(0) B_i^(0)| λp(p − q)/(p + q)²,

since Λ1 = ((1 + log^(−1/3) n)p² − q²)/((1 + log^(−1/3) n)p + q) ≤ (2p² − q²)/(2p + q) ≤ p.

Next we show that |α_i − α_i′| and |β_i − β_i′| can only increase at a limited rate in B3. From item 4 of Theorem B.7, we have

∂L/∂α_i − ∂L/∂α_i′ ≥ −|β_i − β_i′| O(ϵ̃),
∂L/∂β_i − ∂L/∂β_i′ = 2 ∂L/∂β_i ≤ O(ϵ̃).

Therefore (note that β_i < β_i′),

A_i^(k) ≤ A_i^(k−1) + η_k |B_i^(k−1)| O(ϵ̃),
|B_i^(k)| ≤ |B_i^(k−1)| + η_k O(ϵ̃).

Since E[Z^(k)] ≥ Ω(ϵ) (otherwise the model already achieves high accuracy; see the proof of Theorem 2.2), ϵ̃ = o(ϵ) and ϵ² = O(1/n), we get η_k O(ϵ̃) ≤ η_k E[Z^(k)] · (1/n) = 1/n. Suppose A_i^(k) ≥ O(1); otherwise, A_i^(k) and |B_i^(k)| increase at an even smaller rate. So we have

(A_i^(k), |B_i^(k)|)ᵀ ⪯ [[1, 1/n], [1/n, 1]] (A_i^(k−1), |B_i^(k−1)|)ᵀ.

By eigenvalue decomposition, we have

|A_i^(k) B_i^(k)| ≤ (1/2)(1 + 1/n)^(2k) (|A_i^(0)| + |B_i^(0)|)² ≤ k((A_i^(0))² + (B_i^(0))²) (as n → ∞).

The result for "harmless type" neurons follows directly from Corollary 4.3.

Proof of Lemma 6.3. Since all parameters are independent standard normal random variables, we have

E[(α − α′)²] = E[(β − β′)²] = 2, Var[(α − α′)²] = Var[(β − β′)²] = 8.

By Chebyshev's inequality we have

P[ ∑_{i=1}^h (α_i − α_i′)² + (β_i − β_i′)² ≤ 5h ] ≥ 1 − O(1/h).

For neurons initialized as "good type", we have

E[α − α′ | α > α′, α + α′ > 0] = 2/√π, Var[α − α′ | α > α′, α + α′ > 0] = 2 − 4/π,
E[β − β′ | β > β′] = 1/√π, Var[β − β′ | β > β′] = 2 − 1/π.

Let ρ denote the probability that a neuron is initialized as "good type". By G1, G2 and symmetry, ρ = 2P[α > α′, α + α′ > 0, β > β′].
Since P[α > α′, α + α′ > 0] = 1/4 and P[β > β′] = 1/2, we have ρ = 1/4. By the Chernoff bound, P[h_g ≥ (ρ/2)h] ≥ 1 − exp(−(ρ²/4)h), so P[h_g ≥ h/8] ≥ 1 − exp(−h/64). Also, by Chebyshev's inequality,

P[ ∑_{i: the i-th neuron is initialized as "good type"} |α_i − α_i′||β_i − β_i′| ≥ h_g(1/(2π) − k) | h_g ≥ h/8 ] ≥ 1 − (4 − 1/(4π²))/(h_g k²).

Setting k = 1/(10π),

P[ ∑_{i: the i-th neuron is initialized as "good type"} |α_i − α_i′||β_i − β_i′| ≥ h/80 | h_g ≥ h/8 ] ≥ 1 − O(1/h).

So we have

P[ ∑_{i: "good type" at initialization} |α_i − α_i′||β_i − β_i′| ≥ h/80 ]
≥ P[ ∑ |α_i − α_i′||β_i − β_i′| ≥ h/80 | h_g ≥ h/8 ] · P[h_g ≥ h/8]
≥ (1 − O(1/h))(1 − exp(−h/64)) ≥ 1 − O(1/h).

D EXPERIMENTS ON DYNAMICS OF HIDDEN NEURONS

This experiment verifies our arguments in Sections B.1, B.2 and B.3 and Theorem 6.2 about the dynamics of hidden neurons. We set h = 5, λ = 0.3 and train the model on graphs sampled from the SBM with n = 1000, a = 1.0, b = 0.7. The plot of accuracy and its distribution can be seen in Section 7. Here we plot the dynamics of all 5 hidden neurons in Figure 4, with each row corresponding to one hidden neuron. In each plot, the x-axis represents the epoch and the y-axis represents the value of the neuron's parameters. The first column depicts α_i and α_i′, the second column |α_i/α_i′|, the third column |α_i − α_i′|, the fourth column β_i and β_i′, and the last column |β_i − β_i′|. As shown in the figure, the first, second and fourth neurons are of "good" type satisfying (G2). Throughout training, these neurons are preserved as "good" type: they are order-aligned, |α_i/α_i′| is lower bounded by 1, and both |α_i − α_i′| and |β_i − β_i′| keep increasing. All of this verifies our argument in Section B.1. The third neuron is "harmless", satisfying (H2). As shown in Section B.3, this neuron is not updated and makes no contribution to the output. The fifth neuron is of "bad" type satisfying (B2).
Although |α_i − α_i′| and |β_i − β_i′| increase, comparing with the first, second and fourth rows ("good" neurons) shows that they increase at a much smaller rate. This verifies our result in Theorem 6.2.

E TABLE OF NOTATIONS

We list the notations used in this paper for readers' convenience.

n : number of vertices in a graph
p : probability of an intra-community connection
q : probability of a cross-community connection
a : parameter for p, with p = a log³n / n
b : parameter for q, with q = b log³n / n
λ : probability of revealing the label of a vertex
ℓ(x) : label of vertex x
A : adjacency matrix of a graph
Â : normalized adjacency matrix with self-loops, Â = 2/(n(p+q)) (A + I)
X : input features of a graph
W(0) : trainable weights in the first layer of the GCN
W(1) : trainable weights in the second layer of the GCN
B : bias matrix of the GCN, with each row of B being [b0, b1]
b0 : bias in the first component
b1 : bias in the second component
h : number of hidden features
f0 : logit in the first component without bias
f1 : logit in the second component without bias
g0 : logit in the first component, g0 = f0 + b0
g1 : logit in the second component, g1 = f1 + b1
∆ : difference between logits, ∆ = g0 − g1 = f0 − f1 + b0 − b1
LEARNING GUARANTEES FOR GRAPH CONVOLUTIONAL NETWORKS ON THE STOCHASTIC BLOCK MODEL

1 INTRODUCTION

There is presently a large gap between what can be accomplished in practice using deep learning and what can be satisfactorily explained and predicted by the theory of deep learning. Nevertheless, the past several years have seen substantial developments in the theory of deep learning (Ge et al., 2017; Brutzkus & Globerson, 2017; Zhang et al., 2019a; Goel et al., 2020; Chen et al., 2020a). One factor contributing to the gap between the theory and practice of traditional NNs is that real-world data sets tend to have complex structure that is difficult to capture with formal definitions. For example, popular image classification models are capable of memorizing arbitrary data (Zhang et al., 2016), and yet they exhibit astonishing generalization performance on accurately-labeled natural images. Hence, any rigorous proof of the observed generalization performance of deep learning models on image classification tasks will necessarily require assumptions about the data that are sharp enough to separate random inputs from natural images. Because of the difficulty of giving an adequate characterization of real-world data, much of the recent progress in deep learning theory has instead focused on proving results using very simple (e.g., Gaussian) input distributions or in distribution-free settings (Ge et al., 2017; Brutzkus & Globerson, 2017; Zhang et al., 2019a; Vempala & Wilmes, 2019). Compared to traditional feed-forward (dense, convolutional, etc.) NNs, the theory of graph neural networks (GNNs) is still in its infancy. On the other hand, it appears substantially easier to give plausible descriptions of the combinatorial structure of real-world graph data sets than, e.g., to characterize the distribution of natural images (Drobyshevskiy & Turdakov, 2019).
We therefore believe that GNNs offer a natural setting for developing provable guarantees that are able to capture the power of deep learning on real-world datasets . In this paper , we contribute to that goal by giving the first rigorous guarantees of efficient semi-supervised learning of stochastic block models via a GNN . 1.1 GRAPH NEURAL NETWORKS Many natural datasets for diverse machine learning problems have a graph structure , including social networks , molecular structures , and transit networks . In order to efficiently exploit such combinatorial structure , a variety of GNN models have been proposed , tuned for different kinds of tasks . A number of taxonomies of GNN models have been proposed ( Zhou et al. , 2018 ; Wu et al. , 2021 ) ; one of the most essential differences between different GNN models is whether they are meant to label the graph as a whole , or to label individual components of the graph , particularly vertices . From a theoretical perspective , the best understood tasks for GNNs concern labeling the graph as a whole , for example for the task of classifying a graph by its isomorphism type ( Sato , 2020 ) . In particular , it has been established that many GNN models are of comparable power to various versions of the Weisfeiler-Leman hierarchy1 ( Xu et al. , 2018 ; Morris et al. , 2019 ) . Some progress has also been made on the theory of GNNs for vertex-labeling tasks . Recent works by Sato et al . describe the representational power of certain GNN models for tasks such as computing minimum vertex covers ( Sato et al. , 2019 ) . Garg et al . also give bounds on the representational power of GNN models , as well as using Rademacher bounds to estimate the generalization ability of GNNs ( Garg et al. , 2020 ) . Our results concern the task of semi-supervised community detection . In this problem , each vertex belongs to one community , and some subset of the vertices are labeled according to their community membership . 
The task is to classify the community membership of the remaining vertices . This task has been one of the most intensively studied problems in the GNN literature , but there have not yet been any provable guarantees on the performance of proposed models . We study ( spatial-based ) graph convolutional models similar to the GCN model proposed in Kipf & Welling ( 2017 ) . A single layer of such a model computes weights at each node by aggregating the weights at neighboring nodes and applying an activation function with learned parameters , e.g. , a linear map followed by a ReLU . Many variations on this theme , including various sophisticated training regimes , have been proposed ( Chen et al. , 2017 ; Gao et al. , 2018 ; Li et al. , 2018 ; Zhang et al. , 2019b ; Chen et al. , 2018 ) , but no provable guarantees have been available for the performance of such models on natural data distributions , until the present work . 2 MAIN RESULTS One motivation for GNNs as a target for progress in deep learning theory is that there are well-studied graph distributions that plausibly capture some of the structure of real-world data ( Drobyshevskiy & Turdakov , 2019 ) . For example , even fairly simple preferential attachment models plausibly capture some of the essential structure of the web ( Kumar et al. , 2000 ) . Other graph models naturally capture community structures , the simplest of which is the Stochastic Block Model ( SBM ) ( Holland et al. , 1983 ) . A graph is sampled from a SBM by first partitioning vertices into communities ( with fixed or random sizes ) . Two vertices are connected with probability p if they belong to the same community and probability q if they belong to different communities . In this paper , we consider the case of an SBM with two equal-sized communities in which vertices have label 0 and 1 respectively . We denote the label of vertex x by ℓ ( x ) ∈ { 0 , 1 } . 
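A minimal sketch (our own illustrative code, not the authors') of sampling this two-community SBM with partially revealed labels. Each label is revealed independently with some probability `lam`, and the input features follow the encoding described in this section: (0, 0) for a hidden label, (1, 0) or (0, 1) for a revealed label of class 0 or 1.

```python
import random

def sample_sbm(n, p, q, lam, seed=0):
    """Sample a two-community SBM; reveal each label independently w.p. lam."""
    rng = random.Random(seed)
    labels = [0] * (n // 2) + [1] * (n - n // 2)   # two equal-sized communities
    A = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            prob = p if labels[u] == labels[v] else q
            if rng.random() < prob:
                A[u][v] = A[v][u] = 1               # undirected edge
    X = []                                          # input-layer features
    for v in range(n):
        if rng.random() < lam:
            X.append((1, 0) if labels[v] == 0 else (0, 1))
        else:
            X.append((0, 0))
    return A, X, labels

A, X, labels = sample_sbm(60, 0.3, 0.05, 0.5)
assert all(A[u][v] == A[v][u] for u in range(60) for v in range(60))  # undirected
assert all(x in {(0, 0), (1, 0), (0, 1)} for x in X)                  # valid features
```

The parameter values here are arbitrary; the analysis below fixes p and q to specific functions of n.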
The graphs are parameterized as SBM(n, p, q), where n is the number of vertices, p is the probability of an intra-community connection, and q is the probability of a cross-community connection. We allow n to vary (but will require it to be sufficiently large), while p and q are of the form p = a log³n / n and q = b log³n / n for some fixed constants a > b. In the semi-supervised setting, the community labels of some portion of the vertices are revealed. We assume the label of each vertex is revealed independently with probability λ. The input-layer feature at a vertex x is (0, 0) if its label is not revealed, (1, 0) if its label is revealed to be 0, and (0, 1) if its label is revealed to be 1.

Assumption 2.1 (Sparse Stochastic Block Model). The probabilities of intra- and cross-community connections are p = a log³n / n and q = b log³n / n, where a > b are constants.

We study the problem of recovering the communities from such graphs using GNN models. Of course, recovering the communities of an SBM graph has been well studied, and its computational complexity is fully understood in most cases (Abbe & Sandon, 2015; Kawamoto et al., 2019). SBM models are therefore a natural test case for understanding the power of GNN models for learning community structure, and experimental studies have been done in this setting (Chen et al., 2020b; Yadav et al., 2019).

¹The Weisfeiler-Leman hierarchy is a family of polynomial-time iterative algorithms which provides a necessary but insufficient condition for graph isomorphism.

Abbe et al. (2014) show a sharp threshold for the task of community recovery: (√p − √q)·√(n / log n) > √2. This threshold clearly holds in our case (at sufficiently large values of n), since p = a log³n / n, q = b log³n / n and a > b. The contribution here is not to learn the community models.
Rather, it is showing that (multi-layer) GCNs solve the classification problem, which is very much not trivial (the problem is non-convex, and the training loss curve is empirically non-monotonic). Our GNN models will be trained on a graph or several graphs generated by the SBM(n, p, q) model, and we seek to understand their accuracy on arbitrary SBM(n, p, q) graphs not necessarily in the training set but with the same parameters a, b determining p and q (with n allowed to vary). In particular, we study spatial-based graph convolutional models along the lines of the Graph Convolutional Networks (GCN) introduced in Kipf & Welling (2017). Each layer of the model computes a feature vector at every vertex of an input graph based on features of nearby vertices in the previous layer. A typical layer-wise update rule is of the form X^(k+1) = ϕ(Â X^(k) W^(k)), where

• Â is a suitably normalized adjacency matrix of shape n × n, where n is the number of vertices; usually Â includes self-loops.
• X^(k) gives the feature vector in the k-th layer at each vertex, as a matrix of shape n × m_k, where m_k is the number of features in layer k.
• ϕ is an activation function, such as the ReLU.
• W^(k) are the trainable weights in the k-th layer, a matrix of shape m_k × m_(k+1).

In our version of this model, we define Â = 2/(n(p+q)) · Ã, where Ã = A + I, A is the adjacency matrix of a given graph, and I is the identity matrix. For the given SBM(n, p, q), a randomly selected vertex has n(p+q)/2 neighbors in expectation, so Â is obtained by normalizing each row of A + I by the average size of a neighborhood. Since very deep GCN models seem to provide little empirical benefit (Li et al., 2018), we use a single hidden layer with a softmax output layer. Furthermore, we introduce a bias term B at the second layer.
So the model has the following form:

f(X, A) = softmax( Â ϕ(Â X W^(0)) W^(1) + B ) = softmax( (4/(n²(p+q)²)) Ã ϕ(Ã X W^(0)) W^(1) + B ),   (1)

where X is the input feature matrix of the graph and W^(0), W^(1) and B are trainable parameters. Let h denote the number of hidden features, which equals the number of columns of W^(0) and the number of rows of W^(1). We define the accuracy of the model as the probability of correctly predicting the label of a single vertex in a randomly generated SBM(n, p, q) graph, where the label of each vertex is revealed with probability λ. We can now state our main result.

Theorem 2.2. For any ϵ > 0 and δ > 0, given a GCN model with 1/δ ≤ h ≤ n hidden features and with parameters initialized independently from N(0, 1), if training graphs are sampled from SBM(n, p, q) with n ≥ max(Ω(1/ϵ²), Ω(1/δ)) and the label of each vertex revealed with probability λ, and if the model is trained by coordinate descent for k = O(log log(1/ϵ)) epochs, then with probability ≥ 1 − δ, the model achieves accuracy ≥ 1 − 4ϵ.

Remark. We treat λ as a constant, so it is omitted in the big-O and Ω notation in the sampling and training complexity. We emphasize that the novelty of this theorem is not in learning two-class SBM models as such; this is a long-solved problem. Instead, this is the first proof of efficient learning for a GCN on semi-supervised community detection tasks using a natural family of random graph models.

3 PRELIMINARIES

In this section, we first introduce notation (a table of notations is also given in the appendix for readers' convenience) and some interpretations, and then outline the structure of the paper. Given a vertex y, denote the row of ÃX corresponding to y by (t_0^y, t_1^y), so t_0^y and t_1^y give the numbers of neighbors of y (including perhaps y itself) with revealed labels in class 0 and class 1 respectively.
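A short check (assumed toy setup, not the paper's code) that the rows of ÃX = (A + I)X indeed count, for each vertex, its revealed neighbors of each class, self-loop included:

```python
import random

rng = random.Random(1)
n = 8
A = [[0] * n for _ in range(n)]
for u in range(n):
    for v in range(u + 1, n):
        if rng.random() < 0.5:
            A[u][v] = A[v][u] = 1
labels = [rng.randrange(2) for _ in range(n)]
revealed = [rng.random() < 0.5 for _ in range(n)]
# Input features: one-hot revealed label, (0, 0) otherwise.
X = [((1, 0) if labels[v] == 0 else (0, 1)) if revealed[v] else (0, 0)
     for v in range(n)]

def t_counts(y):
    """Row y of (A + I) X, i.e. (t_0^y, t_1^y)."""
    w = [1 if (A[y][z] == 1 or z == y) else 0 for z in range(n)]  # row y of A + I
    return (sum(w[z] * X[z][0] for z in range(n)),
            sum(w[z] * X[z][1] for z in range(n)))

for y in range(n):
    nbrs = [z for z in range(n) if A[y][z] == 1 or z == y]
    t0 = sum(1 for z in nbrs if revealed[z] and labels[z] == 0)
    t1 = sum(1 for z in nbrs if revealed[z] and labels[z] == 1)
    assert t_counts(y) == (t0, t1)
```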
Let W ( 0 ) = ( α1 α2 · · · αh α′1 α ′ 2 · · · α′h ) , W ( 1 ) = β1 β ′ 1 β2 β ′ 2 ... ... βh β ′ h , B = b0 b1 b0 b1 ... ... b0 b1 . Then αit y 0 + α ′ it y 1 , 1 ≤ i ≤ h gives h features of vertex y in the hidden layer . The inner product of the yth row of ϕ ( ÃXW ( 0 ) ) and the columns of W ( 1 ) gives weighted sums of features of y : ∑h i=1 βiϕ ( αit y 0 + α ′ it y 1 ) and ∑h i=1 β ′ iϕ ( αit y 0 + α ′ it y 1 ) , where ϕ represents the ReLU function . Given a vertex x , the row of Âϕ ( ÂXW ( 0 ) ) W ( 1 ) corresponding to x is denoted by ( f0 ( x ) , f1 ( x ) ) and is of the form ( 4 n2 ( p+ q ) 2 ∑ y∈G 1 [ y ∼ x ] h∑ i=1 βiϕ ( αit y 0+α ′ it y 1 ) , 4 n2 ( p+ q ) 2 ∑ y∈G 1 [ y ∼ x ] h∑ i=1 β′iϕ ( αit y 0+α ′ it y 1 ) ) , ( 2 ) where 1 [ y ∼ x ] is equal to 1 if y and x are connected , 0 otherwise . Denote f i0 ( x ) : = 4βi n2 ( p+ q ) 2 ∑ y∈G 1 [ y ∼ x ] ϕ ( αity0+α′it y 1 ) f i 1 ( x ) : = 4β′i n2 ( p+ q ) 2 ∑ y∈G 1 [ y ∼ x ] ϕ ( αity0+α′it y 1 ) , so f0 ( x ) = ∑h i=1 f i 0 ( x ) and f1 ( x ) = ∑h i=1 f i 1 ( x ) . Denote gj ( x ) : = fj ( x ) + bj , j = 0 , 1. , where ( g0 ( x ) , g1 ( x ) ) represents the logit of the model corresponding to x. Denote ∆ ( x ) : = g0 ( x ) − g1 ( x ) . In order to make correct predictions , we need ∆ ( x ) > 0 when ℓ ( x ) = 0 and ∆ ( x ) < 0 when ℓ ( x ) = 1 . The bias term B is useful in our analysis because its derivative controls how imbalanced the current loss is between the classes . In training we consider the cross-entropy loss denoted as L , and have E [ ∂L ∂b0 ] = −E [ ∂L ∂b1 ] = −1 2 ( E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ) , where Z = exp ( g1−ℓ ( x ) ( x ) ) exp ( g0 ( x ) ) +exp ( g1 ( x ) ) . Z can be regarded as a measure of wrong prediction : the numerator is the exponential of the output corresponding to the wrong label and the denominator is a normalizer . It is easy to see that Z > 12 if the prediction is wrong ; Z < 1 2 if prediction is correct . 
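A quick numeric check of this fact about Z = exp(g_{1−ℓ(x)}(x)) / (exp(g0(x)) + exp(g1(x))): Z exceeds 1/2 exactly when the logit of the wrong class dominates.

```python
import math

def Z(g0, g1, label):
    """Measure of wrong prediction: softmax weight of the incorrect class."""
    wrong = g1 if label == 0 else g0
    return math.exp(wrong) / (math.exp(g0) + math.exp(g1))

assert Z(2.0, -1.0, 0) < 0.5                 # g0 > g1 and label 0: correct
assert Z(2.0, -1.0, 1) > 0.5                 # g0 > g1 but label 1: wrong
assert abs(Z(0.0, 0.0, 0) - 0.5) < 1e-12     # tied logits give exactly 1/2
```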
When |E[∂L/∂b_0]| ≈ 0, the model's loss is balanced in the sense that |E[Z | ℓ(x) = 0] − E[Z | ℓ(x) = 1]| ≈ 0. In order to have balanced performance in every epoch, we train the model by coordinate descent instead of conventional gradient descent. Specifically, in each epoch we first update b_0 and b_1 until |E[∂L/∂b_0]| is smaller than some threshold, and then update the other parameters.

The performance of the model depends on the concentration and separation of ∆(x) for ℓ(x) = 0 and ℓ(x) = 1 respectively. In Section 4 we show that ∆(x) is concentrated at one of two values, denoted µ_0 and µ_1, depending only on whether the label ℓ(x) is 0 or 1. The proof depends on the different parameter regimes of the hidden neurons. In Section 5, we analyze the dynamics of the hidden neurons throughout training to show that the concentration and separation improve at a controlled rate. Based on this information, in Section 6 we prove the main theorem. Section 7 presents experimental results that verify our theory, and the paper ends with future directions in Section 8.

4 CONCENTRATION AND SEPARATION OF OUTPUT

The difference of the logits is

∆(x) = g_0(x) − g_1(x) = f_0(x) − f_1(x) + b_0 − b_1 = Σ_{i=1}^h ∆_i(x) + b_0 − b_1,

where

∆_i(x) = f_0^i(x) − f_1^i(x) = (4(β_i − β′_i)/(n²(p+q)²)) Σ_{y∈G} 1[y∼x] ϕ(α_i t_0^y + α′_i t_1^y).

For brevity, we write ∆(x) as ∆ and ∆_i(x) as ∆_i. In order to estimate ∆, we need to estimate each ∆_i, 1 ≤ i ≤ h. Our fine-grained analysis of the dynamics of coordinate descent on GCNs relies on a classification of neurons into three families based on the sign and scale of the parameters: "good type", "bad type" and "harmless type". The names indicate whether the neuron makes a positive contribution to the value of µ_0 − µ_1, where µ_0 and µ_1 are high-probability estimates of ∆(x) for ℓ(x) = 0 and ℓ(x) = 1 respectively.
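Returning briefly to the training scheme described at the start of this section: the bias-balancing phase of each coordinate-descent epoch (update b_0, b_1 until |E[∂L/∂b_0]| falls below a threshold) can be sketched as follows. This is a hedged illustration, not the paper's exact training loop: the expectation E[∂L/∂b_0] = (−1)^{1−ℓ(x)} E[Z] per class (Theorem B.1) is estimated empirically over a small batch, and `logits_fn`, the batch, and the step size `eta` are stand-ins.

```python
import numpy as np

def softmax2(g):
    e = np.exp(g - g.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def balance_biases(logits_fn, labels, b, eta=1.0, tol=1e-3, max_iter=500):
    """Step (b0, b1) until the empirical |E[dL/db0]| drops below tol, where
    dL/db0 = (-1)^{1-l(x)} Z and Z is the softmax mass on the wrong class.
    logits_fn(b) returns the n x 2 logit matrix for biases b."""
    for _ in range(max_iter):
        P = softmax2(logits_fn(b))
        Z = P[np.arange(len(labels)), 1 - labels]         # wrong-class prob.
        grad_b0 = np.mean(np.where(labels == 0, -Z, Z))   # empirical E[dL/db0]
        if abs(grad_b0) < tol:
            break
        b = b - eta * np.array([grad_b0, -grad_b0])       # dL/db1 = -dL/db0
    return b

# toy check: fixed pre-bias logits, then balance the biases
base = np.array([[1.0, 0.0], [0.5, 0.2], [0.0, 1.0], [0.3, 0.8]])
labels = np.array([0, 0, 1, 1])
b = balance_biases(lambda bb: base + bb, labels, np.zeros(2))
```

Because the two bias derivatives are exact negatives of each other, the update only moves b_0 − b_1, i.e. it shifts ∆ uniformly until the wrong-prediction mass Z is balanced between the two classes.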
We show that a "good type" neuron makes a positive contribution; the contribution of a "bad type" neuron is negative but lower bounded; and a "harmless type" neuron's contribution is non-negative (see Corollary A.4 and the remark following it). We describe the parameter regime of each type in the following subsections, and analyze the dynamics of these types throughout coordinate descent in the next section. First we give some definitions.

Definition 1. For 1 ≤ i ≤ h, we call (α_i, α′_i, β_i, β′_i) the i-th neuron of the model, where (α_i, α′_i)ᵀ is the i-th column of W^(0) and (β_i, β′_i) is the i-th row of W^(1).

Definition 2. We say that the i-th neuron is order-aligned if (α_i − α′_i)(β_i − β′_i) > 0; otherwise we say it is order-misaligned.

4.1 CLASSIFICATION OF NEURON PARAMETER REGIMES

We say the i-th neuron is of "good type" if it satisfies either (G1) or (G2) below. (There is also the symmetric case obtained by switching α_i with α′_i and β_i with β′_i. For brevity, we only consider the cases with α_i > α′_i; this applies to the "bad" and "harmless" types below as well.) Neurons of this type are order-aligned, and either both α_i and α′_i are positive or the ratio between α_i and α′_i is large enough:

α_i > α′_i > 0 and β_i > β′_i   (G1)
α_i > 0 > α′_i, |α_i/α′_i| > 1 and β_i > β′_i   (G2)

We say the i-th neuron is of "bad type" if it satisfies (B1), (B2) or (B3). Neurons of this type are order-misaligned, and α_i, α′_i either are both positive or have opposite signs:

α_i > α′_i > 0 and β_i < β′_i   (B1)
α_i > 0 > α′_i, |α_i/α′_i| > (q/p)(1 + log^{−1/3} n) and β_i < β′_i   (B2)
α_i > 0 > α′_i, |α_i/α′_i| ≤ (q/p)(1 + log^{−1/3} n)   (B3)

We say that the i-th neuron is of "harmless type" if it satisfies either (H1) or (H2):

α_i > 0 > α′_i, |α_i/α′_i| ∈ ( (q/p)(1 + log^{−1/3} n), 1 ] and β_i > β′_i   (H1)
α_i ≤ 0 and α′_i ≤ 0   (H2)

4.2 CONCENTRATION AND SEPARATION

Theorem 4.1.
If the i-th neuron is of "good type" satisfying (G1) or of "bad type" satisfying (B1), then for ℓ(x) = 0:

P[ |∆_i − (λ(β_i−β′_i)/(p+q)²)[(p²+q²)α_i + 2pqα′_i]| ≤ (α_i−α′_i)(β_i−β′_i) O(log^{−1/2} n) | ℓ(x) = 0 ] ≥ 1 − O(1/n²),

and for ℓ(x) = 1:

P[ |∆_i − (λ(β_i−β′_i)/(p+q)²)[2pqα_i + (p²+q²)α′_i]| ≤ (α_i−α′_i)(β_i−β′_i) O(log^{−1/2} n) | ℓ(x) = 1 ] ≥ 1 − O(1/n²).

Similar concentration results hold for neurons satisfying (G2), (B2) and (B3), and for neurons of "harmless type".

We apply the method of bounded differences to show the concentration; the details are given in the appendix. Given the concentration of ∆_i for each type of neuron, we estimate the concentration of the output ∆ = Σ_{i=1}^h ∆_i + b_0 − b_1. For the i-th neuron, we denote the high-probability estimate of ∆_i given in the statement of Theorem 4.1 by m_0^i when ℓ(x) = 0 and by m_1^i when ℓ(x) = 1. By a union bound, we have the following corollary.

Corollary 4.2. Given a vertex x ∈ G with unrevealed label, we have

P[ |∆ − µ_j| ≤ δ | ℓ(x) = j ] ≥ 1 − O(1/n),   (3)

where

µ_j = (Σ_{i=1}^h m_j^i) + b_0 − b_1, j = 0, 1,  δ = Σ_{i=1}^h |α_i − α′_i||β_i − β′_i| O(log^{−1/2} n).

For any ϵ > 0, we require the probability of concentration in (3) to be at least 1 − ϵ̃, where ϵ̃ = o(ϵ). If we choose ϵ̃ = ϵ², then we set 1 − O(1/n) ≥ 1 − ϵ², i.e. n ≥ Ω((1/ϵ)²). Our following analysis is based on this condition.

From Theorem 4.1, we have the following result about the value of m_0^i − m_1^i.

Corollary 4.3.

• If the i-th neuron is of "good type" and satisfies (G1), then m_0^i − m_1^i = λ|α_i − α′_i||β_i − β′_i| ((p−q)/(p+q))².

• If the i-th neuron is of "bad type" and satisfies (B1), then m_0^i − m_1^i = −λ|α_i − α′_i||β_i − β′_i| ((p−q)/(p+q))².

• If the i-th neuron is of "harmless type" and satisfies (H1), then m_0^i − m_1^i = λ|β_i − β′_i||pα_i + qα′_i| (p−q)/(p+q)².
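The case analysis of Section 4.1 and the closed forms of Corollary 4.3 are mechanical enough to transcribe directly into code, which can serve as a sanity check on the regime boundaries. The sketch below is such a transcription (the symmetric case is handled by swapping both pairs); the numeric parameters used in the checks are arbitrary illustrative values, and `separation_gap` only covers the three cases stated in Corollary 4.3, not the appendix regimes.

```python
import math

def classify_neuron(a, ap, b, bp, p, q, n):
    """Classify the neuron (alpha_i, alpha'_i, beta_i, beta'_i) into 'good',
    'bad', or 'harmless' following (G1)-(H2)."""
    if a < ap:                         # symmetric case: swap both pairs
        a, ap, b, bp = ap, a, bp, b
    if a <= 0 and ap <= 0:
        return "harmless"              # (H2)
    if ap > 0:                         # a > a' > 0
        return "good" if b > bp else "bad"            # (G1) / (B1)
    # now a > 0 > a' (treating a' = 0 as infinite ratio)
    t = (q / p) * (1 + math.log(n) ** (-1.0 / 3.0))
    r = abs(a / ap) if ap != 0 else float("inf")
    if r <= t:
        return "bad"                   # (B3)
    if b < bp:
        return "bad"                   # (B2)
    return "good" if r > 1 else "harmless"            # (G2) / (H1)

def separation_gap(case, a, ap, b, bp, p, q, lam):
    """m_0^i - m_1^i for the closed-form cases of Corollary 4.3."""
    if case == "G1":
        return lam * abs(a - ap) * abs(b - bp) * ((p - q) / (p + q)) ** 2
    if case == "B1":
        return -lam * abs(a - ap) * abs(b - bp) * ((p - q) / (p + q)) ** 2
    if case == "H1":
        return lam * abs(b - bp) * abs(p * a + q * ap) * (p - q) / (p + q) ** 2
    raise ValueError(case)
```

For p = 0.3, q = 0.1, n = 1000 the (B3)/(H1) threshold (q/p)(1 + log^{−1/3} n) is roughly 0.51, so a neuron with ratio 0.4 lands in "bad" while one with ratio 0.8 is "harmless".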
Similar results for neurons satisfying (G2), (B2), (B3) and (H1) are stated in the appendix, along with the proofs.

Remark.

• As we can see from Corollary 4.3, the value of m_0^i − m_1^i is positive for "good type" neurons, non-negative for "harmless type" neurons, and possibly negative (but lower bounded) for "bad type" neurons. Since positive values of m_0^i − m_1^i decrease the loss of the model, this explains the names of the types.

• m_0^i − m_1^i is proportional to |α_i − α′_i||β_i − β′_i|. In the next section, we analyze the dynamics of the parameters α_i, α′_i, β_i, β′_i. Using our understanding of these dynamics, in Theorem 6.2 we present a refined result about the separation of the output which depends only on the initialization of the parameters.

• Let c := µ_0 − µ_1 = Σ_{i=1}^h (m_0^i − m_1^i). By the two corollaries above, we have δ = o(|c|). The balanced loss guaranteed by the bias term and the coordinate descent scheme ensures that µ_0 = Ω(c) and −µ_1 = Ω(c). It then follows that if the loss is sufficiently small, both µ_0 and µ_1 have the correct sign, i.e. µ_0 > 0 > µ_1. (Otherwise, due to the concentration of the output, the model makes wrong predictions and the loss is large.) So we will eventually have δ = o(µ_0) and δ = o(|µ_1|).

5 DYNAMICS OF PARAMETERS

In this section, we describe the dynamics of each type of neuron under coordinate descent, which can be visualized in the following figure; the arrows indicate movements between types that can happen with non-negligible probability. There are two noteworthy points in this figure. First, "good type" parameters are preserved under coordinate descent. Second, no arrows come into "bad type" except from itself. These dynamics are proved by estimating the gradient of the loss function for each type of neuron.
Because of the non-linearity of the activation, we rely heavily on the concentration results proved above to obtain tight estimates; without them, even estimating the sign of the gradient seems difficult. The proofs and experiments concerning the dynamics of the hidden neurons are deferred to the appendix.

6 LEARNING GUARANTEE

In this section, we prove our main result, which states that with high probability a trained GCN can detect communities in the SBM with any desired accuracy. The proof is based on the following theorem, which shows that if µ_0 and µ_1 are separated enough, then the model achieves high accuracy.

Theorem 6.1. For any ϵ > 0, if the difference between µ_0 and µ_1 is large enough that σ(−(µ_0 − µ_1)/2) < ϵ/2, and if |E[∂L/∂b_0]| < ϵ/4, then

P[∆ < 0 | ℓ(x) = 0] < 4ϵ,  P[∆ > 0 | ℓ(x) = 1] < 4ϵ,

where σ(x) := 1/(1 + exp(−x)) is the sigmoid function.

Next we show that the model can achieve such separation between µ_0 and µ_1 through coordinate descent. In order to make constant-size updates of the parameters at every epoch, we set an adaptive learning rate η_k = 1/E[Z^(k)], where Z^(k) is the value of Z at the k-th epoch. We first refine Corollary 4.3 about the separation of output for each type of neuron (m_0^i − m_1^i) using the dynamics of the parameters.

Theorem 6.2 (separation of output). Let m_0^i and m_1^i be defined as in Section 4, and train the model for k epochs by the coordinate descent defined above with adaptive learning rate η_k = 1/E[Z^(k)]. Then:

• if the i-th neuron is of "good type", then

m_0^i − m_1^i ≥ A_i^(0) B_i^(0) (λ/2) ((p−q)/(p+q))² (1 + (√(2λ)/8) ((p−q)/(p+q))²)^{2k};

• if the i-th neuron is of "bad type", then

m_0^i − m_1^i ≥ −k ((A_i^(0))² + (B_i^(0))²) λp(p−q)/(p+q)²;

• if the i-th neuron is of "harmless type", then m_0^i − m_1^i ≥ 0,

where A_i^(0) = α_i^(0) − α′_i^(0) and B_i^(0) = β_i^(0) − β′_i^(0).
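The doubly exponential growth in Theorem 6.2 for good-type neurons, against the only linear drag from bad-type neurons, is what makes k = O(log log(1/ϵ)) epochs suffice: the separation behaves like C_1(1+C_2)^{2k} − C_3 k, which must reach the 2 log(2/ϵ) threshold of Theorem 6.1. A quick numeric sketch, with C_1, C_2, C_3 chosen as illustrative stand-ins for the p, q, λ dependence:

```python
import math

def epochs_needed(eps, C1=0.05, C2=0.2, C3=0.01):
    """Smallest k with C1 (1+C2)^(2k) - C3 k >= 2 log(2/eps).  The
    constants are illustrative only; the doubly exponential good-type
    term makes k grow like log log(1/eps)."""
    target = 2 * math.log(2 / eps)
    k = 0
    while C1 * (1 + C2) ** (2 * k) - C3 * k < target:
        k += 1
    return k

# k grows extremely slowly as eps shrinks over 11 orders of magnitude:
ks = [epochs_needed(10.0 ** -e) for e in (1, 3, 6, 12)]
```

Halving ϵ roughly adds a constant to the logarithm of the target, but the left side squares its own growth factor every epoch, so each additional epoch covers an exponentially larger range of ϵ.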
Next we present a result about the initialization, which shows that with high probability there are enough "good type" neurons and the parameters have appropriate scale.

Lemma 6.3. Suppose all parameters in W^(0) and W^(1) are initialized independently from the standard normal distribution. Then the number h_g of neurons initialized as "good type" satisfies

P[h_g ≥ h/8] ≥ 1 − exp(−h/64).

Furthermore,

P[ Σ_{i=1}^h (α_i − α′_i)² + (β_i − β′_i)² ≤ 5h ] ≥ 1 − O(1/h),  P[ Σ_{i: i-th neuron initialized as "good type"} |α_i − α′_i||β_i − β′_i| ≥ h/80 ] ≥ 1 − O(1/h).

Now we can prove the final result.

Proof of Theorem 2.2. First we show that if the loss E[Z] is small enough, the model achieves the desired accuracy. Indeed, if E[Z] < 2ϵ, then since

E[Z] = E[Z | pred. is wrong] P[pred. is wrong] + E[Z | pred. is correct] P[pred. is correct] ≥ (1/2) P[pred. is wrong],

we have P[pred. is wrong] ≤ 4ϵ, i.e. P[pred. is correct] > 1 − 4ϵ. Otherwise E[Z] ≥ 2ϵ, and since E[Z] = (1/2)(E[Z | ℓ(x) = 0] + E[Z | ℓ(x) = 1]), we have E[Z | ℓ(x) = 0] + E[Z | ℓ(x) = 1] ≥ 4ϵ. On the other hand, |E[∂L/∂b_0]| < ϵ implies that |E[Z | ℓ(x) = 0] − E[Z | ℓ(x) = 1]| < 2ϵ. By Theorem 6.2,

µ_0 − µ_1 = Σ_{i=1}^h (m_0^i − m_1^i) = Σ_{i ∈ "good"} (m_0^i − m_1^i) + Σ_{i ∈ "bad"} (m_0^i − m_1^i) + Σ_{i ∈ "harmless"} (m_0^i − m_1^i)
 ≥ (λ/2)((p−q)/(p+q))² (1 + (√(2λ)/8)((p−q)/(p+q))²)^{2k} Σ_{i ∈ "good"} A_i^(0) B_i^(0) − k (λp(p−q)/(p+q)²) Σ_{i ∈ "bad"} ((A_i^(0))² + (B_i^(0))²).

By Lemma 6.3, with probability ≥ 1 − O(1/h),

Σ_{i ∈ "good"} A_i^(0) B_i^(0) ≥ h/80,  Σ_{i ∈ "bad"} ((A_i^(0))² + (B_i^(0))²) ≤ 5h.

Since h ≥ 1/δ, with probability ≥ 1 − δ,

µ_0 − µ_1 ≥ h( (λ/160)((p−q)/(p+q))² (1 + (√(2λ)/8)((p−q)/(p+q))²)^{2k} − 5kλp(p−q)/(p+q)² ) ≥ h( C_1(1 + C_2)^{2k} − C_3 k ),   (4)

where C_1, C_2 and C_3 are constants determined by p, q and λ.
By Theorem 6.1, if (4) ≥ 2 log(2/ϵ) (so that σ(−(µ_0 − µ_1)/2) ≤ ϵ/2), then the model achieves accuracy ≥ 1 − 4ϵ. It is sufficient to have C_1(1 + C_2)^{2k} − C_3 k ≥ 2 log(2/ϵ), i.e. k = O(log log(1/ϵ)).

7 EXPERIMENTS

We present experiments verifying Theorem 2.2. In particular, our experiments demonstrate that accuracy increases with n, that the probability of obtaining a high-accuracy model increases with h, and that coordinate descent is able to recover high-accuracy models in the sparse regime of Assumption 2.1. Additional plots of the dynamics of the hidden neurons, with their ratios and differences, appear in the appendix.

Experiment 1. In this experiment, we plot an estimate of the accuracy versus the training epoch for varying n. The parameters p, q of the SBM follow Assumption 2.1, where we choose a = 1.0 and b = 0.7. We set h = 20, λ = 0.3 and run 40 independent experiments for each of n = 250, 500 and 1000. In each experiment we train the model for 100 epochs. The training set consists of 40 graphs randomly generated from SBM(n, p, q). We validate the performance by the percentage of correct predictions on 200 random vertices, each from a randomly generated graph. The result is shown in Figure 2; the shaded region for each n is obtained from the max, min and mean percentage over the 40 experiments. The result verifies Theorem 2.2, which shows that the accuracy of the model increases with n.

Experiment 2. In this experiment, we show the effect of the number of hidden neurons h. The parameters of the SBM are the same as in Experiment 1. We set h = 2, 5, 20. For each pair (n, h) we run 40 independent experiments and show the distribution of validation accuracy in Figure 3. From the top row to the bottom, n increases from 250 to 1000; from the left column to the right, h increases from 2 to 20. In each plot, the x-axis represents the accuracy and the y-axis the number of experiments.
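In the same experimental spirit, the per-neuron separation predicted by Corollary 4.3 can be checked with a lightweight Monte Carlo simulation. The sketch below samples the neighbor counts directly from their binomial descriptions rather than sampling a whole graph, a simplification that ignores the O(1/n) dependencies between edges and treats the number of revealed labels per class as exactly nλ/2; all parameter values are illustrative.

```python
import numpy as np

def simulate_delta_i(label, a, ap, b_gap, n, p, q, lam, trials, rng):
    """Monte Carlo estimate of E[Delta_i | l(x)=label] for one neuron with
    parameters (a, ap) = (alpha_i, alpha'_i) and b_gap = beta_i - beta'_i."""
    c = 4.0 * b_gap / (n ** 2 * (p + q) ** 2)
    p_in, p_out = (p, q) if label == 0 else (q, p)
    out = np.empty(trials)
    for t in range(trials):
        k0 = rng.binomial(n // 2, p_in)    # neighbors of x with label 0
        k1 = rng.binomial(n // 2, p_out)   # neighbors of x with label 1
        m = int(n * lam / 2)               # revealed vertices per class (approx.)
        t0 = np.concatenate([rng.binomial(m, p, k0), rng.binomial(m, q, k1)])
        t1 = np.concatenate([rng.binomial(m, q, k0), rng.binomial(m, p, k1)])
        out[t] = c * np.maximum(a * t0 + ap * t1, 0.0).sum()
    return out.mean()

rng = np.random.default_rng(1)
n, p, q, lam = 1000, 0.3, 0.1, 0.3
a, ap, b_gap = 1.0, 0.5, 1.0               # a good-type (G1) neuron
m0 = simulate_delta_i(0, a, ap, b_gap, n, p, q, lam, 400, rng)
m1 = simulate_delta_i(1, a, ap, b_gap, n, p, q, lam, 400, rng)
predicted_gap = lam * (a - ap) * b_gap * ((p - q) / (p + q)) ** 2
```

For these values the (G1) formula predicts a gap of 0.0375, and since the ReLU is always active for a (G1) neuron the simulated means match the λ(β_i−β′_i)/(p+q)² expressions of Theorem 4.1 closely.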
According to Theorem 2.2, the probability of achieving high accuracy is 1 − O(1/h), and the accuracy increases with n. Indeed, in each row of Figure 3, as h increases we have a larger probability of achieving high accuracy; in each column, as n increases, the model achieves higher accuracy. These results verify the theory in the paper.

8 FUTURE DIRECTIONS

Graph neural networks offer a promising setting for progress on the more general theory of deep learning, because random graph models capture the structure of real-world data more plausibly than, e.g., the Gaussian inputs often used to prove deep learning guarantees for traditional feed-forward neural networks. This paper has initiated the project of proving training guarantees for semi-supervised learning using GCNs on SBM models, but much more work remains to be done. Arguably the sparsest SBM models (expected constant degree) are the most compelling from the perspective of modeling real-world communities, so it would be interesting to extend these results to that setting. Models with more than two blocks, or with overlapping communities (Petti & Vempala, 2018), would be even closer to real-world structure. We hope this initial step spurs further interest in provable guarantees for training neural networks using plausible models of real-world data as the input distribution.

REFERENCES

Emmanuel Abbe and Colin Sandon. Recovering communities in the general stochastic block model without knowing the parameters. arXiv preprint arXiv:1506.03729, 2015.

Emmanuel Abbe, Afonso S. Bandeira, and Georgina Hall. Exact recovery in the stochastic block model, 2014.

Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with gaussian inputs. In International Conference on Machine Learning, pp. 605–614. PMLR, 2017.

Jianfei Chen, Jun Zhu, and Le Song. Stochastic training of graph convolutional networks with variance reduction.
arXiv preprint arXiv:1710.10568, 2017.

Jie Chen, Tengfei Ma, and Cao Xiao. FastGCN: fast learning with graph convolutional networks via importance sampling. arXiv preprint arXiv:1801.10247, 2018.

Sitan Chen, Adam R. Klivans, and Raghu Meka. Learning deep ReLU networks is fixed-parameter tractable, 2020a.

Zhengdao Chen, Xiang Li, and Joan Bruna. Supervised community detection with line graph neural networks, 2020b.

Mikhail Drobyshevskiy and Denis Turdakov. Random graph modeling: A survey of the concepts. ACM Comput. Surv., 52(6):1–36, December 2019.

Hongyang Gao, Zhengyang Wang, and Shuiwang Ji. Large-scale learnable graph convolutional networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1416–1424, 2018.

Vikas Garg, Stefanie Jegelka, and Tommi Jaakkola. Generalization and representational limits of graph neural networks. In International Conference on Machine Learning, pp. 3419–3430. PMLR, 2020.

Rong Ge, Jason D. Lee, and Tengyu Ma. Learning one-hidden-layer neural networks with landscape design. CoRR, abs/1711.00501, 2017.

Surbhi Goel, Aravind Gollakota, Zhihan Jin, Sushrut Karmalkar, and Adam Klivans. Superpolynomial lower bounds for learning one-layer neural networks using gradient descent. In International Conference on Machine Learning, pp. 3587–3596. PMLR, 2020.

Paul W. Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.

Tatsuro Kawamoto, Masashi Tsubaki, and Tomoyuki Obuchi. Mean-field theory of graph neural networks in graph partitioning. Journal of Statistical Mechanics: Theory and Experiment, 2019(12):124007, dec 2019. doi: 10.1088/1742-5468/ab3456. URL https://doi.org/10.1088/1742-5468/ab3456.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks, 2017.

R. Kumar, P.
Raghavan, S. Rajagopalan, D. Sivakumar, A. Tomkins, and E. Upfal. Stochastic models for the web graph. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pp. 57–65, 2000. doi: 10.1109/SFCS.2000.892065.

Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and Leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 4602–4609, 2019.

Samantha Petti and Santosh S. Vempala. Approximating sparse graphs: The random overlapping communities model, 2018.

Ryoma Sato. A survey on the expressive power of graph neural networks, 2020.

Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Approximation ratios of graph neural networks for combinatorial problems. arXiv preprint arXiv:1905.10261, 2019.

Santosh Vempala and John Wilmes. Gradient descent for one-hidden-layer neural networks: Polynomial convergence and SQ lower bounds. In Conference on Learning Theory, pp. 3115–3117. PMLR, 2019.

Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4–24, 2021.

Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.

Prateek Yadav, Madhav Nimishakavi, Naganand Yadati, Shikhar Vashishth, Arun Rajkumar, and Partha Talukdar. Lovasz convolutional networks. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1978–1987. PMLR, 2019.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals.
Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.

Xiao Zhang, Yaodong Yu, Lingxiao Wang, and Quanquan Gu. Learning one-hidden-layer ReLU networks via gradient descent. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1524–1534. PMLR, 2019a.

Yingxue Zhang, Soumyasundar Pal, Mark Coates, and Deniz Ustebay. Bayesian graph convolutional neural networks for semi-supervised classification. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):5829–5836, 2019b.

Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434, 2018.

A CONCENTRATION AND SEPARATION OF OUTPUT

Let N(x, 0), N(x, 1) denote the neighborhoods of vertex x with label 0 and 1 respectively, i.e. N(x, 0) = {y ∈ G : y ∼ x, ℓ(y) = 0} and N(x, 1) = {y ∈ G : y ∼ x, ℓ(y) = 1}. By the definition of the SBM, both |N(x, 0)| and |N(x, 1)| are binomial random variables: for ℓ(x) = 0, |N(x, 0)| ∼ B(n/2, p) and |N(x, 1)| ∼ B(n/2, q), while for ℓ(x) = 1, |N(x, 0)| ∼ B(n/2, q) and |N(x, 1)| ∼ B(n/2, p). Moreover, t_0^y and t_1^y are also binomial random variables: for ℓ(y) = 0, t_0^y ∼ B(nλ/2, p) and t_1^y ∼ B(nλ/2, q), and similarly for ℓ(y) = 1. Our following analysis is based on the condition that |N(x, 0)|, |N(x, 1)|, t_0^x and t_1^x are in their high-probability ranges for all x ∈ G. Specifically, we require that for all x with ℓ(x) = 0 (the analogous conditions for ℓ(x) = 1 are omitted):

| |N(x, 0)| − np/2 | ≤ O((np)^{5/6}),  | |N(x, 1)| − nq/2 | ≤ O((nq)^{5/6});   (Cond)
| t_0^x − nλp/2 | ≤ O((np)^{5/6}),  | t_1^x − nλq/2 | ≤ O((nq)^{5/6}).
By the tail bounds for binomial random variables and a union bound, we have P[(Cond)] ≥ 1 − 1/n². Under this condition, we show the concentration of ∆_i for each type.

A.1 "GOOD TYPE" NEURONS

For convenience, according to the activation pattern of ϕ(α_i t_0^y + α′_i t_1^y), we further divide (G2) into subcases (G2,1), (G2,2) and (G2,3) according to the ratio |α_i/α′_i|. For example, in (G1) and (G2,1), ϕ(α_i t_0^y + α′_i t_1^y) is active for both ℓ(y) = 0 and ℓ(y) = 1; in (G2,2), it is active only for ℓ(y) = 0.

|α_i/α′_i| > (p/q)(1 + log^{−1/3} n)   (G2,1)
1 < |α_i/α′_i| < (p/q)(1 − log^{−1/3} n)   (G2,2)
(p/q)(1 − log^{−1/3} n) ≤ |α_i/α′_i| ≤ (p/q)(1 + log^{−1/3} n)   (G2,3)

We have the following estimate of ∆_i for "good type" neurons.

Theorem A.1 (concentration of output from "good type" neurons). If the i-th neuron is of "good type", then

• in both (G1) and (G2,1):

P[ |∆_i − (λ(β_i−β′_i)/(p+q)²)[(p²+q²)α_i + 2pqα′_i]| ≤ (α_i−α′_i)(β_i−β′_i) O(log^{−1/2} n) | ℓ(x) = 0 ] ≥ 1 − O(1/n²),
P[ |∆_i − (λ(β_i−β′_i)/(p+q)²)[2pqα_i + (p²+q²)α′_i]| ≤ (α_i−α′_i)(β_i−β′_i) O(log^{−1/2} n) | ℓ(x) = 1 ] ≥ 1 − O(1/n²);

• in (G2,2):

P[ |∆_i − (λ(β_i−β′_i)p/(p+q)²)(pα_i + qα′_i)| ≤ (α_i−α′_i)(β_i−β′_i) O(log^{−1/2} n) | ℓ(x) = 0 ] ≥ 1 − O(1/n²),
P[ |∆_i − (λ(β_i−β′_i)q/(p+q)²)(pα_i + qα′_i)| ≤ (α_i−α′_i)(β_i−β′_i) O(log^{−1/2} n) | ℓ(x) = 1 ] ≥ 1 − O(1/n²);

• in (G2,3): the same estimates as in (G2,2) hold.

Proof. We have ∆_i = (β_i − β′_i) Σ_{y∈G} 1[y∼x] · 4ϕ(α_i t_0^y + α′_i t_1^y)/(n²(p+q)²). We apply the method of averaged bounded differences [book] to estimate ∆_i.
In different parameter regimes, ϕ(α_i t_0^y + α′_i t_1^y) has different activation patterns. In (G1) and (G2,1), ϕ(α_i t_0^y + α′_i t_1^y) is active with probability 1 − O(1/n²) for both ℓ(y) = 0 and ℓ(y) = 1. For ℓ(x) = 0, we first estimate E[∆_i]. By condition (Cond),

| E[∆_i] − (λ(β_i−β′_i)/(p+q)²)[(p²+q²)α_i + 2pqα′_i] | ≤ (α_i − α′_i)(β_i − β′_i) O(log^{−1} n).

Let Y_j = 4ϕ(α_i t_0^{y_j} + α′_i t_1^{y_j})/(n²(p+q)²), so that ∆_i = (β_i − β′_i) Σ_j Y_j. Under condition (Cond), |Y_j − 2λ(pα_i + qα′_i)/(n(p+q)²)| ≤ (α_i − α′_i) O(log^{−7/2} n) for ℓ(y_j) = 0. For any a_k, a′_k,

| E[Y_k | Y_1, …, Y_{k−1}, Y_k = a_k] − E[Y_k | Y_1, …, Y_{k−1}, Y_k = a′_k] | ≤ (α_i − α′_i) O(log^{−7/2} n).

Moreover, when the numbers of vertices with revealed labels are fixed,

| E[Y_j | Y_1, …, Y_{k−1}, Y_k = a_k] − E[Y_j | Y_1, …, Y_{k−1}, Y_k = a′_k] | ≤ (α_i − α′_i) O(log^{−6} n), for j ≥ k.

By condition (Cond), there are at most O(log³ n) non-zero terms among the Y_k, 1 ≤ k ≤ n. So

| E[Σ_j Y_j | Y_1, …, Y_{k−1}, Y_k = a_k] − E[Σ_j Y_j | Y_1, …, Y_{k−1}, Y_k = a′_k] | ≤ (α_i − α′_i) O(log^{−7/2} n), for 1 ≤ k ≤ n.

By the method of averaged bounded differences, we have

P[ |∆_i − (λ(β_i−β′_i)/(p+q)²)[(p²+q²)α_i + 2pqα′_i]| ≤ (α_i−α′_i)(β_i−β′_i) O(log^{−1/2} n) | ℓ(x) = 0 ] ≥ 1 − O(1/n²).

The other regimes can be proved similarly.

A.2 "BAD TYPE" NEURONS

For convenience of our analysis, we further divide (B2) into subcases (B2,1), (B2,2) and (B2,3) according to the ratio |α_i/α′_i|:

|α_i/α′_i| > (p/q)(1 + log^{−1/3} n)   (B2,1)
|α_i/α′_i| ∈ ( (q/p)(1 + log^{−1/3} n), (p/q)(1 − log^{−1/3} n) ]   (B2,2)
|α_i/α′_i| ∈ ( (p/q)(1 − log^{−1/3} n), (p/q)(1 + log^{−1/3} n) ]   (B2,3)

We have the following estimate of ∆_i for "bad type" neurons.

Theorem A.2 (concentration of output from "bad type" neurons).
If the i-th neuron is of "bad type", we have:

• in (B1) and (B2,1):

P[ |∆_i − (λ(β_i−β′_i)/(p+q)²)[(p²+q²)α_i + 2pqα′_i]| ≤ |α_i−α′_i||β_i−β′_i| O(log^{−1/2} n) | ℓ(x) = 0 ] ≥ 1 − O(1/n²),
P[ |∆_i − (λ(β_i−β′_i)/(p+q)²)[2pqα_i + (p²+q²)α′_i]| ≤ |α_i−α′_i||β_i−β′_i| O(log^{−1/2} n) | ℓ(x) = 1 ] ≥ 1 − O(1/n²);

• in (B2,2):

P[ |∆_i − (λ(β_i−β′_i)p/(p+q)²)(pα_i + qα′_i)| ≤ |α_i−α′_i||β_i−β′_i| O(log^{−1/2} n) | ℓ(x) = 0 ] ≥ 1 − O(1/n²),
P[ |∆_i − (λ(β_i−β′_i)q/(p+q)²)(pα_i + qα′_i)| ≤ |α_i−α′_i||β_i−β′_i| O(log^{−1/2} n) | ℓ(x) = 1 ] ≥ 1 − O(1/n²);

• in (B2,3): the same estimates as in (B2,2) hold;

• in (B3):

P[ |∆_i| ≤ |α_i−α′_i||β_i−β′_i| O(log^{−1/2} n) | ℓ(x) = 0 or 1 ] ≥ 1 − O(1/n²).

Proof. The proof is similar to that of Theorem A.1.

A.3 "HARMLESS TYPE" NEURONS

We have the following estimate of ∆_i for "harmless type" neurons.

Theorem A.3 (concentration of output from "harmless type" neurons). If the i-th neuron is of "harmless type", we have:

• in (H1):

P[ |∆_i − (λ(β_i−β′_i)p/(p+q)²)(pα_i + qα′_i)| ≤ (α_i−α′_i)(β_i−β′_i) O(log^{−1/2} n) | ℓ(x) = 0 ] ≥ 1 − O(1/n²),
P[ |∆_i − (λ(β_i−β′_i)q/(p+q)²)(pα_i + qα′_i)| ≤ (α_i−α′_i)(β_i−β′_i) O(log^{−1/2} n) | ℓ(x) = 1 ] ≥ 1 − O(1/n²);

• in (H2): ∆_i = 0 for both ℓ(x) = 0 and ℓ(x) = 1.

A.4 SEPARATION OF OUTPUT

The previous subsections established the concentration of ∆_i for each type of neuron. For the i-th neuron, we write the concentrated value as m_0^i if ℓ(x) = 0 and as m_1^i if ℓ(x) = 1. From Theorems A.1, A.2 and A.3, we obtain the following result about m_0^i − m_1^i by straightforward computation.

Corollary A.4.
We have the following result about m_0^i − m_1^i for 1 ≤ i ≤ h:

• if the i-th neuron is of "good type":

in (G1) and (G2,1): m_0^i − m_1^i = λ|α_i−α′_i||β_i−β′_i| ((p−q)/(p+q))²;
in (G2,2): m_0^i − m_1^i = λ|β_i−β′_i||pα_i+qα′_i| (p−q)/(p+q)² ≥ (λ/2)|α_i−α′_i||β_i−β′_i| ((p−q)/(p+q))²;
in (G2,3): m_0^i − m_1^i = λ|β_i−β′_i||pα_i+qα′_i| (p−q)/(p+q)² ≥ λ|α_i−α′_i||β_i−β′_i| (p−q)Λ_3/(p+q)²,

where Λ_3 = ((1 − log^{−1/3} n)p² − q²)/((1 − log^{−1/3} n)p + q).

• if the i-th neuron is of "bad type":

in (B1) and (B2,1): m_0^i − m_1^i = −λ|α_i−α′_i||β_i−β′_i| ((p−q)/(p+q))²;
in (B2,2): m_0^i − m_1^i = −λ|β_i−β′_i||pα_i+qα′_i| (p−q)/(p+q)² ≥ −λ|α_i−α′_i||β_i−β′_i| (p−q)Λ_3/(p+q)²;
in (B2,3): m_0^i − m_1^i = −λ|β_i−β′_i||pα_i+qα′_i| (p−q)/(p+q)² ≥ −λ|α_i−α′_i||β_i−β′_i| (p−q)Λ_1/(p+q)²;
in (B3): m_0^i − m_1^i = 0,

where Λ_1 = ((1 + log^{−1/3} n)p² − q²)/((1 + log^{−1/3} n)p + q) and Λ_3 is as above.

• if the i-th neuron is of "harmless type":

in (H1): m_0^i − m_1^i = λ|β_i−β′_i||pα_i+qα′_i| (p−q)/(p+q)² ≥ λ|α_i−α′_i||β_i−β′_i| (p−q)Λ_5/(p+q)²;
in (H2): m_0^i − m_1^i = 0,

where Λ_5 = pq log^{−1/3} n / (p + (1 + log^{−1/3} n)q).

B DYNAMICS OF PARAMETERS

We consider the cross-entropy loss in training. The loss on a particular vertex x is L(x) = −log O_{ℓ(x)}(x), where O_0(x) and O_1(x) are the first and second components of the output respectively, i.e.

O_0(x) = exp(g_0(x))/(exp(g_0(x)) + exp(g_1(x))),  O_1(x) = exp(g_1(x))/(exp(g_0(x)) + exp(g_1(x))).

For a given graph G generated by the SBM, we set the objective function L(G) to be the average loss over the vertices with revealed labels², i.e.

L(G) = (1/#{x ∈ G : ℓ(x) is revealed}) Σ_{x: ℓ(x) revealed} L(x).

We first give the partial derivatives with respect to the parameters.

Theorem B.1 (derivatives of parameters).
For 1 ≤ i ≤ h, let x be a vertex with true label ℓ(x) and L(x) = −log O_{ℓ(x)}(x). Then

∂L/∂α_i = (4/(n²(p+q)²)) (β_i − β′_i) Z (−1)^{1−ℓ(x)} Σ_y 1[y∼x] 1[α_i t_0^y + α′_i t_1^y ≥ 0] t_0^y,
∂L/∂α′_i = (4/(n²(p+q)²)) (β_i − β′_i) Z (−1)^{1−ℓ(x)} Σ_y 1[y∼x] 1[α_i t_0^y + α′_i t_1^y ≥ 0] t_1^y,
∂L/∂β_i = (4/(n²(p+q)²)) Z (−1)^{1−ℓ(x)} Σ_y 1[y∼x] ϕ(α_i t_0^y + α′_i t_1^y),
∂L/∂β′_i = −(4/(n²(p+q)²)) Z (−1)^{1−ℓ(x)} Σ_y 1[y∼x] ϕ(α_i t_0^y + α′_i t_1^y),
∂L/∂b_0 = (−1)^{1−ℓ(x)} Z,
∂L/∂b_1 = (−1)^{ℓ(x)} Z,

where Z = exp(g_{1−ℓ(x)}(x))/(exp(g_0(x)) + exp(g_1(x))), and t_0^y, t_1^y are the numbers of neighbors of y (possibly including y itself) with revealed labels in class 0 and class 1 respectively.

Proof. We compute ∂L/∂α_i, ∂L/∂β_i and ∂L/∂b_0; the others are computed symmetrically. We have L(x) = −log O_{ℓ(x)}(x) = log(exp(g_0(x)) + exp(g_1(x))) − g_{ℓ(x)}(x), since O_j(x) = exp(g_j(x))/(exp(g_0(x)) + exp(g_1(x))), j = 0, 1. So

∂L/∂α_i = ( e^{g_0(x)} ∂g_0(x)/∂α_i + e^{g_1(x)} ∂g_1(x)/∂α_i )/( e^{g_0(x)} + e^{g_1(x)} ) − ∂g_{ℓ(x)}(x)/∂α_i = (−1)^{1−ℓ(x)} Z ( ∂g_0(x)/∂α_i − ∂g_1(x)/∂α_i ).

Since g_j(x) = f_j(x) + b_j, we have ∂g_j(x)/∂α_i = ∂f_j(x)/∂α_i, j = 0, 1. By (2),

∂f_0(x)/∂α_i = (4/(n²(p+q)²)) Σ_y 1[y∼x] β_i 1[α_i t_0^y + α′_i t_1^y ≥ 0] t_0^y,  ∂f_1(x)/∂α_i = (4/(n²(p+q)²)) Σ_y 1[y∼x] β′_i 1[α_i t_0^y + α′_i t_1^y ≥ 0] t_0^y.

Therefore

∂L/∂α_i = (4/(n²(p+q)²)) (−1)^{1−ℓ(x)} Z (β_i − β′_i) Σ_y 1[y∼x] 1[α_i t_0^y + α′_i t_1^y ≥ 0] t_0^y.

Next we compute ∂L/∂β_i. As above,

∂L/∂β_i = (−1)^{1−ℓ(x)} Z ( ∂f_0(x)/∂β_i − ∂f_1(x)/∂β_i ).

By (2),

∂f_0(x)/∂β_i = (4/(n²(p+q)²)) Σ_y 1[y∼x] ϕ(α_i t_0^y + α′_i t_1^y),  ∂f_1(x)/∂β_i = 0.

So

∂L/∂β_i = (4/(n²(p+q)²)) (−1)^{1−ℓ(x)} Z Σ_y 1[y∼x] ϕ(α_i t_0^y + α′_i t_1^y).

²We abuse the notation L for both L(x) and L(G), but the meaning is clear from context.
Lastly,

∂L/∂b_0 = (−1)^{1−ℓ(x)} Z ( ∂g_0(x)/∂b_0 − ∂g_1(x)/∂b_0 ) = (−1)^{1−ℓ(x)} Z,

since ∂g_0(x)/∂b_0 = 1 and ∂g_1(x)/∂b_0 = 0.

In the following, we use Theorem B.1 to analyze the dynamics of each type of neuron. As we can see, each of ∂L/∂α_i, ∂L/∂α′_i, ∂L/∂β_i and ∂L/∂β′_i has the form Y Z. In order to estimate these derivatives, we show the concentration of Y and Z respectively. To estimate the concentration of Z, we need the concentration of the output obtained in Section 4. For any ϵ > 0, we require the probability of concentration in (3) to be at least 1 − ϵ̃, where ϵ̃ = o(ϵ). In particular, if we choose ϵ̃ = ϵ², then we set

1 − O(1/n) ≥ 1 − ϵ², i.e. n ≥ Ω((1/ϵ)²).   (5)

Our following analysis is based on this condition. Meanwhile, in order to have balanced performance in each epoch of coordinate descent, we require |E[∂L/∂b_0]| < ϵ̃/2. Since E[∂L/∂b_0] = (1/2)( −E[Z | ℓ(x) = 0] + E[Z | ℓ(x) = 1] ), we have

|E[Z | ℓ(x) = 0] − E[Z | ℓ(x) = 1]| < ϵ̃.   (6)

We have the following relation between µ_0 and µ_1. In what follows, σ denotes the sigmoid function σ(x) = 1/(1 + e^{−x}).

Proposition B.2. If |E[Z | ℓ(x) = 0] − E[Z | ℓ(x) = 1]| < ϵ̃, then |σ(−µ_0) − σ(µ_1)| ≤ σ′(µ_0 − δ)δ + σ′(µ_1 + δ)δ + 3ϵ̃, where δ is as in Corollary 4.2.

Proof. We have

Z = σ(−∆) if ℓ(x) = 0,  Z = σ(∆) if ℓ(x) = 1.

For ℓ(x) = 0, by the Lagrange mean value theorem, |σ(−µ_0) − Z| = |σ(−µ_0) − σ(−∆)| = σ′(ξ)|∆ − µ_0|, where ξ is between −µ_0 and −∆. By Corollary 4.2 and the condition on n, |∆ − µ_0| ≤ δ with probability ≥ 1 − ϵ̃. From the remark following Corollary A.4, we have σ′(ξ) ≤ σ′(−µ_0 + δ) = σ′(µ_0 − δ)³ with probability ≥ 1 − ϵ̃. Then we have

P[ |σ(−µ_0) − Z| ≤ σ′(µ_0 − δ)δ | ℓ(x) = 0 ] ≥ 1 − ϵ̃.
Since

E[Z | ℓ(x) = 0] = E[Z | |σ(−µ_0) − Z| ≤ σ′(µ_0 − δ)δ] P[|σ(−µ_0) − Z| ≤ σ′(µ_0 − δ)δ] + E[Z | |σ(−µ_0) − Z| > σ′(µ_0 − δ)δ] P[|σ(−µ_0) − Z| > σ′(µ_0 − δ)δ],

we obtain (note that 0 < Z < 1)

E[Z | ℓ(x) = 0] ≤ σ(−µ_0) + σ′(µ_0 − δ)δ + ϵ̃

and

E[Z | ℓ(x) = 0] ≥ (σ(−µ_0) − σ′(µ_0 − δ)δ)(1 − ϵ̃),

i.e.,

E[Z | ℓ(x) = 0] − σ(−µ_0) ≤ σ′(µ_0 − δ)δ + ϵ̃

and

E[Z | ℓ(x) = 0] − σ(−µ_0) ≥ −σ′(µ_0 − δ)δ − ϵ̃(σ(−µ_0) − σ′(µ_0 − δ)δ) = −σ′(µ_0 − δ)δ(1 − ϵ̃) − ϵ̃ σ(−µ_0) ≥ −σ′(µ_0 − δ)δ − ϵ̃.

So |E[Z | ℓ(x) = 0] − σ(−µ_0)| ≤ σ′(µ_0 − δ)δ + ϵ̃. Similarly, |E[Z | ℓ(x) = 1] − σ(µ_1)| ≤ σ′(µ_1 + δ)δ + ϵ̃. By the triangle inequality,

|σ(−µ_0) − σ(µ_1)| ≤ |σ(−µ_0) − E[Z | ℓ(x) = 0]| + |E[Z | ℓ(x) = 0] − E[Z | ℓ(x) = 1]| + |E[Z | ℓ(x) = 1] − σ(µ_1)| ≤ σ′(µ_0 − δ)δ + σ′(µ_1 + δ)δ + 3ϵ̃.

³σ′(x) is even.

From the proof above, we directly obtain the following corollary about Z.

Corollary B.3.

P[|Z − E[Z | ℓ(x) = 0]| ≤ 2σ′(µ_0 − δ)δ + ϵ̃ | ℓ(x) = 0] ≥ 1 − ϵ̃,
P[|Z − E[Z | ℓ(x) = 1]| ≤ 2σ′(µ_1 + δ)δ + ϵ̃ | ℓ(x) = 1] ≥ 1 − ϵ̃.

In order to obtain the concentration of Z, we need to estimate σ′(µ_0 − δ)δ and σ′(µ_1 + δ)δ. The following proposition is based on the condition |µ_0 + µ_1| ≥ 4δ. If |µ_0 + µ_1| < 4δ, set c := µ_0 − µ_1; then µ_0 > c/2 − 2δ and µ_1 < −c/2 + 2δ. In this case the concentration of the output shown in Corollary 4.2 already guarantees 1 − ϵ accuracy of the model for any ϵ > 0. Indeed, from |∆ − µ_0| < δ we have ∆ > µ_0 − δ > c/2 − 3δ > 0, due to δ = o(c). So

P[∆ > 0 | ℓ(x) = 0] ≥ P[|∆ − µ_0| < δ | ℓ(x) = 0] ≥ 1 − ϵ̃.

Similarly,

P[∆ < 0 | ℓ(x) = 1] ≥ P[|∆ − µ_1| < δ | ℓ(x) = 1] ≥ 1 − ϵ̃.

Since ϵ̃ = o(ϵ), the model achieves overall accuracy ≥ 1 − ϵ.

Proposition B.4. If |µ_0 + µ_1| ≥ 4δ, then σ′(µ_0 − δ)δ = O(ϵ̃) and σ′(µ_1 + δ)δ = O(ϵ̃).

Proof.
First, we estimate a lower bound on |σ(−µ_0) − σ(µ_1)| via the fundamental theorem of calculus. We have

|σ(−µ_0) − σ(µ_1)| = |∫_(−µ_0)^(µ_1) σ′(t) dt|.

If −µ_0 < µ_1 < 0, since µ_0 + µ_1 ≥ 4δ, we divide the interval [−µ_0, µ_1] into [−µ_0, −µ_0 + 2δ] ∪ [−µ_0 + 2δ, µ_1 − 2δ] ∪ [µ_1 − 2δ, µ_1] and estimate a lower bound on the integral. Since σ′(x) is increasing on (−∞, 0], we have

∫_(−µ_0)^(µ_1) σ′(t) dt ≥ σ′(−µ_0) · 2δ + I_1 + σ′(µ_1 − 2δ) · 2δ,   (7)

where I_1 = ∫_(−µ_0+2δ)^(µ_1−2δ) σ′(t) dt. If µ_1 < −µ_0 < 0, similarly we have

∫_(µ_1)^(−µ_0) σ′(t) dt ≥ σ′(µ_1) · 2δ + I_2 + σ′(−µ_0 − 2δ) · 2δ,   (8)

where I_2 = ∫_(µ_1+2δ)^(−µ_0−2δ) σ′(t) dt. From (7) and (8) we obtain the uniform lower bound

|∫_(−µ_0)^(µ_1) σ′(t) dt| ≥ σ′(−µ_0 − 2δ) · 2δ + σ′(µ_1 − 2δ) · 2δ + I,   (9)

where I = min{I_1, I_2}. Furthermore, by Proposition B.2,

|σ(−µ_0) − σ(µ_1)| ≤ σ′(µ_0 − δ)δ + σ′(µ_1 + δ)δ + 3ϵ̃.   (10)

Combining (9) and (10):

2σ′(−µ_0 − 2δ)δ + 2σ′(µ_1 − 2δ)δ ≤ σ′(−µ_0 + δ)δ + σ′(µ_1 + δ)δ + 3ϵ̃.   (11)

By the Lagrange mean value theorem,

σ′(−µ_0 − 2δ) = σ′(−µ_0 + δ) − 3σ′′(ξ_0)δ,   σ′(µ_1 − 2δ) = σ′(µ_1 + δ) − 3σ′′(ξ_1)δ,

where ξ_0 ∈ (−µ_0 − 2δ, −µ_0 + δ) and ξ_1 ∈ (µ_1 − 2δ, µ_1 + δ). Plugging these into (11):

σ′(µ_0 − δ)δ + σ′(µ_1 + δ)δ − 6δ²(σ′′(ξ_0) + σ′′(ξ_1)) ≤ 3ϵ̃.

Since δ²(σ′′(ξ_0) + σ′′(ξ_1)) is o(σ′(µ_0 − δ)δ) and o(σ′(µ_1 + δ)δ), we have

σ′(µ_0 − δ)δ = O(ϵ̃),   σ′(µ_1 + δ)δ = O(ϵ̃).

Combining Proposition B.4 and Corollary B.3, we obtain the following concentration of Z.

Proposition B.5.

P[|Z − E[Z | ℓ(x) = 0]| ≤ O(ϵ̃) | ℓ(x) = 0] ≥ 1 − ϵ̃,
P[|Z − E[Z | ℓ(x) = 1]| ≤ O(ϵ̃) | ℓ(x) = 1] ≥ 1 − ϵ̃.

Under the condition of balanced performance, we have the following corollary about the concentration of Z, independent of the label of x.

Corollary B.6. If |E[∂L/∂b_0]| ≤ ϵ̃/2, then P[|Z − E[Z]| ≤ O(ϵ̃)] ≥ 1 − ϵ̃.
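The event-splitting bound used repeatedly in these concentration arguments — if 0 < Z < 1 and Z lies within r of a value z* with probability ≥ 1 − e, then |E[Z] − z*| ≤ r + e — can be illustrated by Monte Carlo; all constants below are illustrative:

```python
import random

random.seed(2)
z_star, r, e = 0.3, 0.05, 0.02
N = 100000
total = 0.0
for _ in range(N):
    if random.random() < 1 - e:                  # good event, probability 1 - e
        total += z_star + random.uniform(-r, r)  # Z within r of z_star
    else:
        total += random.random()                 # arbitrary value in (0, 1)
EZ = total / N
assert abs(EZ - z_star) <= r + e                 # |E[Z] - z*| <= r + e
```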
Proof. Since E[∂L/∂b_0] = (1/2)(E[Z | ℓ(x) = 0] − E[Z | ℓ(x) = 1]), we have

|E[Z | ℓ(x) = 0] − E[Z | ℓ(x) = 1]| ≤ ϵ̃.

On the other hand,

E[Z] = (1/2)(E[Z | ℓ(x) = 0] + E[Z | ℓ(x) = 1]).

So |E[Z] − E[Z | ℓ(x) = 0]| ≤ ϵ̃/2. By Proposition B.5, P[|Z − E[Z]| ≤ O(ϵ̃)] ≥ 1 − ϵ̃.

Now we can derive the estimates of the derivatives.

Theorem B.7 (concentration of derivatives). For the loss on the whole graph L = L(G), with probability ≥ 1 − O(1/n), we have:⁴

1. If α_i > α'_i > 0, or α_i > 0 > α'_i with |α_i/α'_i| ≥ (p/q)(1 + log^(−1/3) n), then

|∂L/∂α_i + (β_i − β'_i) (λ/2) ((p−q)/(p+q))² E[Z]| ≤ |β_i − β'_i| E[Z] O(log^(−1/2) n),   (12)

|∂L/∂α'_i − (β_i − β'_i) (λ/2) ((p−q)/(p+q))² E[Z]| ≤ |β_i − β'_i| E[Z] O(log^(−1/2) n),   (13)

|∂L/∂β_i + (α_i − α'_i) (λ/2) ((p−q)/(p+q))² E[Z]| ≤ |α_i − α'_i| E[Z] O(log^(−1/2) n).   (14)

2. If α_i > 0 > α'_i and |α_i/α'_i| ∈ [(q/p)(1 + γ), (p/q)(1 − log^(−1/3) n)], where γ ∈ [log^(−1/3) n, (p/q)²(1 − log^(−1/3) n) − 1], then

|∂L/∂α_i + (β_i − β'_i) λp(p−q)/(2(p+q)²) E[Z]| ≤ |β_i − β'_i| E[Z] O(log^(−1/2) n),   (15)

|∂L/∂α'_i + (β_i − β'_i) λq(p−q)/(2(p+q)²) E[Z]| ≤ |β_i − β'_i| E[Z] O(log^(−1/2) n),   (16)

|∂L/∂β_i + λ(p−q)(pα_i + qα'_i)/(2(p+q)²) E[Z]| ≤ |α_i − α'_i| E[Z] O(log^(−1/2) n).   (17)

⁴Since ∂L/∂β'_i = −∂L/∂β_i (see Theorem B.1), we only need to estimate ∂L/∂β_i.

3.
If α_i > 0 > α'_i and |α_i/α'_i| ∈ ((p/q)(1 − log^(−1/3) n), (p/q)(1 + log^(−1/3) n)), then

∂L/∂β_i ∈ [−(α_i − α'_i) E[Z] (λ(p−q)Λ_1/(2(p+q)²) + O(log^(−1/2) n)), −(α_i − α'_i) E[Z] (λ(p−q)(Λ_3 − Λ_2)/(2(p+q)²) − O(log^(−1/2) n))],   (18)

where Λ_1 = ((1 + log^(−1/3) n)p² − q²)/((1 + log^(−1/3) n)p + q), Λ_2 = pq log^(−1/3) n/((1 + log^(−1/3) n)p + q) and Λ_3 = ((1 − log^(−1/3) n)p² − q²)/((1 − log^(−1/3) n)p + q). Moreover:

• if β_i > β'_i,

∂L/∂α_i ∈ [−(β_i − β'_i) E[Z] (λp(p−q)/(2(p+q)²) + O(log^(−1/2) n)), −(β_i − β'_i) E[Z] ((λ/2)((p−q)/(p+q))² − O(log^(−1/2) n))],   (19)

∂L/∂α'_i ∈ [−(β_i − β'_i) E[Z] (λq(p−q)/(2(p+q)²) + O(log^(−1/2) n)), (β_i − β'_i) E[Z] ((λ/2)((p−q)/(p+q))² + O(log^(−1/2) n))],   (20)

∂L/∂α_i − ∂L/∂α'_i ∈ [−(β_i − β'_i) E[Z] (λ((p−q)/(p+q))² + O(log^(−1/2) n)), −(β_i − β'_i) E[Z] ((λ/2)((p−q)/(p+q))² − O(log^(−1/2) n))];   (21)

• if β_i ≤ β'_i,

∂L/∂α_i ∈ [−(β_i − β'_i) E[Z] ((λ/2)((p−q)/(p+q))² − O(log^(−1/2) n)), −(β_i − β'_i) E[Z] (λp(p−q)/(2(p+q)²) + O(log^(−1/2) n))],   (22)

∂L/∂α'_i ∈ [−(β_i − β'_i) E[Z] ((λ/2)((p−q)/(p+q))² + O(log^(−1/2) n)), (β_i − β'_i) E[Z] (λq(p−q)/(2(p+q)²) + O(log^(−1/2) n))],   (23)

∂L/∂α_i − ∂L/∂α'_i ∈ [−(β_i − β'_i) E[Z] ((λ/2)((p−q)/(p+q))² − O(log^(−1/2) n)), −(β_i − β'_i) E[Z] (λ((p−q)/(p+q))² + O(log^(−1/2) n))].   (24)

4. If α_i > 0 > α'_i, |α_i/α'_i| ≤ (q/p)(1 + log^(−1/3) n) and β_i < β'_i, then

∂L/∂α_i − ∂L/∂α'_i ≥ −|β_i − β'_i| O(ϵ̃),   (25)

∂L/∂β_i ≤ O(ϵ̃).   (26)

Proof. We give the proof for item 1; the other items can be proved similarly. Since L(G) is the average of the losses over revealed vertices, we first show the concentration of ∂L(x)/∂α_i, and then the concentration of ∂L(G)/∂α_i via a union bound.
Since

∂L(x)/∂α_i = (−1)^(1−ℓ(x)) (4(β_i − β'_i) Z/(n²(p+q)²)) Σ_y 1[y ∼ x] 1[α_i t_0^y + α'_i t_1^y ≥ 0] t_0^y,

we first show the concentration of

Y := (−1)^(1−ℓ(x)) Σ_(y∼x) 4 · 1[α_i t_0^y + α'_i t_1^y ≥ 0] t_0^y / (n²(p+q)²)

using the method of averaged bounded differences. As in the proof of Theorem A.1, let

Y_j = (−1)^(1−ℓ(x)) 4 · 1[α_i t_0^(y_j) + α'_i t_1^(y_j) ≥ 0] t_0^(y_j) / (n²(p+q)²).

Under Condition (Cond), for ℓ(x) = 0 and ℓ(y_j) = 0, we have |Y_j + 2λp/(n(p+q)²)| ≤ O(log^(−7/2) n). Similar results hold for ℓ(y_j) = 1 and for ℓ(x) = 1. So for any a_k, a'_k,

|E[Σ_j Y_j | Y_1, …, Y_(k−1), Y_k = a_k] − E[Σ_j Y_j | Y_1, …, Y_(k−1), Y_k = a'_k]| ≤ (α_i − α'_i) O(log^(−7/2) n).

By the method of averaged bounded differences, for ℓ(x) = 0,

P[|Σ_(ℓ(y_j)=0) Y_j + λ(p/(p+q))²| ≤ O(log^(−1/2) n)] ≥ 1 − exp(−2 log³ n) ≥ 1 − 1/n².

Similarly,

P[|Σ_(ℓ(y_j)=1) Y_j + λ(q/(p+q))²| ≤ O(log^(−1/2) n)] ≥ 1 − 1/n².

Hence

P[|Y + λ(p² + q²)/(p+q)²| ≤ O(log^(−1/2) n)] ≥ 1 − 1/n².

By Corollary B.6, P[|Z − E[Z]| ≤ O(ϵ̃)] ≥ 1 − ϵ̃, so we have

P[|∂L(x)/∂α_i + (β_i − β'_i) λ(p² + q²)/(p+q)² E[Z]| ≤ |β_i − β'_i| E[Z] O(log^(−1/2) n) | ℓ(x) = 0] ≥ 1 − O(1/n²).

For ℓ(x) = 1, similarly we have

P[|∂L(x)/∂α_i − (β_i − β'_i) λ(2pq)/(p+q)² E[Z]| ≤ |β_i − β'_i| E[Z] O(log^(−1/2) n) | ℓ(x) = 1] ≥ 1 − O(1/n²).

By a union bound, we obtain (12); (13) and (14) can be proved similarly.

Using Theorem B.7, we can analyze the dynamics of neurons of each type. First, we introduce some notation. Let η_k denote the learning rate at the k-th epoch, and let Z^(k) be the value of Z at the k-th epoch; similarly, let α_i^(k), α'_i^(k), β_i^(k) and β'_i^(k) be the values of α_i, α'_i, β_i and β'_i at the k-th epoch. In particular, α_i^(0), α'_i^(0), β_i^(0) and β'_i^(0) are the values at initialization.
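The bounded-differences step above can be illustrated in its simplest special case, independent bounded terms, where it reduces to Hoeffding's inequality: the deviation probability of a sum of terms each confined to an interval of length c_i is at most 2 exp(−2t²/Σ_i c_i²). The constants below are illustrative:

```python
import math
import random

random.seed(0)
n_terms, trials, t = 1000, 500, 0.05
c = 1.0 / n_terms                    # each term lies in [0, c]
mean = n_terms * c / 2
fails = 0
for _ in range(trials):
    s = sum(random.uniform(0, c) for _ in range(n_terms))
    if abs(s - mean) > t:
        fails += 1
# Hoeffding bound: P(|S - E S| > t) <= 2 exp(-2 t^2 / (n c^2))
hoeffding = 2 * math.exp(-2 * t * t / (n_terms * c * c))
assert fails / trials <= hoeffding + 0.02   # empirical rate below the bound
```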
B.1 “GOOD TYPE” NEURONS

In this section, we show that “good type” neurons stay in the “good type” regime throughout coordinate descent (Theorem B.8), using Theorem B.7.

Theorem B.8. “Good type” neurons are preserved in the “good type” regime throughout coordinate descent, with probability ≥ 1 − O(1/n²) over the SBM randomness.

Proof. As shown in Section 4, the “good type” regime is composed of (G1) and (G2); we analyze the dynamics of neurons in (G1) and (G2) respectively.

Assume that the neuron (α_i^(k), α'_i^(k), β_i^(k), β'_i^(k)) is in (G1); we show that it either stays in (G1) or moves into (G2) throughout coordinate descent. In fact, by (14), with probability ≥ 1 − O(1/n²), ∂L/∂β_i^(k) < 0 < ∂L/∂β'_i^(k), so β_i^(k+1) > β_i^(k), β'_i^(k+1) < β'_i^(k), and hence β_i^(k+1) − β'_i^(k+1) > β_i^(k) − β'_i^(k) > 0. By (12) and (13), ∂L/∂α_i^(k) < 0 < ∂L/∂α'_i^(k), so α_i^(k+1) > α_i^(k) and α'_i^(k+1) < α'_i^(k). If α'_i^(k+1) > 0, the neuron stays in (G1). If α'_i^(k+1) < 0, since

|α_i^(k+1)/α'_i^(k+1)| = |(α_i^(k) − η_k ∂L/∂α_i^(k)) / (α'_i^(k) − η_k ∂L/∂α'_i^(k))| > 1,

the neuron moves into (G2).

Assume now that the neuron is in (G2); we show that it either moves into (G1) or stays in (G2). As shown in Section 3.2, (G2) = (G2,1) ∪ (G2,2) ∪ (G2,3). If the neuron is in (G2,1), again by (12), (13) and (14), α_i^(k+1) > α_i^(k) > 0 > α'_i^(k) > α'_i^(k+1), |α_i^(k+1)/α'_i^(k+1)| > 1, and β_i^(k+1) > β_i^(k) > β'_i^(k) > β'_i^(k+1), so the neuron stays in (G2,2). If the neuron is in (G2,2), by (15), (16) and (17), ∂L/∂β_i^(k) < 0 < ∂L/∂β'_i^(k), so β_i^(k+1) > β'_i^(k+1). Also, ∂L/∂α_i^(k) < ∂L/∂α'_i^(k) < 0, so α_i^(k+1) > α'_i^(k+1) and |α_i^(k+1)/α'_i^(k+1)| > |α_i^(k)/α'_i^(k)| > 1. If α'_i^(k+1) < 0, the neuron stays in (G2).
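Before treating the remaining cases, the (G1) dynamics just described can be mimicked with a toy coordinate-descent iteration in which each gradient is replaced by its leading-order value from item 1 of Theorem B.7. All constants (λ, p, q, E[Z], the learning rate, and the initialization) are illustrative, not values from the paper's experiments:

```python
lam, p, q, EZ, eta = 0.3, 0.02, 0.01, 0.4, 0.5      # illustrative constants
alpha, alpha_p, beta, beta_p = 1.0, 0.5, 1.0, 0.2   # (G1): alpha > alpha' > 0, beta > beta'
coef = (lam / 2) * ((p - q) / (p + q)) ** 2 * EZ
for _ in range(50):
    dA = -coef * (beta - beta_p)     # dL/dalpha  < 0, leading order of (12)
    dAp = coef * (beta - beta_p)     # dL/dalpha' > 0, leading order of (13)
    dB = -coef * (alpha - alpha_p)   # dL/dbeta   < 0, leading order of (14)
    dBp = coef * (alpha - alpha_p)   # dL/dbeta'  > 0
    alpha, alpha_p = alpha - eta * dA, alpha_p - eta * dAp
    beta, beta_p = beta - eta * dB, beta_p - eta * dBp
# both gaps grow monotonically, so the neuron never leaves the good-type regime
assert alpha - alpha_p > 0.5 and beta - beta_p > 0.8
```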
If α'_i^(k+1) > 0, it moves into (G1). If the neuron is in (G2,3), by (18) and (21), ∂L/∂β_i^(k) < 0 < ∂L/∂β'_i^(k) and ∂L/∂α_i^(k) − ∂L/∂α'_i^(k) < 0, so β_i^(k+1) > β_i^(k) > β'_i^(k) > β'_i^(k+1) and α_i^(k+1) − α'_i^(k+1) > α_i^(k) − α'_i^(k) > 0. By (19) and (20),

∂L/∂α_i ≤ −(β_i − β'_i) E[Z] ((λ/2)((p−q)/(p+q))² − O(log^(−1/2) n)),
∂L/∂α'_i ≤ (β_i − β'_i) E[Z] ((λ/2)((p−q)/(p+q))² + O(log^(−1/2) n)).

As in (G2,2), if α'_i^(k+1) < 0, the neuron stays in (G2); if α'_i^(k+1) > 0, it moves into (G1).

B.2 “BAD TYPE” NEURONS

As shown in Section 4, neurons of “bad type” fall into two cases, B1 and B2, where B2 = B2,1 ∪ B2,2 ∪ B2,3 ∪ B3. Since the output in B3 is concentrated at 0 (see Theorem A.2), we need not worry if neurons move into this region. Neurons in B1 ∪ B2,1 ∪ B2,2 ∪ B2,3 might exit the “bad type” regime and become “harmless” or “good” (if the neuron becomes order-aligned), which does no harm to the performance of the model. If they stay in B1 ∪ B2,1 ∪ B2,2 ∪ B2,3, the following theorem shows that the separation m_0^i − m_1^i can be upper bounded by its value at initialization. In fact, Theorem A.4 shows that m_0^i − m_1^i is proportional to |α_i − α'_i||β_i − β'_i|. The next theorem shows that both |α_i − α'_i| and |β_i − β'_i| shrink throughout coordinate descent. The worst situation is that the magnitudes |α_i − α'_i| and |β_i − β'_i| of neurons in B3 increase and the neurons move into B1 or B2 at a certain epoch. From Theorem B.7 we see that these magnitudes can only increase at a limited rate (we see this more explicitly in Theorem 6.2).

Theorem B.9. If (α_i^(k), α'_i^(k), β_i^(k), β'_i^(k)) is in B1 ∪ B2,1 ∪ B2,2 ∪ B2,3, then with probability ≥ 1 − O(1/n²) over the SBM randomness, |α_i^(k+1) − α'_i^(k+1)| ≤ |α_i^(k) − α'_i^(k)| and |β_i^(k+1) − β'_i^(k+1)| ≤ |β_i^(k) − β'_i^(k)|.

Proof.
In B1 and B2,1, by (12) and (13), ∂L/∂α_i^(k) > 0 > ∂L/∂α'_i^(k), so α_i^(k+1) < α_i^(k) and α'_i^(k+1) > α'_i^(k), hence |α_i^(k+1) − α'_i^(k+1)| ≤ |α_i^(k) − α'_i^(k)|. Similarly, by (14), ∂L/∂β_i^(k) = −∂L/∂β'_i^(k) < 0, so |β_i^(k+1) − β'_i^(k+1)| ≤ |β_i^(k) − β'_i^(k)| (note that α_i^(k) > α'_i^(k) and β_i^(k) < β'_i^(k)).

In B2,2, from (15) and (16) we have ∂L/∂α_i^(k) > ∂L/∂α'_i^(k) > 0, so |α_i^(k+1) − α'_i^(k+1)| ≤ |α_i^(k) − α'_i^(k)|. On the other hand, ∂L/∂β_i^(k) < 0 < ∂L/∂β'_i^(k), so β_i^(k+1) > β_i^(k), β'_i^(k+1) < β'_i^(k), and |β_i^(k+1) − β'_i^(k+1)| ≤ |β_i^(k) − β'_i^(k)|.

In B2,3, by (24), ∂L/∂α_i^(k) − ∂L/∂α'_i^(k) > 0, so |α_i^(k+1) − α'_i^(k+1)| ≤ |α_i^(k) − α'_i^(k)|. By (18), ∂L/∂β_i^(k) < 0 < ∂L/∂β'_i^(k), so |β_i^(k+1) − β'_i^(k+1)| ≤ |β_i^(k) − β'_i^(k)|.

B.3 “HARMLESS TYPE” NEURONS

Section 4 shows that there are two cases of “harmless type” neurons: H1 and H2. For neurons in H1, the derivatives of the parameters are estimated in (15), (16) and (17) (the same as in G2,2). A similar analysis as in G2,2 shows that the inequalities α_i > 0 > α'_i and β_i > β'_i are preserved. Moreover, |α_i/α'_i| increases, so these neurons either stay in H1 or become “good type” once |α_i/α'_i| > 1. In particular, neurons in H1 do no harm to the performance of the model. For neurons in H2, 1[α_i t_0^y + α'_i t_1^y ≥ 0] = 0, so the derivatives are all equal to 0, and these neurons are never updated. Meanwhile, they do not affect the performance of the model, since ϕ(α_i t_0^y + α'_i t_1^y) = 0 and ∆_i = 0.

C LEARNING GUARANTEE

In this section, we prove Theorems 6.1 and 6.2 and Lemma 6.3.

Proof of Theorem 6.1. We argue by contradiction.
Suppose

P[∆ < 0 | ℓ(x) = 0] ≥ 4ϵ.   (27)

Since Z = σ(−∆) ≥ 1/2 when ℓ(x) = 0 and ∆ < 0, we then have

E[Z | ℓ(x) = 0] = E[Z | ℓ(x) = 0, ∆ < 0] P[∆ < 0 | ℓ(x) = 0] + E[Z | ℓ(x) = 0, ∆ ≥ 0] P[∆ ≥ 0 | ℓ(x) = 0] ≥ (1/2) · 4ϵ = 2ϵ.   (28)

Furthermore, we claim that µ_0 < δ. Indeed, if µ_0 ≥ δ, then since P[|∆ − µ_0| ≤ δ | ℓ(x) = 0] ≥ 1 − ϵ by Corollary 4.2, and |∆ − µ_0| ≤ δ implies ∆ ≥ µ_0 − δ ≥ 0, we have

P[∆ ≥ 0 | ℓ(x) = 0] ≥ P[|∆ − µ_0| ≤ δ | ℓ(x) = 0] ≥ 1 − ϵ,

i.e., P[∆ < 0 | ℓ(x) = 0] ≤ ϵ, which contradicts (27).

Let c := µ_0 − µ_1; then µ_1 = µ_0 − c < δ − c. Again by Corollary 4.2, for ℓ(x) = 1, ∆ < µ_1 + δ with probability ≥ 1 − ϵ, so Z = σ(∆) < σ(µ_1 + δ) < σ(−c + 2δ). Then

E[Z | ℓ(x) = 1] = E[Z | ℓ(x) = 1, |∆ − µ_1| < δ] P[|∆ − µ_1| < δ | ℓ(x) = 1] + E[Z | ℓ(x) = 1, |∆ − µ_1| ≥ δ] P[|∆ − µ_1| ≥ δ | ℓ(x) = 1] < σ(−c + 2δ) · 1 + 1 · ϵ < σ(−c/2) + ϵ,

where the last step uses δ = o(c). Since σ(−c/2) < ϵ/2, we get E[Z | ℓ(x) = 1] < 3ϵ/2. Combined with (28),

|E[Z | ℓ(x) = 0] − E[Z | ℓ(x) = 1]| > ϵ/2.

On the other hand, |E[∂L/∂b_0]| < ϵ/4 implies

|E[Z | ℓ(x) = 0] − E[Z | ℓ(x) = 1]| < ϵ/2,

which is a contradiction. So P[∆ < 0 | ℓ(x) = 0] < 4ϵ. Similarly, P[∆ > 0 | ℓ(x) = 1] < 4ϵ.

Proof of Theorem 6.2. If the i-th neuron is of “good type”, Corollary A.4 gives a uniform lower bound on m_0^i − m_1^i in the “good type” regimes; we have min{(p−q)/2, Λ_3} = (p−q)/2. Next we estimate α_i − α'_i and β_i − β'_i. Let A_i^(k) := α_i^(k) − α'_i^(k) and B_i^(k) := β_i^(k) − β'_i^(k).
We have

A_i^(k) = α_i^(k) − α'_i^(k) = α_i^(k−1) − α'_i^(k−1) − η_k (∂L/∂α_i^(k−1) − ∂L/∂α'_i^(k−1)),
B_i^(k) = β_i^(k) − β'_i^(k) = β_i^(k−1) − β'_i^(k−1) − η_k (∂L/∂β_i^(k−1) − ∂L/∂β'_i^(k−1)).

By Theorem B.7, in G1 and G2,1, with probability ≥ 1 − O(1/n),

∂L/∂α_i − ∂L/∂α'_i ≤ −(β_i − β'_i) E[Z] (((p−q)/(p+q))² λ − O(log^(−1/2) n)) ≤ −(β_i − β'_i) E[Z] (λ/2)((p−q)/(p+q))²,
∂L/∂β_i − ∂L/∂β'_i ≤ −(α_i − α'_i) E[Z] (((p−q)/(p+q))² λ − O(log^(−1/2) n)) ≤ −(α_i − α'_i) E[Z] (λ/2)((p−q)/(p+q))²,

so

A_i^(k) = A_i^(k−1) − η_k (∂L/∂α_i^(k−1) − ∂L/∂α'_i^(k−1)) ≥ A_i^(k−1) + η_k E[Z^(k)] (λ/2)((p−q)/(p+q))² B_i^(k−1) = A_i^(k−1) + (λ/2)((p−q)/(p+q))² B_i^(k−1),
B_i^(k) = B_i^(k−1) − η_k (∂L/∂β_i^(k−1) − ∂L/∂β'_i^(k−1)) ≥ B_i^(k−1) + η_k E[Z^(k)] (λ/2)((p−q)/(p+q))² A_i^(k−1) = B_i^(k−1) + (λ/2)((p−q)/(p+q))² A_i^(k−1).

In matrix form:

(A_i^(k), B_i^(k))ᵀ ⪰ [ 1 , (λ/2)((p−q)/(p+q))² ; (λ/2)((p−q)/(p+q))² , 1 ] (A_i^(k−1), B_i^(k−1))ᵀ.   (29)

Similarly, in G2,2:

(A_i^(k), B_i^(k))ᵀ ⪰ [ 1 , (λ/4)((p−q)/(p+q))² ; (λ/8)((p−q)/(p+q))² , 1 ] (A_i^(k−1), B_i^(k−1))ᵀ,   (30)

and in G2,3:

(A_i^(k), B_i^(k))ᵀ ⪰ [ 1 , (λ/4)((p−q)/(p+q))² ; λ(Λ_3 − Λ_2)/(2(p−q)) ((p−q)/(p+q))² , 1 ] (A_i^(k−1), B_i^(k−1))ᵀ,   (31)

where Λ_2 = pq log^(−1/3) n/((1 + log^(−1/3) n)p + q) and Λ_3 = ((1 − log^(−1/3) n)p² − q²)/((1 − log^(−1/3) n)p + q). A uniform relation among (29), (30) and (31) is given by (30). By eigenvalue decomposition, we have

A_i^(k) B_i^(k) ≥ (1/4)(1 + (√2 λ/8)((p−q)/(p+q))²)^(2k) (2^(−1/8) A_i^(0) + 2^(1/8) B_i^(0))² ≥ A_i^(0) B_i^(0) (1 + (√2 λ/8)((p−q)/(p+q))²)^(2k).

Therefore we have a uniform lower bound on m_0^i − m_1^i at the k-th epoch in the “good type” regime:

m_0^i − m_1^i ≥ A_i^(0) B_i^(0) (λ/2)((p−q)/(p+q))² (1 + (√2 λ/8)((p−q)/(p+q))²)^(2k).

Next we consider the “bad type” regime. By Corollary A.4, we have lower bounds on m_0^i − m_1^i in B1, B2 and B3 respectively.
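The eigenvalue-decomposition step for the good-type recursion can be checked numerically: for the entrywise recursion (A, B) ← (A + s₁B, B + s₂A) with positive A, B, each step multiplies the product AB by at least (1 + √(s₁s₂))², since s₂A² + s₁B² ≥ 2√(s₁s₂)AB, and 1 + √(s₁s₂) is the top eigenvalue of the recursion matrix. The entries s₁, s₂ below are illustrative stand-ins for the matrix in (30) (note s₁ = 2s₂, matching the λ/4 and λ/8 entries):

```python
import math

s1, s2 = 0.075, 0.0375     # illustrative off-diagonal entries, s1 = 2 * s2 as in (30)
A, B = 0.3, 0.7            # positive initial gaps A_i^(0), B_i^(0)
A0B0 = A * B
k = 40
for _ in range(k):
    A, B = A + s1 * B, B + s2 * A
growth = (1 + math.sqrt(s1 * s2)) ** (2 * k)   # top eigenvalue, squared per step
assert A * B >= A0B0 * growth                  # geometric lower bound on the product
```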
By Theorem B.9, in B1 and B2, |α_i − α'_i| and |β_i − β'_i| shrink. Moreover, since Λ_1 > Λ_3,⁵ we have a uniform lower bound on m_0^i − m_1^i in B1 and B2:

m_0^i − m_1^i ≥ −|A_i^(0) B_i^(0)| λ(p−q)Λ_1/(p+q)² ≥ −|A_i^(0) B_i^(0)| λp(p−q)/(p+q)²,

since Λ_1 = ((1 + log^(−1/3) n)p² − q²)/((1 + log^(−1/3) n)p + q) ≤ (2p² − q²)/(2p + q) ≤ p.

Next we show that |α_i − α'_i| and |β_i − β'_i| can only increase at a limited rate in B3. From item 4 of Theorem B.7, we have

∂L/∂α_i − ∂L/∂α'_i ≥ −|β_i − β'_i| O(ϵ̃),   ∂L/∂β_i − ∂L/∂β'_i = 2 ∂L/∂β_i ≤ O(ϵ̃).

Therefore (note that β_i < β'_i),

A_i^(k) ≤ A_i^(k−1) + η_k |B_i^(k−1)| O(ϵ̃),   |B_i^(k)| ≤ |B_i^(k−1)| + η_k O(ϵ̃).

Since E[Z^(k)] ≥ Ω(ϵ),⁶ ϵ̃ = o(ϵ) and ϵ² = O(1/n), we have η_k O(ϵ̃) ≤ η_k E[Z^(k)] (1/n) = 1/n. Suppose A_i^(k) ≥ Ω(1); otherwise, A_i^(k) and |B_i^(k)| increase at an even smaller rate. So we have

(A_i^(k), |B_i^(k)|)ᵀ ⪯ [ 1 , 1/n ; 1/n , 1 ] (A_i^(k−1), |B_i^(k−1)|)ᵀ.

By eigenvalue decomposition, we have

|A_i^(k) B_i^(k)| ≤ (1/2)(1 + 1/n)^(2k) (|A_i^(0)| + |B_i^(0)|)² ≤ k((A_i^(0))² + (B_i^(0))²)   (as n → ∞).

We obtain the result for “harmless type” neurons directly from Corollary 4.3.

Proof of Lemma 6.3. Since all parameters are initialized as independent standard normal random variables, we have

E[(α − α′)²] = E[(β − β′)²] = 2,   Var[(α − α′)²] = Var[(β − β′)²] = 8.

By Chebyshev’s inequality,

P[Σ_(i=1)^h (α_i − α'_i)² + (β_i − β'_i)² ≤ 5h] ≥ 1 − O(1/h).

For neurons initialized as “good type”, we have

E[α − α′ | α > α′, α + α′ > 0] = 2/√π,   Var[α − α′ | α > α′, α + α′ > 0] = 2 − 1/(4π),
E[β − β′ | β > β′] = 1/√π,   Var[β − β′ | β > β′] = 2 − 1/π.

Let ρ denote the probability that a neuron is initialized as “good type”. By G1, G2 and symmetry, ρ = 2 P[α > α′, α + α′ > 0, β > β′].

⁵(xp² − q²)/(xp + q) is monotonically increasing in x.
⁶Otherwise, the model already achieves high accuracy; see the proof of Theorem 2.2.
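The probability ρ and the conditional mean used here can be estimated by Monte Carlo; the value 2/√π for E[α − α′ | α > α′, α + α′ > 0] follows from writing (α − α′)/√2 and (α + α′)/√2 as independent standard normals:

```python
import math
import random

random.seed(1)
N = 200000
good, cond_sum, cond_n = 0, 0.0, 0
for _ in range(N):
    a, ap = random.gauss(0, 1), random.gauss(0, 1)
    b, bp = random.gauss(0, 1), random.gauss(0, 1)
    if a > ap and a + ap > 0:
        cond_sum += a - ap
        cond_n += 1
        if b > bp:
            good += 1
rho = 2 * good / N                       # rho = 2 P[a > a', a + a' > 0, b > b']
assert abs(rho - 0.25) < 0.01            # rho = 1/4
assert abs(cond_sum / cond_n - 2 / math.sqrt(math.pi)) < 0.03
```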
Since P[α > α′, α + α′ > 0] = 1/4 and P[β > β′] = 1/2, we have ρ = 1/4. By the Chernoff bound, P[h_g ≥ (ρ/2)h] ≥ 1 − exp(−(ρ²/4)h), so P[h_g ≥ h/8] ≥ 1 − exp(−h/64). Also, by Chebyshev’s inequality,

P[ Σ_(i: the i-th neuron initialized as “good type”) |α_i − α'_i||β_i − β'_i| ≥ h_g(1/(2π) − k) | h_g ≥ h/8 ] ≥ 1 − (4 − 1/(4π²))/(h_g k²).

Setting k = 1/(10π),

P[ Σ_(i: the i-th neuron initialized as “good type”) |α_i − α'_i||β_i − β'_i| ≥ h/80 | h_g ≥ h/8 ] ≥ 1 − O(1/h).

So we have

P[ Σ_(i: good type at initialization) |α_i − α'_i||β_i − β'_i| ≥ h/80 ] ≥ P[ Σ |α_i − α'_i||β_i − β'_i| ≥ h/80 | h_g ≥ h/8 ] P[h_g ≥ h/8] ≥ (1 − O(1/h))(1 − exp(−h/64)) ≥ 1 − O(1/h).

D EXPERIMENTS ON THE DYNAMICS OF HIDDEN NEURONS

This experiment verifies our arguments in Sections B.1, B.2 and B.3 and Theorem 6.2 about the dynamics of hidden neurons. We set h = 5, λ = 0.3 and train the model on graphs sampled from the SBM with n = 1000, a = 1.0, b = 0.7. The plot of the accuracy and its distribution can be seen in Section 7. Here we plot the dynamics of all 5 hidden neurons in Figure 4, with each row corresponding to one hidden neuron. In each plot, the x-axis represents the epoch and the y-axis the value of the neuron's parameters. The first column depicts α_i and α'_i, the second column |α_i/α'_i|, the third column |α_i − α'_i|, the fourth column β_i and β'_i, and the last column |β_i − β'_i|. As shown in the figure, the first, second and fourth neurons are of “good” type, satisfying (G2). Throughout training these neurons are preserved as “good” type: they are order-aligned, |α_i/α'_i| is lower bounded by 1, and both |α_i − α'_i| and |β_i − β'_i| keep increasing. All of this verifies our arguments in B.1. The third neuron is “harmless”, satisfying (H2). As shown in B.3, this neuron is not updated and makes no contribution to the output. The fifth neuron is of “bad” type, satisfying (B2).
Although |α_i − α'_i| and |β_i − β'_i| increase, comparison with the first, second and fourth rows (“good” neurons) shows that they increase at a much smaller rate. This verifies our result in Theorem 6.2.

E TABLE OF NOTATIONS

We list the notations used in this paper for the reader's convenience.

Notation — Definition
n — number of vertices in a graph
p — probability of intra-community connection
q — probability of cross-community connection
a — parameter for p, with p = a log³n / n
b — parameter for q, with q = b log³n / n
λ — probability of revealing the label of a vertex
ℓ(x) — label of vertex x
A — adjacency matrix of a graph
Â — normalized adjacency matrix with self-loops, Â = (2/(n(p+q)))(A + I)
X — input feature matrix of a graph
W^(0) — trainable weights in the first layer of the GCN
W^(1) — trainable weights in the second layer of the GCN
B — bias matrix of the GCN, with each row of B equal to [b_0, b_1]
b_0 — bias in the first component
b_1 — bias in the second component
h — number of hidden features
f_0 — logit in the first component, without bias
f_1 — logit in the second component, without bias
g_0 — logit in the first component, g_0 = f_0 + b_0
g_1 — logit in the second component, g_1 = f_1 + b_1
∆ — difference between logits, ∆ = g_0 − g_1 = f_0 − f_1 + b_0 − b_1
LEARNING GUARANTEES FOR GRAPH CONVOLUTIONAL NETWORKS ON THE STOCHASTIC BLOCK MODEL

1 INTRODUCTION

There is presently a large gap between what can be accomplished in practice using deep learning, and what can be satisfactorily explained and predicted by the theory of deep learning. Nevertheless, the past several years have seen substantial developments in the theory of deep learning (Ge et al., 2017; Brutzkus & Globerson, 2017; Zhang et al., 2019a; Goel et al., 2020; Chen et al., 2020a).

One factor contributing to the gap between the theory and practice of traditional NNs is that real-world data sets tend to have complex structure that is difficult to capture with formal definitions. For example, popular image classification models are capable of memorizing arbitrary data (Zhang et al., 2016), and yet they exhibit astonishing generalization performance on accurately-labeled natural images. Hence, any rigorous proof of the observed generalization performance of deep learning models on image classification tasks will necessarily require assumptions about the data that are sharp enough to separate random inputs from natural images. Because of the difficulty of giving an adequate characterization of real-world data, much of the recent progress in deep learning theory has instead focused on proving results using very simple (e.g., Gaussian) input distributions or in distribution-free settings (Ge et al., 2017; Brutzkus & Globerson, 2017; Zhang et al., 2019a; Vempala & Wilmes, 2019).

Compared to traditional feed-forward (dense, convolutional, etc.) NNs, the theory of graph neural networks (GNNs) is still in its infancy. On the other hand, it appears substantially easier to give plausible descriptions of the combinatorial structure of real-world graph data sets than, e.g., to characterize the distribution of natural images (Drobyshevskiy & Turdakov, 2019).
We therefore believe that GNNs offer a natural setting for developing provable guarantees that are able to capture the power of deep learning on real-world datasets. In this paper, we contribute to that goal by giving the first rigorous guarantees of efficient semi-supervised learning of stochastic block models via a GNN.

1.1 GRAPH NEURAL NETWORKS

Many natural datasets for diverse machine learning problems have a graph structure, including social networks, molecular structures, and transit networks. In order to efficiently exploit such combinatorial structure, a variety of GNN models have been proposed, tuned for different kinds of tasks. A number of taxonomies of GNN models have been proposed (Zhou et al., 2018; Wu et al., 2021); one of the most essential differences between GNN models is whether they are meant to label the graph as a whole, or to label individual components of the graph, particularly vertices.

From a theoretical perspective, the best understood tasks for GNNs concern labeling the graph as a whole, for example the task of classifying a graph by its isomorphism type (Sato, 2020). In particular, it has been established that many GNN models are of comparable power to various versions of the Weisfeiler-Leman hierarchy¹ (Xu et al., 2018; Morris et al., 2019). Some progress has also been made on the theory of GNNs for vertex-labeling tasks. Recent works by Sato et al. describe the representational power of certain GNN models for tasks such as computing minimum vertex covers (Sato et al., 2019). Garg et al. also give bounds on the representational power of GNN models, as well as using Rademacher bounds to estimate the generalization ability of GNNs (Garg et al., 2020).

Our results concern the task of semi-supervised community detection. In this problem, each vertex belongs to one community, and some subset of the vertices are labeled according to their community membership.
The task is to classify the community membership of the remaining vertices. This task has been one of the most intensively studied problems in the GNN literature, but there have not previously been any provable guarantees on the performance of proposed models.

We study (spatial-based) graph convolutional models similar to the GCN model proposed in Kipf & Welling (2017). A single layer of such a model computes weights at each node by aggregating the weights at neighboring nodes and applying an activation function with learned parameters, e.g., a linear map followed by a ReLU. Many variations on this theme, including various sophisticated training regimes, have been proposed (Chen et al., 2017; Gao et al., 2018; Li et al., 2018; Zhang et al., 2019b; Chen et al., 2018), but no provable guarantees have been available for the performance of such models on natural data distributions until the present work.

2 MAIN RESULTS

One motivation for GNNs as a target for progress in deep learning theory is that there are well-studied graph distributions that plausibly capture some of the structure of real-world data (Drobyshevskiy & Turdakov, 2019). For example, even fairly simple preferential attachment models plausibly capture some of the essential structure of the web (Kumar et al., 2000). Other graph models naturally capture community structure, the simplest of which is the Stochastic Block Model (SBM) (Holland et al., 1983). A graph is sampled from an SBM by first partitioning the vertices into communities (with fixed or random sizes). Two vertices are connected with probability p if they belong to the same community and probability q if they belong to different communities. In this paper, we consider the case of an SBM with two equal-sized communities, whose vertices have labels 0 and 1 respectively. We denote the label of vertex x by ℓ(x) ∈ {0, 1}.
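A minimal sampler for this two-community SBM, together with a check of the exact-recovery threshold (√p − √q)√(n/log n) > √2 of Abbe et al. quoted below, can be sketched as follows. The values of n, a and b here are illustrative and chosen so that the threshold already holds at this small n; they are not the paper's experimental settings:

```python
import math
import random

random.seed(0)

def sample_sbm(n, p, q):
    """Two equal communities: vertices 0..n/2-1 have label 0, the rest label 1.
    Each intra-community pair is joined with probability p, cross pairs with q."""
    label = [0] * (n // 2) + [1] * (n - n // 2)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < (p if label[i] == label[j] else q):
                edges.add((i, j))
    return label, edges

n, a, b = 400, 1.0, 0.2                     # illustrative; p, q as in Assumption 2.1
p = a * math.log(n) ** 3 / n
q = b * math.log(n) ** 3 / n
threshold = (math.sqrt(p) - math.sqrt(q)) * math.sqrt(n / math.log(n))
assert threshold > math.sqrt(2)             # exact recovery is information-theoretically possible
label, edges = sample_sbm(n, p, q)
assert len(edges) > 0
```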
The graphs are parameterized as SBM(n, p, q), where n is the number of vertices, p is the probability of an intra-community connection, and q is the probability of a cross-community connection. We allow n to vary (but require it to be sufficiently large), while p and q are of the form p = a log³n/n and q = b log³n/n for fixed constants a > b. In the semi-supervised setting, the community labels of some portion of the vertices are revealed. We assume the label of each vertex is revealed independently with probability λ. The input-layer feature at a vertex x is (0, 0) if its label is not revealed, (1, 0) if its label is revealed to be 0, and (0, 1) if its label is revealed to be 1.

Assumption 2.1 (Sparse Stochastic Block Model). The probabilities of intra- and cross-community connections are p = a log³n/n and q = b log³n/n, where a > b are constants.

We study the problem of recovering the communities from such graphs using GNN models. Of course, recovering the communities of an SBM graph has been well studied, and its computational complexity is fully understood in most cases (Abbe & Sandon, 2015; Kawamoto et al., 2019). SBM models are therefore a natural test case for understanding the power of GNN models for learning community structure, and experimental studies have been done in this setting (Chen et al., 2020b; Yadav et al., 2019). Abbe et al. (2014) show a sharp threshold for the task of exact community recovery: (√p − √q)√(n/log n) > √2. This threshold clearly holds in our case (at sufficiently large values of n), since p = a log³n/n, q = b log³n/n and a > b. The contribution here is not to learn the community models.

¹The Weisfeiler-Leman hierarchy is a family of polynomial-time iterative algorithms which provides a necessary but insufficient condition for graph isomorphism.
Rather, it is to show that (multi-layer) GCNs solve this classification problem, which is very much not trivial (the problem is non-convex, and the training loss curve is empirically non-monotonic). Our GNN models are trained on a graph or several graphs generated by the SBM(n, p, q) model, and we seek to understand their accuracy on arbitrary SBM(n, p, q) graphs, not necessarily in the training set, with the same parameters $a, b$ determining $p$ and $q$ (with $n$ allowed to vary). In particular, we study spatial-based graph convolutional models along the lines of the Graph Convolutional Networks (GCN) introduced in Kipf & Welling (2017). Each layer of the model computes a feature vector at every vertex of an input graph based on features of nearby vertices in the previous layer. A typical layer-wise update rule is of the form $X^{(k+1)} = \phi(\hat{A} X^{(k)} W^{(k)})$, where

• $\hat{A}$ is a suitably normalized adjacency matrix of shape $n \times n$, where $n$ is the number of vertices. Usually $\hat{A}$ includes self-loops.
• $X^{(k)}$ gives the feature vector in the $k$-th layer at each vertex as a matrix of shape $n \times m_k$, where $m_k$ is the number of features in layer $k$.
• $\phi$ is an activation function, such as the ReLU.
• $W^{(k)}$ are the trainable weights in the $k$-th layer, a matrix of shape $m_k \times m_{k+1}$.

In our version of this model, we define $\hat{A} = \frac{2}{n(p+q)} \tilde{A}$, where $\tilde{A} = A + I$, $A$ is the adjacency matrix of the given graph, and $I$ is the identity matrix. For the given SBM(n, p, q), a randomly selected vertex has $\frac{n}{2}(p+q)$ neighbors in expectation, so $\hat{A}$ is obtained by normalizing each row of $A + I$ by the average size of a neighborhood. Since very deep GCN models seem to provide little empirical benefit (Li et al., 2018), we use a single hidden layer with a softmax output layer. Furthermore, we introduce a bias term $B$ at the second layer.
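The row normalization just described can be sketched as follows (function name ours):

```python
import numpy as np

def normalized_adjacency(A, p, q):
    """Compute A_hat = (2 / (n (p + q))) * (A + I): the adjacency with
    self-loops, divided by the expected neighborhood size n(p+q)/2 of an
    SBM(n, p, q) graph."""
    n = A.shape[0]
    return (A + np.eye(n)) * (2.0 / (n * (p + q)))
```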
So the model has the following form:
$$f(X, A) = \mathrm{softmax}\big(\hat{A}\,\phi(\hat{A} X W^{(0)})\, W^{(1)} + B\big) = \mathrm{softmax}\Big(\tfrac{4}{n^2(p+q)^2}\, \tilde{A}\,\phi(\tilde{A} X W^{(0)})\, W^{(1)} + B\Big), \quad (1)$$
where $X$ is the input feature matrix of the graph and $W^{(0)}$, $W^{(1)}$ and $B$ are trainable parameters. (The two forms agree because the ReLU $\phi$ is positively homogeneous.) Let $h$ denote the number of hidden features, which equals the number of columns of $W^{(0)}$ and the number of rows of $W^{(1)}$. We define the accuracy of the model as the probability of correctly predicting the label of a single vertex in a randomly generated SBM(n, p, q) graph in which the label of each vertex is revealed with probability $\lambda$. We can now state our main result.

Theorem 2.2. For any $\epsilon > 0$ and $\delta > 0$, given a GCN model with $\frac{1}{\delta} \le h \le n$ hidden features and with parameters initialized independently from $N(0, 1)$, if training graphs are sampled from SBM(n, p, q) with $n \ge \max\big(\Omega(\frac{1}{\epsilon})^2, \Omega(\frac{1}{\delta})\big)$ and the label of each vertex revealed with probability $\lambda$, and if the model is trained by coordinate descent for $k = O(\log\log\frac{1}{\epsilon})$ epochs, then with probability $\ge 1 - \delta$ the model achieves accuracy $\ge 1 - 4\epsilon$.

Remark. We treat $\lambda$ as a constant, so it is omitted from the big-$O$ and $\Omega$ notation in the sampling and training complexity. We emphasize that the novelty of this theorem is not in learning two-class SBM models as such; this is a long-solved problem. Instead, this is the first proof of efficient learning for a GCN on semi-supervised community detection tasks using a natural family of random graph models.

3 PRELIMINARIES

In this section we first introduce notation (a table of notations is also given in the appendix for the reader's convenience) and some interpretations; we then describe the structure of the paper. Given a vertex $y$, denote the row of $\tilde{A}X$ corresponding to $y$ as $(t^y_0, t^y_1)$, so $t^y_0$ and $t^y_1$ give the numbers of neighbors of $y$ (including perhaps $y$ itself) with revealed labels in class 0 and class 1 respectively.
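A minimal forward pass for the model of equation (1) might look as follows (a sketch; `gcn_forward` is our own name, and the paper's training procedure, coordinate descent, is not shown here):

```python
import numpy as np

def gcn_forward(A, X, W0, W1, b, p, q):
    """One-hidden-layer GCN of equation (1):
    softmax(A_hat ReLU(A_hat X W0) W1 + B), with A_hat = 2(A + I)/(n(p+q))
    and B the bias row (b0, b1) repeated over all n rows.
    Shapes: X is (n, 2), W0 is (2, h), W1 is (h, 2), b is (2,)."""
    n = A.shape[0]
    A_hat = (A + np.eye(n)) * (2.0 / (n * (p + q)))
    H = np.maximum(A_hat @ X @ W0, 0.0)      # ReLU hidden features
    logits = A_hat @ H @ W1 + b              # bias broadcasts to every row
    e = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)
```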
Let
$$W^{(0)} = \begin{pmatrix} \alpha_1 & \alpha_2 & \cdots & \alpha_h \\ \alpha'_1 & \alpha'_2 & \cdots & \alpha'_h \end{pmatrix}, \qquad W^{(1)} = \begin{pmatrix} \beta_1 & \beta'_1 \\ \beta_2 & \beta'_2 \\ \vdots & \vdots \\ \beta_h & \beta'_h \end{pmatrix}, \qquad B = \begin{pmatrix} b_0 & b_1 \\ b_0 & b_1 \\ \vdots & \vdots \\ b_0 & b_1 \end{pmatrix}.$$
Then $\alpha_i t^y_0 + \alpha'_i t^y_1$, $1 \le i \le h$, gives the $h$ features of vertex $y$ in the hidden layer. The inner product of the $y$-th row of $\phi(\tilde{A}XW^{(0)})$ with the columns of $W^{(1)}$ gives weighted sums of the features of $y$: $\sum_{i=1}^h \beta_i \phi(\alpha_i t^y_0 + \alpha'_i t^y_1)$ and $\sum_{i=1}^h \beta'_i \phi(\alpha_i t^y_0 + \alpha'_i t^y_1)$, where $\phi$ is the ReLU function. Given a vertex $x$, the row of $\hat{A}\phi(\hat{A}XW^{(0)})W^{(1)}$ corresponding to $x$ is denoted by $(f_0(x), f_1(x))$ and is of the form
$$\Big( \tfrac{4}{n^2(p+q)^2} \sum_{y \in G} \mathbb{1}[y \sim x] \sum_{i=1}^h \beta_i \phi(\alpha_i t^y_0 + \alpha'_i t^y_1),\;\; \tfrac{4}{n^2(p+q)^2} \sum_{y \in G} \mathbb{1}[y \sim x] \sum_{i=1}^h \beta'_i \phi(\alpha_i t^y_0 + \alpha'_i t^y_1) \Big), \quad (2)$$
where $\mathbb{1}[y \sim x]$ equals 1 if $y$ and $x$ are connected and 0 otherwise. Denote
$$f^i_0(x) := \tfrac{4\beta_i}{n^2(p+q)^2} \sum_{y \in G} \mathbb{1}[y \sim x]\, \phi(\alpha_i t^y_0 + \alpha'_i t^y_1), \qquad f^i_1(x) := \tfrac{4\beta'_i}{n^2(p+q)^2} \sum_{y \in G} \mathbb{1}[y \sim x]\, \phi(\alpha_i t^y_0 + \alpha'_i t^y_1),$$
so $f_0(x) = \sum_{i=1}^h f^i_0(x)$ and $f_1(x) = \sum_{i=1}^h f^i_1(x)$. Denote $g_j(x) := f_j(x) + b_j$, $j = 0, 1$, where $(g_0(x), g_1(x))$ is the logit of the model corresponding to $x$, and denote $\Delta(x) := g_0(x) - g_1(x)$. In order to make correct predictions, we need $\Delta(x) > 0$ when $\ell(x) = 0$ and $\Delta(x) < 0$ when $\ell(x) = 1$. The bias term $B$ is useful in our analysis because its derivative controls how imbalanced the current loss is between the classes. In training we consider the cross-entropy loss, denoted $L$, and have
$$E\Big[\frac{\partial L}{\partial b_0}\Big] = -E\Big[\frac{\partial L}{\partial b_1}\Big] = -\frac{1}{2}\big(E[Z \mid \ell(x) = 0] - E[Z \mid \ell(x) = 1]\big), \qquad \text{where } Z = \frac{\exp(g_{1-\ell(x)}(x))}{\exp(g_0(x)) + \exp(g_1(x))}.$$
$Z$ can be regarded as a measure of wrong prediction: the numerator is the exponential of the output corresponding to the wrong label and the denominator is a normalizer. It is easy to see that $Z > \frac{1}{2}$ if the prediction is wrong and $Z < \frac{1}{2}$ if the prediction is correct.
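The wrong-prediction measure $Z$ can be computed directly from the two logits (a sketch; the function name is ours):

```python
import math

def wrong_measure(g0, g1, label):
    """Z = exp(g_{1-l}(x)) / (exp(g0(x)) + exp(g1(x))): the softmax mass
    assigned to the wrong class.  Z > 1/2 iff the prediction is wrong."""
    wrong_logit = g1 if label == 0 else g0
    return math.exp(wrong_logit) / (math.exp(g0) + math.exp(g1))
```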
When $\big|E[\frac{\partial L}{\partial b_0}]\big| \approx 0$, the model's loss is balanced in the sense that $\big|E[Z \mid \ell(x) = 0] - E[Z \mid \ell(x) = 1]\big| \approx 0$. In order to have balanced performance in every epoch, we train the model by coordinate descent instead of conventional gradient descent. Specifically, in each epoch we first update $b_0$ and $b_1$ until $\big|E[\frac{\partial L}{\partial b_0}]\big|$ is smaller than some threshold; then we update the other parameters. The performance of the model depends on the concentration and separation of $\Delta(x)$ for $\ell(x) = 0$ and $\ell(x) = 1$ respectively. In Section 4 we show that $\Delta(x)$ concentrates at one of two values, denoted $\mu_0$ and $\mu_1$, depending only on whether the label $\ell(x)$ is 0 or 1. The proof depends on the different parameter regimes of the hidden neurons. In Section 5, we analyze the dynamics of the hidden neurons throughout training to show that the concentration and separation improve at a controlled rate. Based on this information, in Section 6 we prove the main theorem. Section 7 presents experimental results verifying our theory. The paper ends with future directions in Section 8.

4 CONCENTRATION AND SEPARATION OF OUTPUT

The difference of the logits is
$$\Delta(x) = g_0(x) - g_1(x) = f_0(x) - f_1(x) + b_0 - b_1 = \sum_{i=1}^h \Delta_i(x) + b_0 - b_1,$$
where
$$\Delta_i(x) = f^i_0(x) - f^i_1(x) = \frac{4(\beta_i - \beta'_i)}{n^2(p+q)^2} \sum_{y \in G} \mathbb{1}[y \sim x]\, \phi(\alpha_i t^y_0 + \alpha'_i t^y_1).$$
For brevity, we write $\Delta(x)$ as $\Delta$ and $\Delta_i(x)$ as $\Delta_i$. In order to estimate $\Delta$, we need to estimate each $\Delta_i$, $1 \le i \le h$. Our fine-grained analysis of the dynamics of coordinate descent on GCNs relies on a classification of the neurons into three families based on the sign and scale of their parameters: "good type", "bad type" and "harmless type". The names indicate whether the neuron makes a positive contribution to the value of $\mu_0 - \mu_1$, where $\mu_0$ and $\mu_1$ are high-probability estimates of $\Delta(x)$ for $\ell(x) = 0$ and $\ell(x) = 1$ respectively.
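The coordinate-descent scheme described above (drive the bias gradient below a threshold first, then step the remaining parameters once) can be sketched generically; `bias_grad` and `param_grad` below are hypothetical gradient oracles supplied by the caller, standing in for the expected derivatives of the cross-entropy loss:

```python
def coordinate_descent_epoch(params, biases, bias_grad, param_grad,
                             lr, tol, max_bias_steps=1000):
    """One epoch of the scheme: first update (b0, b1) until |dL/db0| < tol,
    then take a single step on the other parameters.  We use the symmetry
    dL/db1 = -dL/db0 (as in Theorem B.1), so one scalar oracle suffices."""
    for _ in range(max_bias_steps):
        g = bias_grad(params, biases)
        if abs(g) < tol:
            break
        biases = (biases[0] - lr * g, biases[1] + lr * g)
    step = param_grad(params, biases)
    params = [w - lr * gw for w, gw in zip(params, step)]
    return params, biases
```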
We show that a "good type" neuron makes a positive contribution; the contribution of a "bad type" neuron is negative but lower bounded; and a "harmless type" neuron's contribution is non-negative (see Corollary A.4 and the remark following it). We describe the parameter regime of each type in the following subsections, and we analyze the dynamics of these types throughout coordinate descent in the next section. First we give some definitions.

Definition 1. For $1 \le i \le h$, we call $(\alpha_i, \alpha'_i, \beta_i, \beta'_i)$ the $i$-th neuron of the model, where $(\alpha_i, \alpha'_i)^\top$ is the $i$-th column of $W^{(0)}$ and $(\beta_i, \beta'_i)$ is the $i$-th row of $W^{(1)}$.

Definition 2. We say the $i$-th neuron is order-aligned if $(\alpha_i - \alpha'_i)(\beta_i - \beta'_i) > 0$; otherwise we say it is order-misaligned.

4.1 CLASSIFICATION OF NEURON PARAMETER REGIMES

We say the $i$-th neuron is of "good type" if it satisfies either (G1) or (G2) below. (There is also the symmetric case obtained by switching $\alpha_i$ with $\alpha'_i$ and $\beta_i$ with $\beta'_i$; for brevity, we only consider the cases with $\alpha_i > \alpha'_i$. This applies to the "bad" and "harmless" types below as well.) Neurons of this type are order-aligned, and either both $\alpha_i$ and $\alpha'_i$ are positive or the ratio between $\alpha_i$ and $\alpha'_i$ is large enough:
$$\alpha_i > \alpha'_i > 0 \text{ and } \beta_i > \beta'_i \quad \text{(G1)}$$
$$\alpha_i > 0 > \alpha'_i,\; \Big|\frac{\alpha_i}{\alpha'_i}\Big| > 1 \text{ and } \beta_i > \beta'_i \quad \text{(G2)}$$
We say the $i$-th neuron is of "bad type" if it satisfies (B1), (B2) or (B3). Neurons of this type are order-misaligned, and $\alpha_i, \alpha'_i$ are either both positive or of opposite signs:
$$\alpha_i > \alpha'_i > 0 \text{ and } \beta_i < \beta'_i \quad \text{(B1)}$$
$$\alpha_i > 0 > \alpha'_i,\; \Big|\frac{\alpha_i}{\alpha'_i}\Big| > \frac{q}{p}(1 + \log^{-1/3} n) \text{ and } \beta_i < \beta'_i \quad \text{(B2)}$$
$$\alpha_i > 0 > \alpha'_i,\; \Big|\frac{\alpha_i}{\alpha'_i}\Big| \le \frac{q}{p}(1 + \log^{-1/3} n) \quad \text{(B3)}$$
We say the $i$-th neuron is of "harmless type" if it satisfies either (H1) or (H2):
$$\alpha_i > 0 > \alpha'_i,\; \Big|\frac{\alpha_i}{\alpha'_i}\Big| \in \Big(\frac{q}{p}(1 + \log^{-1/3} n),\, 1\Big] \text{ and } \beta_i > \beta'_i \quad \text{(H1)}$$
$$\alpha_i \le 0 \text{ and } \alpha'_i \le 0 \quad \text{(H2)}$$

4.2 CONCENTRATION AND SEPARATION

Theorem 4.1.
If the $i$-th neuron is of "good type" satisfying (G1) or of "bad type" satisfying (B1), then for $\ell(x) = 0$:
$$P\Big[\Big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)}{(p+q)^2}\big[(p^2 + q^2)\alpha_i + 2pq\,\alpha'_i\big]\Big| \le (\alpha_i - \alpha'_i)(\beta_i - \beta'_i)\, O(\log^{-1/2} n) \,\Big|\, \ell(x) = 0\Big] \ge 1 - O\Big(\frac{1}{n^2}\Big),$$
and for $\ell(x) = 1$:
$$P\Big[\Big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)}{(p+q)^2}\big[2pq\,\alpha_i + (p^2 + q^2)\alpha'_i\big]\Big| \le (\alpha_i - \alpha'_i)(\beta_i - \beta'_i)\, O(\log^{-1/2} n) \,\Big|\, \ell(x) = 1\Big] \ge 1 - O\Big(\frac{1}{n^2}\Big).$$
Similar concentration results hold for neurons satisfying (G2), (B2) and (B3), and for neurons of "harmless type."

We apply the method of bounded differences to show the concentration; the details are given in the appendix. Given the concentration of $\Delta_i$ for each type of neuron, we estimate the concentration of the output $\Delta = \sum_{i=1}^h \Delta_i + b_0 - b_1$. For the $i$-th neuron, we denote the high-probability estimate of $\Delta_i$ given in the statement of Theorem 4.1 by $m^i_0$ when $\ell(x) = 0$ and $m^i_1$ when $\ell(x) = 1$. By a union bound, we have the following corollary.

Corollary 4.2. Given a vertex $x \in G$ with unrevealed label, we have
$$P\big[|\Delta - \mu_j| \le \delta \,\big|\, \ell(x) = j\big] \ge 1 - O\Big(\frac{1}{n}\Big), \quad (3)$$
where
$$\mu_j = \Big(\sum_{i=1}^h m^i_j\Big) + b_0 - b_1, \; j = 0, 1, \qquad \delta = \sum_{i=1}^h |\alpha_i - \alpha'_i||\beta_i - \beta'_i|\, O(\log^{-1/2} n).$$
For any $\epsilon > 0$, we require the probability of concentration in (3) to be at least $1 - \tilde{\epsilon}$, where $\tilde{\epsilon} = o(\epsilon)$. If we choose $\tilde{\epsilon} = \epsilon^2$, then we set $1 - O(\frac{1}{n}) \ge 1 - \epsilon^2$, i.e., $n \ge \Omega(\frac{1}{\epsilon})^2$. Our following analysis is based on this condition.

From Theorem 4.1, we have the following result about the value of $m^i_0 - m^i_1$.

Corollary 4.3.
• If the $i$-th neuron is of "good type" and satisfies (G1), then $m^i_0 - m^i_1 = \lambda |\alpha_i - \alpha'_i||\beta_i - \beta'_i| \big(\frac{p-q}{p+q}\big)^2$.
• If the $i$-th neuron is of "bad type" and satisfies (B1), then $m^i_0 - m^i_1 = -\lambda |\alpha_i - \alpha'_i||\beta_i - \beta'_i| \big(\frac{p-q}{p+q}\big)^2$.
• If the $i$-th neuron is of "harmless type" and satisfies (H1), then $m^i_0 - m^i_1 = \lambda |\beta_i - \beta'_i||p\alpha_i + q\alpha'_i| \frac{p-q}{(p+q)^2}$.
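The regime classification of Section 4.1 can be expressed as a small classifier (a sketch with our own function name; only the representative case $\alpha_i > \alpha'_i$ is handled, as in the text):

```python
import math

def neuron_type(a, a2, b, b2, p, q, n):
    """Classify a neuron (alpha_i, alpha'_i, beta_i, beta'_i) into the
    regimes (G1)-(G2), (B1)-(B3), (H1)-(H2), assuming a > a2.  The
    symmetric case (swap primed/unprimed parameters) is not handled, and
    boundary cases such as a2 == 0 fall through to 'unclassified'."""
    tol = (q / p) * (1.0 + math.log(n) ** (-1.0 / 3.0))
    if a <= 0 and a2 <= 0:
        return "harmless"                       # (H2)
    if a > a2 > 0:
        return "good" if b > b2 else "bad"      # (G1) / (B1)
    if a > 0 > a2:
        r = abs(a / a2)
        if r <= tol:
            return "bad"                        # (B3)
        if r > 1:
            return "good" if b > b2 else "bad"  # (G2) / (B2)
        return "harmless" if b > b2 else "bad"  # (H1) / (B2)
    return "unclassified"
```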
Similar results for neurons satisfying (G2), (B2), (B3) and (H1) are stated in the appendix, along with the proofs.

Remark.
• As we can see from Corollary 4.3, the value of $m^i_0 - m^i_1$ is positive for "good type" neurons, non-negative for "harmless type" neurons, and may be negative (but is lower bounded) for "bad type" neurons. Since positive values of $m^i_0 - m^i_1$ decrease the loss of the model, this explains the names of the types.
• $m^i_0 - m^i_1$ is proportional to $|\alpha_i - \alpha'_i||\beta_i - \beta'_i|$. In the next section, we analyze the dynamics of the parameters $\alpha_i, \alpha'_i, \beta_i, \beta'_i$. Using our understanding of these dynamics, in Theorem 6.2 we present a refined result about the separation of the output which depends only on the initialization of the parameters.
• Let $c := \mu_0 - \mu_1 = \sum_{i=1}^h (m^i_0 - m^i_1)$. By the two corollaries above, we have $\delta = o(|c|)$. The balanced loss guaranteed by the bias term and the coordinate descent scheme ensures that $\mu_0 = \Omega(c)$ and $-\mu_1 = \Omega(c)$. It then follows that if the loss is sufficiently small, both $\mu_0$ and $\mu_1$ have the correct sign, i.e., $\mu_0 > 0 > \mu_1$. (Otherwise, due to the concentration of the output, the model makes wrong predictions and the loss is large.) So we will eventually have $\delta = o(\mu_0)$ and $\delta = o(|\mu_1|)$.

5 DYNAMICS OF PARAMETERS

In this section, we describe the dynamics of each type of neuron under coordinate descent, which can be visualized in the following figure, in which the arrows indicate movements between types that can happen with non-negligible probability. There are two noteworthy points in this figure. First, "good type" parameters are preserved under coordinate descent. Second, there are no arrows coming into "bad type" except from itself. These dynamics are proved by estimating the gradient of the loss function for each type of neuron.
Because of the non-linearity of the activation, we rely heavily on the concentration results proved above to obtain tight estimates; without them, even estimating the sign of the gradient seems difficult. The proofs and experiments concerning the dynamics of the hidden neurons are deferred to the appendix.

6 LEARNING GUARANTEE

In this section, we prove our main result, which states that with high probability a trained GCN can detect communities in the SBM with any desired accuracy. The proof is based on the following theorem, which shows that if $\mu_0$ and $\mu_1$ are separated enough, then the model achieves high accuracy.

Theorem 6.1. For any $\epsilon > 0$, provided that the difference between $\mu_0$ and $\mu_1$ is large enough that $\sigma\big(-\frac{\mu_0 - \mu_1}{2}\big) < \frac{\epsilon}{2}$, if $\big|E[\frac{\partial L}{\partial b_0}]\big| < \frac{\epsilon}{4}$, then
$$P[\Delta < 0 \mid \ell(x) = 0] < 4\epsilon, \qquad P[\Delta > 0 \mid \ell(x) = 1] < 4\epsilon,$$
where $\sigma(x) := \frac{1}{1 + \exp(-x)}$ is the sigmoid function.

Next we show that the model can achieve such separation between $\mu_0$ and $\mu_1$ through coordinate descent. In order to make constant-size updates of the parameters at every epoch, we set an adaptive learning rate $\eta_k = \frac{1}{E[Z^{(k)}]}$, where $Z^{(k)}$ is the value of $Z$ at the $k$-th epoch. We first refine Corollary 4.3 about the separation of the output for each type of neuron ($m^i_0 - m^i_1$) using the dynamics of the parameters.

Theorem 6.2 (separation of output). Let $m^i_0$ and $m^i_1$ be defined as in Section 4, and train the model for $k$ epochs by the defined coordinate descent with adaptive learning rate $\eta_k = \frac{1}{E[Z^{(k)}]}$. Then:
• if the $i$-th neuron is of "good type", then
$$m^i_0 - m^i_1 \ge A^{(0)}_i B^{(0)}_i\, \frac{\lambda}{2} \Big(\frac{p-q}{p+q}\Big)^2 \Big(1 + \frac{\sqrt{2\lambda}}{8}\Big(\frac{p-q}{p+q}\Big)^2\Big)^{2k};$$
• if the $i$-th neuron is of "bad type", then
$$m^i_0 - m^i_1 \ge -k\,\big((A^{(0)}_i)^2 + (B^{(0)}_i)^2\big)\, \frac{\lambda p (p-q)}{(p+q)^2};$$
• if the $i$-th neuron is of "harmless type", then $m^i_0 - m^i_1 \ge 0$;
where $A^{(0)}_i = \alpha^{(0)}_i - \alpha'^{(0)}_i$ and $B^{(0)}_i = \beta^{(0)}_i - \beta'^{(0)}_i$.
Next we present a result about initialization, which shows that with high probability there are enough "good type" neurons and the parameters have appropriate scale.

Lemma 6.3. Suppose all parameters in $W^{(0)}$ and $W^{(1)}$ are initialized independently following the standard normal distribution. Then the number $h_g$ of neurons initialized as "good type" satisfies $P[h_g \ge \frac{h}{8}] \ge 1 - \exp(-\frac{h}{64})$. Furthermore,
$$P\Big[\sum_{i=1}^h (\alpha_i - \alpha'_i)^2 + (\beta_i - \beta'_i)^2 \le 5h\Big] \ge 1 - O\Big(\frac{1}{h}\Big), \qquad P\Big[\sum_{i \text{ initialized as "good type"}} |\alpha_i - \alpha'_i||\beta_i - \beta'_i| \ge \frac{h}{80}\Big] \ge 1 - O\Big(\frac{1}{h}\Big).$$

Now we can prove the final result.

Proof of Theorem 2.2. First we show that if the loss $E[Z]$ is small enough, the model achieves the desired accuracy. Indeed, if $E[Z] < 2\epsilon$, then since
$$E[Z] = E[Z \mid \text{pred is wrong}]\, P[\text{pred is wrong}] + E[Z \mid \text{pred is correct}]\, P[\text{pred is correct}] \ge \frac{1}{2}\, P[\text{pred is wrong}],$$
we have $P[\text{pred is wrong}] \le 4\epsilon$, i.e., $P[\text{pred is correct}] > 1 - 4\epsilon$. Otherwise $E[Z] \ge 2\epsilon$, and since $E[Z] = \frac{1}{2}\big(E[Z \mid \ell(z) = 0] + E[Z \mid \ell(z) = 1]\big)$, we have $E[Z \mid \ell(z) = 0] + E[Z \mid \ell(z) = 1] \ge 4\epsilon$. On the other hand, $\big|E[\frac{\partial L}{\partial b_0}]\big| < \epsilon$ implies that $\big|E[Z \mid \ell(z) = 0] - E[Z \mid \ell(z) = 1]\big| < 2\epsilon$. By Theorem 6.2,
$$\mu_0 - \mu_1 = \sum_{i=1}^h (m^i_0 - m^i_1) = \sum_{i \in \text{"good"}} (m^i_0 - m^i_1) + \sum_{i \in \text{"bad"}} (m^i_0 - m^i_1) + \sum_{i \in \text{"harmless"}} (m^i_0 - m^i_1)$$
$$\ge \frac{\lambda}{2}\Big(\frac{p-q}{p+q}\Big)^2 \Big(1 + \frac{\sqrt{2\lambda}}{8}\Big(\frac{p-q}{p+q}\Big)^2\Big)^{2k} \sum_{i \in \text{"good"}} A^{(0)}_i B^{(0)}_i \;-\; k\, \frac{\lambda p (p-q)}{(p+q)^2} \sum_{i \in \text{"bad"}} \big((A^{(0)}_i)^2 + (B^{(0)}_i)^2\big).$$
By Lemma 6.3, with probability $\ge 1 - O(\frac{1}{h})$,
$$\sum_{i \in \text{"good"}} A^{(0)}_i B^{(0)}_i \ge \frac{h}{80}, \qquad \sum_{i \in \text{"bad"}} \big((A^{(0)}_i)^2 + (B^{(0)}_i)^2\big) \le 5h.$$
Since $h \ge \frac{1}{\delta}$, with probability $\ge 1 - \delta$,
$$\mu_0 - \mu_1 \ge h\Big(\frac{\lambda}{160}\Big(\frac{p-q}{p+q}\Big)^2 \Big(1 + \frac{\sqrt{2\lambda}}{8}\Big(\frac{p-q}{p+q}\Big)^2\Big)^{2k} - 5k\, \frac{\lambda p (p-q)}{(p+q)^2}\Big) \ge h\big(C_1 (1 + C_2)^{2k} - C_3 k\big), \quad (4)$$
where $C_1$, $C_2$ and $C_3$ are constants determined by $p$, $q$ and $\lambda$.
By Theorem 6.1, if $(4) \ge 2\log\frac{2}{\epsilon}$ (so that $\sigma(-\frac{\mu_0 - \mu_1}{2}) \le \frac{\epsilon}{2}$), then the model achieves accuracy $\ge 1 - 4\epsilon$. It suffices to have $C_1(1 + C_2)^{2k} - C_3 k \ge 2\log\frac{2}{\epsilon}$, i.e., $k = O(\log\log\frac{1}{\epsilon})$.

7 EXPERIMENTS

We present experiments verifying Theorem 2.2. In particular, our experiments demonstrate that accuracy increases with $n$, that the probability of obtaining a high-accuracy model increases with $h$, and that coordinate descent is able to recover high-accuracy models in the sparse regime of Assumption 2.1. Additional plots demonstrating the dynamics of the hidden neurons, with their ratios and differences, appear in the appendix.

Experiment 1. In this experiment, we plot an estimate of the accuracy versus epoch for varying $n$. The parameters $p, q$ of the SBM follow Assumption 2.1, where we choose $a = 1.0$ and $b = 0.7$. We set $h = 20$, $\lambda = 0.3$ and run 40 independent experiments for $n = 250$, $500$ and $1000$ respectively. In each experiment we train the model for 100 epochs. The training set consists of 40 randomly generated graphs from SBM(n, p, q). We validate the performance by the percentage of correct predictions on 200 random vertices, each from a randomly generated graph. The result is shown in Figure 2. The shaded region for each $n$ is obtained from the max, min and mean percentages over the 40 experiments. The result verifies Theorem 2.2, which shows that the accuracy of the model increases with $n$.

Experiment 2. In this experiment, we show the effect of the number of hidden neurons $h$. The parameters of the SBM are the same as in Experiment 1. We set $h = 2, 5, 20$. For each pair $(n, h)$ we run 40 independent experiments and show the distribution of validation accuracy in Figure 3. From the top row to the bottom, $n$ increases from 250 to 1000; from the left column to the right, $h$ increases from 2 to 20. In each plot, the x-axis represents the accuracy and the y-axis represents the count of experiments.
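Lemma 6.3's claim that a constant fraction of neurons is initialized in the "good type" can be sanity-checked by Monte Carlo (our own sketch; the ≈1/4 empirical fraction is our estimate, and the lemma only needs the weaker bound of 1/8):

```python
import random

def is_good(a, a2, b, b2):
    """'Good type' in either orientation: order-aligned with both
    first-layer weights positive (G1), or opposite signs with ratio > 1
    (G2); the second pass checks the symmetric (swapped) case."""
    for (x, x2, y, y2) in ((a, a2, b, b2), (a2, a, b2, b)):
        if y > y2 and ((x > x2 > 0) or (x > 0 > x2 and abs(x / x2) > 1)):
            return True
    return False

rng = random.Random(0)
trials = 20000
good = sum(is_good(rng.gauss(0, 1), rng.gauss(0, 1),
                   rng.gauss(0, 1), rng.gauss(0, 1))
           for _ in range(trials))
frac = good / trials  # empirically close to 1/4, well above the 1/8 of Lemma 6.3
```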
According to Theorem 2.2, the probability of achieving high accuracy is $1 - O(1/h)$ and the accuracy increases with $n$. We can see that in each row of Figure 3, as $h$ increases, the probability of achieving high accuracy is larger; in each column, as $n$ increases, the model achieves higher accuracy. The results verify the theory in the paper.

8 FUTURE DIRECTIONS

Graph neural networks offer a promising setting for progress on the more general theory of deep learning, because random graph models more plausibly capture the structure of real-world data than, e.g., the Gaussian inputs often used to prove deep learning guarantees for traditional feed-forward neural networks. This paper has initiated the project of proving training guarantees for semi-supervised learning using GCNs on SBM models, but much more work remains to be done. Arguably the sparsest SBM models (expected constant degree) are the most compelling from the perspective of modeling real-world communities, so it would be interesting to extend these results to that setting. Models with more than two blocks, or with overlapping communities (Petti & Vempala, 2018), would be even closer to real-world structure. We hope this initial step spurs further interest in provable guarantees for training neural networks using plausible models of real-world data as the input distribution.

REFERENCES

Emmanuel Abbe and Colin Sandon. Recovering communities in the general stochastic block model without knowing the parameters. arXiv preprint arXiv:1506.03729, 2015.
Emmanuel Abbe, Afonso S. Bandeira, and Georgina Hall. Exact recovery in the stochastic block model, 2014.
Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with gaussian inputs. In International Conference on Machine Learning, pp. 605–614. PMLR, 2017.
Jianfei Chen, Jun Zhu, and Le Song. Stochastic training of graph convolutional networks with variance reduction.
arXiv preprint arXiv:1710.10568, 2017.
Jie Chen, Tengfei Ma, and Cao Xiao. FastGCN: fast learning with graph convolutional networks via importance sampling. arXiv preprint arXiv:1801.10247, 2018.
Sitan Chen, Adam R. Klivans, and Raghu Meka. Learning deep relu networks is fixed-parameter tractable, 2020a.
Zhengdao Chen, Xiang Li, and Joan Bruna. Supervised community detection with line graph neural networks, 2020b.
Mikhail Drobyshevskiy and Denis Turdakov. Random graph modeling: A survey of the concepts. ACM Comput. Surv., 52(6):1–36, December 2019.
Hongyang Gao, Zhengyang Wang, and Shuiwang Ji. Large-scale learnable graph convolutional networks. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1416–1424, 2018.
Vikas Garg, Stefanie Jegelka, and Tommi Jaakkola. Generalization and representational limits of graph neural networks. In International Conference on Machine Learning, pp. 3419–3430. PMLR, 2020.
Rong Ge, Jason D. Lee, and Tengyu Ma. Learning one-hidden-layer neural networks with landscape design. CoRR, abs/1711.00501, 2017.
Surbhi Goel, Aravind Gollakota, Zhihan Jin, Sushrut Karmalkar, and Adam Klivans. Superpolynomial lower bounds for learning one-layer neural networks using gradient descent. In International Conference on Machine Learning, pp. 3587–3596. PMLR, 2020.
Paul W Holland, Kathryn Blackmond Laskey, and Samuel Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
Tatsuro Kawamoto, Masashi Tsubaki, and Tomoyuki Obuchi. Mean-field theory of graph neural networks in graph partitioning. Journal of Statistical Mechanics: Theory and Experiment, 2019(12):124007, December 2019. doi: 10.1088/1742-5468/ab3456. URL https://doi.org/10.1088/1742-5468/ab3456.
Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks, 2017.
R. Kumar, P.
Raghavan, S. Rajagopalan, D. Sivakumar, A. Tomkins, and E. Upfal. Stochastic models for the web graph. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pp. 57–65, 2000. doi: 10.1109/SFCS.2000.892065.
Qimai Li, Zhichao Han, and Xiao-Ming Wu. Deeper insights into graph convolutional networks for semi-supervised learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.
Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 4602–4609, 2019.
Samantha Petti and Santosh S. Vempala. Approximating sparse graphs: The random overlapping communities model, 2018.
Ryoma Sato. A survey on the expressive power of graph neural networks, 2020.
Ryoma Sato, Makoto Yamada, and Hisashi Kashima. Approximation ratios of graph neural networks for combinatorial problems. arXiv preprint arXiv:1905.10261, 2019.
Santosh Vempala and John Wilmes. Gradient descent for one-hidden-layer neural networks: Polynomial convergence and SQ lower bounds. In Conference on Learning Theory, pp. 3115–3117. PMLR, 2019.
Z. Wu, S. Pan, F. Chen, G. Long, C. Zhang, and P. S. Yu. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 32(1):4–24, 2021.
Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018.
Prateek Yadav, Madhav Nimishakavi, Naganand Yadati, Shikhar Vashishth, Arun Rajkumar, and Partha Talukdar. Lovasz convolutional networks. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1978–1987. PMLR, 2019.
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals.
Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530, 2016.
Xiao Zhang, Yaodong Yu, Lingxiao Wang, and Quanquan Gu. Learning one-hidden-layer relu networks via gradient descent. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1524–1534. PMLR, 2019a.
Yingxue Zhang, Soumyasundar Pal, Mark Coates, and Deniz Ustebay. Bayesian graph convolutional neural networks for semi-supervised classification. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):5829–5836, 2019b.
Jie Zhou, Ganqu Cui, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. Graph neural networks: A review of methods and applications. arXiv preprint arXiv:1812.08434, 2018.

A CONCENTRATION AND SEPARATION OF OUTPUT

Let $N(x, 0)$, $N(x, 1)$ denote the neighborhoods of vertex $x$ with labels 0 and 1 respectively, i.e., $N(x, 0) = \{y \in G : y \sim x, \ell(y) = 0\}$ and $N(x, 1) = \{y \in G : y \sim x, \ell(y) = 1\}$. By the definition of the SBM, both $|N(x, 0)|$ and $|N(x, 1)|$ are binomial random variables: for $\ell(x) = 0$, $|N(x, 0)| \sim B(\frac{n}{2}, p)$ and $|N(x, 1)| \sim B(\frac{n}{2}, q)$, while for $\ell(x) = 1$, $|N(x, 0)| \sim B(\frac{n}{2}, q)$ and $|N(x, 1)| \sim B(\frac{n}{2}, p)$. Moreover, $t^y_0$ and $t^y_1$ are also binomial random variables: for $\ell(y) = 0$, $t^y_0 \sim B(\frac{n\lambda}{2}, p)$ and $t^y_1 \sim B(\frac{n\lambda}{2}, q)$, and similarly for $\ell(y) = 1$. Our following analysis is conditioned on $|N(x, 0)|$, $|N(x, 1)|$, $t^x_0$ and $t^x_1$ being in their high-probability range for all $x \in G$. Specifically, we require that for all $x$ with $\ell(x) = 0$ (the analogous conditions for $\ell(x) = 1$ are omitted):
$$\Big|\,|N(x, 0)| - \frac{np}{2}\Big| \le O\big((np)^{5/6}\big), \qquad \Big|\,|N(x, 1)| - \frac{nq}{2}\Big| \le O\big((nq)^{5/6}\big); \qquad \text{(Cond)}$$
$$\Big|t^x_0 - \frac{n\lambda p}{2}\Big| \le O\big((np)^{5/6}\big), \qquad \Big|t^x_1 - \frac{n\lambda q}{2}\Big| \le O\big((nq)^{5/6}\big).$$
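The condition (Cond) can be checked empirically in the sparse regime $p \sim \log^3 n / n$ (our own sketch; the constant hidden in the $O(\cdot)$ is taken to be 1, and all numbers are illustrative):

```python
import math
import random

# A Binomial(n/2, p) neighbour count deviates from its mean np/2 by far
# less than (np)^(5/6) with overwhelming probability when p = log^3(n)/n:
# the standard deviation is ~sqrt(np/2), much smaller than the slack.
rng = random.Random(1)
n = 2000
p = (math.log(n) ** 3) / n
mean = n * p / 2
slack = (n * p) ** (5.0 / 6.0)
deviations = []
for _ in range(500):
    count = sum(rng.random() < p for _ in range(n // 2))
    deviations.append(abs(count - mean))
max_dev = max(deviations)  # largest observed deviation over 500 samples
```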
By tail bounds for binomial random variables and a union bound, we have $P[\text{(Cond)}] \ge 1 - \frac{1}{n^2}$. Under this condition, we show the concentration of $\Delta_i$ for each type.

A.1 "GOOD TYPE" NEURONS

For convenience, according to the activation pattern of $\phi(\alpha_i t^y_0 + \alpha'_i t^y_1)$, we further divide (G2) into subcases (G2,1), (G2,2) and (G2,3) by the ratio $\big|\frac{\alpha_i}{\alpha'_i}\big|$. For example, in (G1) and (G2,1), $\phi(\alpha_i t^y_0 + \alpha'_i t^y_1)$ is active for both $\ell(y) = 0$ and $\ell(y) = 1$; in (G2,2), it is only active for $\ell(y) = 0$.
$$\Big|\frac{\alpha_i}{\alpha'_i}\Big| > \frac{p}{q}(1 + \log^{-1/3} n) \quad \text{(G2,1)}$$
$$1 < \Big|\frac{\alpha_i}{\alpha'_i}\Big| < \frac{p}{q}(1 - \log^{-1/3} n) \quad \text{(G2,2)}$$
$$\frac{p}{q}(1 - \log^{-1/3} n) \le \Big|\frac{\alpha_i}{\alpha'_i}\Big| \le \frac{p}{q}(1 + \log^{-1/3} n) \quad \text{(G2,3)}$$
We have the following estimates of $\Delta_i$ for the "good type".

Theorem A.1 (concentration of output from "good type" neurons). If the $i$-th neuron is of "good type", then
• in both (G1) and (G2,1):
$$P\Big[\big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)}{(p+q)^2}[(p^2 + q^2)\alpha_i + 2pq\,\alpha'_i]\big| \le (\alpha_i - \alpha'_i)(\beta_i - \beta'_i)\, O(\log^{-1/2} n) \,\big|\, \ell(x) = 0\Big] \ge 1 - O\big(\tfrac{1}{n^2}\big),$$
$$P\Big[\big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)}{(p+q)^2}[2pq\,\alpha_i + (p^2 + q^2)\alpha'_i]\big| \le (\alpha_i - \alpha'_i)(\beta_i - \beta'_i)\, O(\log^{-1/2} n) \,\big|\, \ell(x) = 1\Big] \ge 1 - O\big(\tfrac{1}{n^2}\big);$$
• in (G2,2) and (G2,3):
$$P\Big[\big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)\, p}{(p+q)^2}(p\alpha_i + q\alpha'_i)\big| \le (\alpha_i - \alpha'_i)(\beta_i - \beta'_i)\, O(\log^{-1/2} n) \,\big|\, \ell(x) = 0\Big] \ge 1 - O\big(\tfrac{1}{n^2}\big),$$
$$P\Big[\big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)\, q}{(p+q)^2}(p\alpha_i + q\alpha'_i)\big| \le (\alpha_i - \alpha'_i)(\beta_i - \beta'_i)\, O(\log^{-1/2} n) \,\big|\, \ell(x) = 1\Big] \ge 1 - O\big(\tfrac{1}{n^2}\big).$$

Proof. We have $\Delta_i = (\beta_i - \beta'_i) \sum_{y \in G} \mathbb{1}[y \sim x]\, \frac{4\phi(\alpha_i t^y_0 + \alpha'_i t^y_1)}{n^2(p+q)^2}$. We apply the method of averaged bounded differences to estimate $\Delta_i$.
In different parameter regimes, $\phi(\alpha_i t^y_0 + \alpha'_i t^y_1)$ has different activation patterns. In (G1) and (G2,1), $\phi(\alpha_i t^y_0 + \alpha'_i t^y_1)$ is active with probability $1 - O(\frac{1}{n^2})$ for both $\ell(y) = 0$ and $\ell(y) = 1$. Consider $\ell(x) = 0$. We first estimate $E[\Delta_i]$. By condition (Cond),
$$\Big|E[\Delta_i] - \frac{\lambda(\beta_i - \beta'_i)}{(p+q)^2}\big[(p^2 + q^2)\alpha_i + 2pq\,\alpha'_i\big]\Big| \le (\alpha_i - \alpha'_i)(\beta_i - \beta'_i)\, O(\log^{-1} n).$$
Let $Y_j = \frac{4\phi(\alpha_i t^{y_j}_0 + \alpha'_i t^{y_j}_1)}{n^2(p+q)^2}$, so that $\Delta_i = (\beta_i - \beta'_i)\sum_j Y_j$. Under condition (Cond), $\big|Y_j - \frac{2\lambda(p\alpha_i + q\alpha'_i)}{n(p+q)^2}\big| \le (\alpha_i - \alpha'_i)\, O(\log^{-7/2} n)$ for $\ell(y_j) = 0$. For any $a_k, a'_k$,
$$\big|E[Y_k \mid Y_1, \dots, Y_{k-1}, Y_k = a_k] - E[Y_k \mid Y_1, \dots, Y_{k-1}, Y_k = a'_k]\big| \le (\alpha_i - \alpha'_i)\, O(\log^{-7/2} n).$$
Moreover, when the number of vertices with revealed labels is fixed,
$$\big|E[Y_j \mid Y_1, \dots, Y_{k-1}, Y_k = a_k] - E[Y_j \mid Y_1, \dots, Y_{k-1}, Y_k = a'_k]\big| \le (\alpha_i - \alpha'_i)\, O(\log^{-6} n) \quad \text{for } j \ge k.$$
By condition (Cond), there are at most $O(\log^3 n)$ non-zero terms among the $Y_k$, $1 \le k \le n$. So
$$\Big|E\big[\textstyle\sum_j Y_j \mid Y_1, \dots, Y_{k-1}, Y_k = a_k\big] - E\big[\textstyle\sum_j Y_j \mid Y_1, \dots, Y_{k-1}, Y_k = a'_k\big]\Big| \le (\alpha_i - \alpha'_i)\, O(\log^{-7/2} n)$$
for $1 \le k \le n$. By the method of averaged bounded differences, we have
$$P\Big[\big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)}{(p+q)^2}[(p^2 + q^2)\alpha_i + 2pq\,\alpha'_i]\big| \le (\alpha_i - \alpha'_i)(\beta_i - \beta'_i)\, O(\log^{-1/2} n) \,\big|\, \ell(x) = 0\Big] \ge 1 - O\big(\tfrac{1}{n^2}\big).$$
The other regimes are proved similarly.

A.2 "BAD TYPE" NEURONS

For convenience of our analysis, we further divide (B2) into subcases (B2,1), (B2,2) and (B2,3) according to the ratio $\big|\frac{\alpha_i}{\alpha'_i}\big|$:
$$\Big|\frac{\alpha_i}{\alpha'_i}\Big| > \frac{p}{q}(1 + \log^{-1/3} n) \quad \text{(B2,1)}$$
$$\Big|\frac{\alpha_i}{\alpha'_i}\Big| \in \Big(\frac{q}{p}(1 + \log^{-1/3} n),\; \frac{p}{q}(1 - \log^{-1/3} n)\Big] \quad \text{(B2,2)}$$
$$\Big|\frac{\alpha_i}{\alpha'_i}\Big| \in \Big(\frac{p}{q}(1 - \log^{-1/3} n),\; \frac{p}{q}(1 + \log^{-1/3} n)\Big] \quad \text{(B2,3)}$$
We have the following estimates of $\Delta_i$ for the "bad type".

Theorem A.2 (concentration of output from "bad type" neurons).
If the $i$-th neuron is of "bad type", we have:
• in (B1) and (B2,1):
$$P\Big[\big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)}{(p+q)^2}[(p^2 + q^2)\alpha_i + 2pq\,\alpha'_i]\big| \le |\alpha_i - \alpha'_i||\beta_i - \beta'_i|\, O(\log^{-1/2} n) \,\big|\, \ell(x) = 0\Big] \ge 1 - O\big(\tfrac{1}{n^2}\big),$$
$$P\Big[\big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)}{(p+q)^2}[2pq\,\alpha_i + (p^2 + q^2)\alpha'_i]\big| \le |\alpha_i - \alpha'_i||\beta_i - \beta'_i|\, O(\log^{-1/2} n) \,\big|\, \ell(x) = 1\Big] \ge 1 - O\big(\tfrac{1}{n^2}\big);$$
• in (B2,2) and (B2,3):
$$P\Big[\big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)\, p}{(p+q)^2}(p\alpha_i + q\alpha'_i)\big| \le |\alpha_i - \alpha'_i||\beta_i - \beta'_i|\, O(\log^{-1/2} n) \,\big|\, \ell(x) = 0\Big] \ge 1 - O\big(\tfrac{1}{n^2}\big),$$
$$P\Big[\big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)\, q}{(p+q)^2}(p\alpha_i + q\alpha'_i)\big| \le |\alpha_i - \alpha'_i||\beta_i - \beta'_i|\, O(\log^{-1/2} n) \,\big|\, \ell(x) = 1\Big] \ge 1 - O\big(\tfrac{1}{n^2}\big);$$
• in (B3):
$$P\Big[|\Delta_i| \le |\alpha_i - \alpha'_i||\beta_i - \beta'_i|\, O(\log^{-1/2} n) \,\big|\, \ell(x) = 0 \text{ or } 1\Big] \ge 1 - O\big(\tfrac{1}{n^2}\big).$$

Proof. The proof is similar to that of Theorem A.1.

A.3 "HARMLESS TYPE" NEURONS

We have the following estimates of $\Delta_i$ for the "harmless type".

Theorem A.3 (concentration of output from "harmless type" neurons). If the $i$-th neuron is of "harmless type", we have:
• in (H1):
$$P\Big[\big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)\, p}{(p+q)^2}(p\alpha_i + q\alpha'_i)\big| \le (\alpha_i - \alpha'_i)(\beta_i - \beta'_i)\, O(\log^{-1/2} n) \,\big|\, \ell(x) = 0\Big] \ge 1 - O\big(\tfrac{1}{n^2}\big),$$
$$P\Big[\big|\Delta_i - \tfrac{\lambda(\beta_i - \beta'_i)\, q}{(p+q)^2}(p\alpha_i + q\alpha'_i)\big| \le (\alpha_i - \alpha'_i)(\beta_i - \beta'_i)\, O(\log^{-1/2} n) \,\big|\, \ell(x) = 1\Big] \ge 1 - O\big(\tfrac{1}{n^2}\big);$$
• in (H2): $\Delta_i = 0$ for both $\ell(x) = 0$ and $\ell(x) = 1$.

A.4 SEPARATION OF OUTPUT

The previous subsections have shown the concentration of $\Delta_i$ for each type of neuron. For the $i$-th neuron, we write the concentrated value as $m^i_0$ if $\ell(x) = 0$ and $m^i_1$ if $\ell(x) = 1$. From Theorems A.1, A.2 and A.3, we obtain the following result about the value of $m^i_0 - m^i_1$ by straightforward computation.

Corollary A.4.
We have the following result about mi0 −mi1 for 1 ≤ i ≤ h : • if the i-the neuron is of “ good type ” : in ( G1 ) and ( G2,1 ) : mi0 −mi1 = λ|αi − α′i||βi − β′i| ( p− q p+ q ) 2 in ( G2,2 ) : mi0 −mi1 = λ|βi − β′i||pαi + qα′i| p− q ( p+ q ) 2 ≥ λ 2 |αi − α′i||βi − β′i| ( p− q p+ q ) 2 in ( G2,3 ) : mi0 −mi1 = λ|βi − β′i||pαi + qα′i| p− q ( p+ q ) 2 ≥ λ|αi − α′i||βi − β′i| ( p− q ) Λ3 ( p+ q ) 2 , where Λ3 = ( 1−log− 1 3 n ) p2−q2 ( 1−log− 1 3 n ) p+q . • if the i-the neuron is of “ bad type ” : in ( B1 ) and ( B2,1 ) : mi0 −mi1 = −λ|αi − α′i||βi − β′i| ( p− q p+ q ) 2 in ( B2,2 ) : mi0 −mi1 = −λ|βi − β′i||pαi + qα′i| p− q ( p+ q ) 2 ≥ −λ|αi − α′i||βi − β′i| ( p− q ) Λ3 ( p+ q ) 2 in ( B2,3 ) : mi0 −mi1 = −λ|βi − β′i||pαi + qα′i| p− q ( p+ q ) 2 ≥ −λ|αi − α′i||βi − β′i| ( p− q ) Λ1 ( p+ q ) 2 in ( B3 ) : mi0 −mi1 = 0 , where Λ1 = ( 1+log− 1 3 n ) p2−q2 ( 1+log− 1 3 n ) p+q , Λ3 = ( 1−log− 1 3 n ) p2−q2 ( 1−log− 1 3 n ) p+q . • if the i-the neuron is of “ harmless type ” : in ( H1 ) : mi0 −mi1 = λ|βi − β′i||pαi + qα′i| p− q ( p+ q ) 2 ≥ λ|αi − α′i||βi − β′i| ( p− q ) Λ5 ( p+ q ) 2 in ( H2 ) : mi0 −mi1 = 0 , where Λ5 = pq log − 1 3 p+ ( 1+log− 1 3 ) q . B DYNAMICS OF PARAMETERS We consider the cross-entropy loss in training . The loss on a particular vertex x is L ( x ) = − logOℓ ( x ) ( x ) , where O0 ( x ) and O1 ( x ) are the first and second component of the output respectively , i.e . O0 ( x ) = exp ( g0 ( x ) ) exp ( g0 ( x ) ) + exp ( g1 ( x ) ) , O1 ( x ) = exp ( g1 ( x ) ) exp ( g0 ( x ) ) + exp ( g1 ( x ) ) . For a given graph G generated by SBM , we set the objective function L ( G ) as the average loss over all the vertices with revealed labels2 , i.e . L ( G ) = 1 # { x ∈ G : ℓ ( x ) is revealed } ∑ x : ℓ ( x ) revealed L ( x ) . We first show the partial derivatives of parameters . Theorem B.1 ( derivatives of parameters ) . 
For 1 ≤ i ≤ h , let x be a vertex , ℓ ( x ) its true label , L ( x ) = − logOℓ ( x ) ( x ) , then ∂L ∂αi = 4 n2 ( p+ q ) 2 ( βi − β′i ) Z ( −1 ) 1−ℓ ( x ) ∑ y 1 [ y ∼ x ] 1 [ αity0 + α′it y 1 ≥ 0 ] t y 0 ∂L ∂α′i = 4 n2 ( p+ q ) 2 ( βi − β′i ) Z ( −1 ) 1−ℓ ( x ) ∑ y 1 [ y ∼ x ] 1 [ αity0 + α′it y 1 ≥ 0 ] t y 1 ∂L ∂βi = 4 n2 ( p+ q ) 2 Z ( −1 ) 1−ℓ ( x ) ∑ y 1 [ y ∼ x ] ϕ ( αity0 + α′it y 1 ) ∂L ∂β′i = − 4 n2 ( p+ q ) 2 Z ( −1 ) 1−ℓ ( x ) ∑ y 1 [ y ∼ x ] ϕ ( αity0 + α′it y 1 ) ∂L ∂b0 = ( −1 ) 1−ℓ ( x ) Z ∂L ∂b1 = ( −1 ) ℓ ( x ) Z , where Z = exp ( g1−ℓ ( x ) ( x ) ) exp ( g0 ( x ) ) +exp ( g1 ( x ) ) , t y 0 and t y 1 are the numbers of neighbors of y ( including perhaps y itself ) with revealed labels in class 0 and class 1 respectively . Proof . We compute ∂L∂αi , ∂L ∂βi and ∂L∂b0 , others can be computed symmetrically . We have L ( x ) = − logOℓ ( x ) ( x ) = log ( exp ( g0 ( x ) ) + exp ( g1 ( x ) ) ) − gℓ ( x ) ( x ) , since Oj ( x ) = exp ( gj ( x ) ) exp ( g0 ( x ) ) +exp ( g1 ( x ) ) , j = 0 , 1 . So ∂L ∂αi = eg0 ( x ) ∂g0 ( x ) ∂αi + e g1 ( x ) ∂g1 ( x ) ∂αi eg0 ( x ) + eg1 ( x ) − ∂gℓ ( x ) ( x ) ∂αi = ( −1 ) 1−ℓ ( x ) Z ( ∂g0 ( x ) ∂αi − ∂g1 ( x ) ∂αi ) . Since gj ( x ) = fj ( x ) + bj , ∂gj ( x ) ∂αi = ∂fj ( x ) ∂αi , j = 0 , 1 . By ( 2 ) ∂f0 ( x ) ∂αi = 4 n2 ( p+ q ) 2 ∑ y 1 [ y ∼ x ] βi1 [ αity0 + α′it y 1 ≥ 0 ] t y 0 ∂f1 ( x ) ∂αi = 4 n2 ( p+ q ) 2 ∑ y 1 [ y ∼ x ] β′i1 [ αit y 0 + α ′ it y 1 ≥ 0 ] t y 0 . Therefore ∂L ∂αi = 4 n2 ( p+ q ) 2 ( −1 ) 1−ℓ ( x ) Z ( βi − β′i ) ∑ y 1 [ y ∼ x ] 1 [ αity0 + α′it y 1 ≥ 0 ] t y 0 . 2We abuse the notation L for L ( x ) and L ( G ) , but the meaning is clear from the context . Next we compute ∂L∂βi . Similar as above , ∂L ∂βi = ( −1 ) 1−ℓ ( x ) Z ( ∂f0 ( x ) ∂βi − ∂f1 ( x ) ∂βi ) . By ( 2 ) ∂f0 ( x ) ∂βi = 4 n2 ( p+ q ) 2 ∑ y 1 [ y ∼ x ] ϕ ( αity0 + α′it y 1 ) ∂f1 ( x ) ∂αi = 0 . So ∂L ∂βi = 4 n2 ( p+ q ) 2 ( −1 ) 1−ℓ ( x ) Z ∑ y 1 [ y ∼ x ] ϕ ( αity0 + α′it y 1 ) . 
Lastly , ∂L ∂b0 = ( −1 ) 1−ℓ ( x ) Z ( ∂g0 ( x ) ∂b0 − ∂g1 ( x ) ∂b0 ) = ( −1 ) 1−ℓ ( x ) Z , since ∂g0 ( x ) ∂b0 = 1 , ∂g1 ( x ) ∂b0 = 0 . In the following , we will use Theorem B.1 to analyze the dynamics of neurons of each type . As we can see , all of ∂L∂αi , ∂L ∂α′i , ∂L∂βi and ∂L ∂β′i have the form Y Z . In order to estimate these derivatives , we show the concentration of Y and Z respectively . To estimate the concentration of Z , we need the concentration of output obtained in Section 4 . For any ϵ > 0 , we require the probability of concentration in ( 3 ) to be at least 1− ϵ̃ , where ϵ̃ = o ( ϵ ) . In particular , if we choose ϵ̃ = ϵ2 , then we set 1−O ( 1n ) ≥ 1− ϵ 2 , i.e . n ≥ Ω ( 1 ϵ ) 2 . ( 5 ) Our following analysis will be based on this condition . Meanwhile in order to have balanced performance in each epoch of coordinate descent , we require∣∣E [ ∂L∂b0 ] ∣∣ < ϵ̃2 . Since E [ ∂L∂b0 ] = 12 ( −E [ Z|ℓ ( x ) = 0 ] + E [ Z|ℓ ( x ) = 1 ] ) ) , we have∣∣E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ∣∣ < ϵ̃ . ( 6 ) We have the following relation between µ0 and µ1 . In the following , σ represents the sigmoid function : σ ( x ) = 11+e−x . Proposition B.2 . If ∣∣E [ Z|ℓ ( x ) = 0 ] −E [ Z|ℓ ( x ) = 1 ] ∣∣ < ϵ̃ , then |σ ( −µ0 ) −σ ( µ1 ) | ≤ σ′ ( µ0− δ ) δ+ σ′ ( µ1 + δ ) δ + 3ϵ̃ , where δ is as shown in Corollary 4.2 . Proof . We have Z = { σ ( −∆ ) , ℓ ( x ) = 0 σ ( ∆ ) , ℓ ( x ) = 1 For ℓ ( x ) = 0 , by Lagrange mean value theorem , |σ ( −µ0 ) −Z| = |σ ( −µ0 ) − σ ( −∆ ) | = σ′ ( ξ ) ( ∆− µ0 ) , where ξ is between −µ0 and −∆ . By Corollary 4.2 and the condition of n , |∆− µ0| ≤ δ with probability ≥ 1 − ϵ̃ . From the remark following Corollary A.4 , we have σ′ ( ξ ) ≤ σ′ ( −µ0 + δ ) = σ′ ( µ0 − δ ) ,3 with probability ≥ 1− ϵ̃ . Then we have P [ |σ ( −µ0 ) − Z| ≤ σ′ ( µ0 − δ ) δ|ℓ ( x ) = 0 ] ≥ 1− ϵ̃ . 
Since E [ Z|ℓ ( x ) = 0 ] = E [ Z||σ ( −µ0 ) − Z| ≤ σ′ ( µ0 − δ ) δ ] P [ |σ ( −µ0 ) − Z| ≤ σ′ ( µ0 − δ ) δ ] + E [ Z||σ ( −µ0 ) − Z| > σ′ ( µ0 − δ ) δ ] P [ |σ ( −µ0 ) − Z| > σ′ ( µ0 − δ ) δ ] , 3σ′ ( x ) is even . then ( note that 0 < Z < 1 ) E [ Z|ℓ ( x ) = 0 ] ≤ σ ( −µ0 ) + σ′ ( µ0 − δ ) δ + ϵ̃ and E [ Z|ℓ ( x ) = 0 ] ≥ ( σ ( −µ0 ) − σ′ ( µ0 − δ ) δ ) ( 1− ϵ̃ ) , i.e . E [ Z|ℓ ( x ) = 0 ] − σ ( −µ0 ) ≤ σ′ ( µ0 − δ ) δ + ϵ̃ and E [ Z|ℓ ( x ) = 0 ] − σ ( −µ0 ) ≥ −σ′ ( µ0 − δ ) δ − ϵ̃ ( σ ( −µ0 ) − σ′ ( µ0 − δ ) δ ) = −σ′ ( µ0 − δ ) δ ( 1− ϵ̃ ) − ϵ̃σ ( −µ0 ) ≥ −σ′ ( µ0 − δ ) δ − ϵ̃ . So ∣∣E [ Z|ℓ ( x ) = 0 ] − σ ( −µ0 ) ∣∣ ≤ σ′ ( µ0 − δ ) δ + ϵ̃ . Similarly ∣∣E [ Z|ℓ ( x ) = 1 ] − σ ( µ1 ) ∣∣ ≤ σ′ ( µ1 + δ ) δ + ϵ̃ . By triangle inequality , ∣∣σ ( −µ0 ) − σ ( µ1 ) ∣∣ ≤ ∣∣σ ( −µ0 ) − E [ Z|ℓ ( x ) = 0 ] ∣∣+ ∣∣E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ∣∣ + ∣∣E [ Z|ℓ ( x ) = 1 ] − σ ( µ1 ) ∣∣ ≤ σ′ ( µ0 − δ ) δ + σ′ ( µ1 + δ ) δ + 3ϵ̃ . From the proof above , we can directly obtain the following corollary about Z. Corollary B.3 . P [ ∣∣Z − E [ Z|ℓ ( x ) = 0 ] ∣∣ ≤ 2σ′ ( µ0 − δ ) δ + ϵ̃|ℓ ( x ) = 0 ] ≥ 1− ϵ̃ P [ ∣∣Z − E [ Z|ℓ ( x ) = 1 ] ∣∣ ≤ 2σ′ ( µ1 + δ ) δ + ϵ̃|ℓ ( x ) = 1 ] ≥ 1− ϵ̃ . In order to obtain the concentration of Z , we need to estimate σ′ ( µ0 − δ ) δ and σ′ ( µ1 + δ ) δ . The following proposition is based on the condition that |µ0 + µ1| ≥ 4δ . If |µ0 + µ1| < 4δ , set c : = µ0 − µ1 , we have µ0 > c2 − 2δ and µ1 < − c 2 + 2δ . Then the concentration of output shown in Corollary 4.2 can guarantee 1− ϵ accuracy of the model for any ϵ > 0 . In fact , from |∆− µ0| < δ , we have ∆ > µ0 − δ > c2 − 3δ > 0 , due to δ = o ( c ) . So P [ ∆ > 0|ℓ ( x ) = 0 ] ≥ P [ |∆− µ0| < δ|ℓ ( x ) = 0 ] ≥ 1− ϵ̃ . Similarly , P [ ∆ < 0|ℓ ( x ) = 1 ] ≥ P [ |∆− µ0| < δ|ℓ ( x ) = 1 ] ≥ 1− ϵ̃ . Since ϵ̃ = o ( ϵ ) , the model achieves overall accuracy ≥ 1− ϵ . Proposition B.4 . If |µ0 + µ1| ≥ 4δ , then σ′ ( µ0 − δ ) δ = O ( ϵ̃ ) , σ′ ( µ1 + δ ) δ = O ( ϵ̃ ) . Proof . 
First , we estimate the lower bound of |σ ( −µ0 ) − σ ( µ1 ) | via the Fundamental Theorem of Calculus . We have |σ ( −µ0 ) − σ ( µ1 ) | = ∣∣ ∫ µ1 −µ0 σ ′ ( t ) dt ∣∣ . If −µ0 < µ1 < 0 , since µ0 + µ1 ≥ 4δ , we divide the interval [ −µ0 , µ1 ] into [ −µ0 , −µ0 + 2δ ] ∪ [ −µ0 + 2δ , µ1 − 2δ ] ∪ [ µ1 − 2δ , µ1 ] and estimate the lower bound of the integral . Since σ′ ( x ) is increasing on ( −∞ , 0 ] , we have∫ µ1 −µ0 σ′ ( t ) dt ≥ σ′ ( −µ0 ) · 2δ + I1 + σ′ ( µ1 − 2δ ) · 2δ , ( 7 ) where I1 = ∫ µ1−2δ −µ0+2δ σ ′ ( t ) dt . If µ1 < −µ0 < 0 , similarly we have∫ −µ0 µ1 σ′ ( t ) dt ≥ σ′ ( µ1 ) · 2δ + I2 + σ′ ( −µ0 − 2δ ) · 2δ , ( 8 ) where I2 = ∫ −µ0−2δ µ1+2δ σ′ ( t ) dt . We have a uniform lower bound from ( 7 ) and ( 8 ) : ∣∣∣∣ ∫ µ1 −µ0 σ′ ( t ) dt ∣∣∣∣ ≥ σ′ ( −µ0 − 2δ ) · 2δ + σ′ ( µ1 − 2δ ) · 2δ + I , ( 9 ) where I = min { I1 , I2 } . Furthermore , by Proposition B.2 , |σ ( −µ0 ) − σ ( µ1 ) | ≤ σ′ ( µ0 − δ ) δ + σ′ ( µ1 + δ ) δ + 3ϵ̃ . ( 10 ) Combine ( 9 ) and ( 10 ) : 2σ′ ( −µ0 − 2δ ) δ + 2σ′ ( µ1 − 2δ ) δ ≤ σ′ ( −µ0 + δ ) δ + σ′ ( µ1 + δ ) δ + 3ϵ̃ . ( 11 ) By Lagrange mean value theorem , σ′ ( −µ0 − 2δ ) = σ′ ( −µ0 + δ ) − 3σ′′ ( ξ0 ) δ σ′ ( µ1 − 2δ ) = σ′ ( µ1 + δ ) − 3σ′′ ( ξ1 ) δ , where ξ0 ∈ ( −µ0 − 2δ , −µ0 + δ ) , ξ1 ∈ ( µ1 − 2δ , µ1 + δ ) . Plug these into ( 11 ) : σ′ ( µ0 − δ ) δ + σ′ ( µ1 + δ ) δ − 6δ2 ( σ′′ ( ξ0 ) + σ′′ ( ξ1 ) ) ≤ 3ϵ̃ . Since δ2 ( σ′′ ( ξ0 ) + σ′′ ( ξ1 ) ) = o ( σ′ ( µ0 − δ ) δ ) and o ( σ′ ( µ1 + δ ) δ ) , we have σ′ ( µ0 − δ ) δ = O ( ϵ̃ ) σ′ ( µ1 + δ ) δ = O ( ϵ̃ ) . Combine Proposition B.4 and Corollary B.3 , we have the following concentration of Z . Proposition B.5 . P [ ∣∣Z − E [ Z|ℓ ( x ) = 0 ] ∣∣ ≤ O ( ϵ̃ ) |ℓ ( x ) = 0 ] ≥ 1− ϵ̃ P [ ∣∣Z − E [ Z|ℓ ( x ) = 1 ] ∣∣ ≤ O ( ϵ̃ ) |ℓ ( x ) = 1 ] ≥ 1− ϵ̃ . Under the condition of balanced performance , we have the following corollary about the concentration of Z independent of the label of x. Corollary B.6 . If ∣∣E [ ∂L∂b0 ] ∣∣ ≤ ϵ̃2 , then P [ ∣∣Z − E [ Z ] ∣∣ ≤ O ( ϵ̃ ) ] ≥ 1− ϵ̃ . 
Proof . Since E [ ∂L∂b0 ] = 1 2 ( E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ) , we have∣∣E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ∣∣ ≤ ϵ̃ . On the other hand , E [ Z ] = 1 2 ( E [ Z|ℓ ( x ) = 0 ] + E [ Z|ℓ ( x ) = 1 ] ) . So we have ∣∣E [ Z ] − E [ Z|ℓ ( x ) = 0 ] ∣∣ ≤ ϵ̃2 . By Proposition B.5 , P [ |Z − E [ Z ] | ≤ O ( ϵ̃ ) ] ≥ 1− ϵ̃ . Now we can derive the estimation of the derivatives . Theorem B.7 ( concentration of derivatives ) . For loss on the whole graph L = L ( G ) , with probability ≥ 1−O ( 1n ) , we have 4 1 . If αi > α′i > 0 or αi > 0 > α ′ i , |αiα′i | ≥ p q ( 1 + log − 13 n ) , then∣∣∣∣ ∂L∂αi + ( βi − β′i ) λ2 ( p− q p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |βi − β′i|E [ Z ] O ( log− 12 n ) ( 12 ) ∣∣∣∣ ∂L∂α′i − ( βi − β′i ) λ2 ( p− q p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |βi − β′i|E [ Z ] O ( log− 12 n ) ( 13 ) ∣∣∣∣ ∂L∂βi + ( αi − α′i ) λ2 ( p− q p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |αi − α′i|E [ Z ] O ( log− 12 n ) . ( 14 ) 4Since ∂L ∂β′i = − ∂L ∂βi ( see Theorem B.1 ) , we only need to estimate ∂L ∂βi . 2 . If αi > 0 > α′i , |αiα′i | ∈ [ q p ( 1 + γ ) , p q ( 1 − log − 13 n ) ] , where γ ∈ [ log− 1 3 n , ( pq ) 2 ( 1 − log− 1 3 n ) − 1 ] , then∣∣∣∣ ∂L∂αi + ( βi − β′i ) λp ( p− q ) 2 ( p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |βi − β′i|E [ Z ] O ( log− 12 n ) ( 15 ) ∣∣∣∣ ∂L∂α′i + ( βi − β′i ) λq ( p− q ) 2 ( p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |βi − β′i|E [ Z ] O ( log− 12 n ) ( 16 ) ∣∣∣∣ ∂L∂βi + λ ( p− q ) ( pαi + qα ′ i ) 2 ( p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |αi − α′i|E [ Z ] O ( log− 12 n ) . ( 17 ) 3 . 
If αi > 0 > α′i , |αiα′i | ∈ ( p q ( 1− log − 13 n ) , pq ( 1 + log − 13 n ) ) , then ∂L ∂βi ∈ [ − ( αi − α′i ) E [ Z ] ( λ ( p− q ) Λ1 2 ( p+ q ) 2 +O ( log− 1 2 n ) ) , − ( αi − α′i ) E [ Z ] ( λ ( p− q ) ( Λ3 − Λ2 ) 2 ( p+ q ) 2 −O ( log− 1 2 n ) ) ] , ( 18 ) where Λ1 = ( 1+log− 1 3 n ) p2−q2 ( 1+log− 1 3 n ) p+q , Λ2 = pq log− 1 3 n ( 1+log− 1 3 n ) p+q and Λ3 = ( 1−log− 1 3 n ) p2−q2 ( 1−log− 1 3 n ) p+q ; • if βi > β′i , ∂L ∂αi ∈ [ − ( βi − β′i ) E [ Z ] ( λp ( p− q ) 2 ( p+ q ) 2 +O ( log− 1 2 n ) ) , − ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 −O ( log− 1 2 n ) ) ] ( 19 ) ∂L ∂α′i ∈ [ − ( βi − β′i ) E [ Z ] ( λq ( p− q ) 2 ( p+ q ) 2 +O ( log− 1 2 n ) ) , ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 +O ( log− 1 2 n ) ) ] ( 20 ) ∂L ∂αi − ∂L ∂α′i ∈ [ − ( βi − β′i ) E [ Z ] ( λ ( p− q p+ q ) 2 +O ( log− 1 2 n ) ) , − ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 −O ( log− 1 2 n ) ) ] , ( 21 ) • if βi ≤ β′i , ∂L ∂αi ∈ [ − ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 −O ( log− 1 2 n ) ) , − ( βi − β′i ) E [ Z ] ( λp ( p− q ) 2 ( p+ q ) 2 +O ( log− 1 2 n ) ) ] ( 22 ) ∂L ∂α′i ∈ [ − ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 +O ( log− 1 2 n ) ) , ( βi − β′i ) E [ Z ] ( λq ( p− q ) 2 ( p+ q ) 2 +O ( log− 1 2 n ) ) ] ( 23 ) ∂L ∂αi − ∂L ∂α′i ∈ [ − ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 −O ( log− 1 2 n ) ) , − ( βi − β′i ) E [ Z ] ( λ ( p− q p+ q ) 2 +O ( log− 1 2 n ) ) ] ( 24 ) 4 . If αi > 0 > α′i , ∣∣αi α′i ∣∣ ≤ qp ( 1 + log− 13 n ) , βi < β′i , then ∂L ∂αi − ∂L ∂α′i ≥ −|βi − β′i|O ( ϵ̃ ) ( 25 ) ∂L ∂βi ≤ O ( ϵ̃ ) . ( 26 ) Proof . We show the proof for item 1 , other items can be proved similarly . Since L ( G ) is the average of the losses over revealed vertices , we first show the concentration of ∂L ( x ) ∂αi , then we show the concentration of ∂L ( G ) ∂αi using union bound . 
Since ∂L ( x ) ∂αi = ( −1 ) 1−ℓ ( x ) 4 ( βi − β′i ) Z ∑ y 1 [ y ∼ x ] 1 [ αity0 + α′it y 1 ≥ 0 ] t y 0 n2 ( p+ q ) 2 , we first show the concentration of Y : = ( −1 ) 1−ℓ ( x ) ∑ y∼x 41 [ αit y 0+α ′ it y 1≥0 ] t y 0 n2 ( p+q ) 2 using the method of averaged bounded difference . Similar as the proof of Theorem A.1 , let Yj = ( −1 ) 1−ℓ ( x ) 41 [ αit yj 0 +α ′ it yj 1 ≥0 ] t yj 0 n2 ( p+q ) 2 . Based on Condition ( Cond ) , for ℓ ( x ) = 0 , |Yj + 2λp n ( p+q ) 2 | ≤ O ( log− 7 2 n ) for ℓ ( yj ) = 0 . Similar results hold for ℓ ( yj ) = 1 , ℓ ( x ) = 1 . So for any ak , a′k , ∣∣∣∣E [ ∑ j Yj |Y1 , · · · , Yk−1 , Yk = ak ] − E [ ∑ j Yj |Y1 , · · · , Yk−1 , Yk = a′k ] ∣∣∣∣ ≤ ( αi − α′i ) O ( log− 72 n ) . By method of averaged bounded difference , for ℓ ( x ) = 0 , P [ ∣∣∣∣ ∑ ℓ ( yj ) =0 Yj + λ ( p p+ q ) 2∣∣∣∣ ≤ O ( log− 12 n ) ] ≥ 1− exp ( −2 log3 n ) ≥ 1− 1n2 . Similarly P [ ∣∣∣∣ ∑ ℓ ( yj ) =1 Yj + λ ( q p+ q ) 2∣∣∣∣ ≤ O ( log− 12 n ) ] ≥ 1− 1n2 . Hence P [ ∣∣∣∣Y + λ ( p2 + q2 ) ( p+ q ) 2 ∣∣∣∣ ≤ O ( log− 12 n ) ] ≥ 1− 1n2 . By Corollary B.6 , P [ |Z − E [ Z ] | ≤ O ( ϵ̃ ) ] ≥ 1− ϵ̃ , so we have P [ ∣∣∣∣∂L ( x ) ∂αi + ( βi − β′i ) λ p 2 + q2 ( p+ q ) 2 E [ Z ] ∣∣∣∣ ≤ |βi − β′i|E [ Z ] O ( log− 12 n ) |ℓ ( x ) = 0 ] ≥ 1−O ( 1n2 ) . For ℓ ( x ) = 1 , similarly we have P [ ∣∣∣∣∂L ( x ) ∂αi − ( βi − β′i ) λ 2pq ( p+ q ) 2E [ Z ] ∣∣∣∣ ≤ |βi − β′i|E [ Z ] O ( log− 12 n ) |ℓ ( x ) = 0 ] ≥ 1−O ( 1n2 ) . By union bound , we have ( 12 ) . ( 13 ) and ( 14 ) can be proved similarly . Using Theorem B.7 , we can analyze dynamics of neurons of each type . First , we introduce some notations . Let ηk denote the learning rate at the k-th epoch , Z ( k ) be the value of Z at the k-th epoch , α ( k ) i be the value of αi at the k-th epoch , similar for α ′ ( k ) i , β ( k ) i and β ′ ( k ) i . In particular , α ( 0 ) i , α ′ ( 0 ) i , β ( 0 ) i and β ′ ( 0 ) i represent the values at initialization . 
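As a sanity check on the sign pattern ∂L/∂b₀ = (−1)^{1−ℓ(x)} Z from Theorem B.1, the identity can be verified numerically with central finite differences. This is a minimal sketch: the function names and the test logits are ours, and b₀ enters g₀ additively, so differentiating in g₀ is the same as differentiating in b₀.

```python
import math

def Z(g0, g1, label):
    """Z = exp(g_{1-l(x)}) / (exp(g0) + exp(g1))."""
    return math.exp(g1 if label == 0 else g0) / (math.exp(g0) + math.exp(g1))

def loss(g0, g1, label):
    """L(x) = -log O_{l(x)}(x) = log(exp(g0) + exp(g1)) - g_{l(x)}."""
    return math.log(math.exp(g0) + math.exp(g1)) - (g0 if label == 0 else g1)

def dL_db0(g0, g1, label, eps=1e-6):
    """Central finite difference of L in the bias b0 (b0 shifts g0)."""
    return (loss(g0 + eps, g1, label) - loss(g0 - eps, g1, label)) / (2 * eps)
```

For ℓ(x) = 0 the numerical derivative matches −Z, and for ℓ(x) = 1 it matches +Z, as the theorem states.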
B.1 “ GOOD TYPE ” NEURONS In this section , we show that “ good type ” neurons stay in the “ good type ” regime throughout coordinate descent ( Theorem B.8 ) using Theorem B.7 . Theorem B.8 . “ Good type ” neurons are preserved in the “ good type ” throughout coordinate descent with probability ≥ 1−O ( 1n2 ) over the SBM randomness . Proof . As shown in Section 4 , “ good type ” regime is composed of ( G1 ) and ( G2 ) , we show the dynamics of neurons in ( G1 ) and ( G2 ) respectively . Assume that neuron ( α ( k ) i , α ′ ( k ) i , β ( k ) i , β ′ ( k ) i ) is in ( G1 ) , we show that it either stays in ( G1 ) or moves into ( G2 ) throughout coordinate descent . In fact , by ( 14 ) , with probability ≥ 1− O ( 1n2 ) , ∂L ∂β ( k ) i < 0 < ∂L ∂β ′ ( k ) i , so β ( k+1 ) i > β ( k ) i , β ′ ( k+1 ) i < β ′ ( k ) i and hence β ( k+1 ) i − β ′ ( k+1 ) i > β ( k ) i − β ′ ( k ) i > 0 . By ( 12 ) and ( 13 ) , ∂L ∂α ( k ) i < 0 < ∂L ∂α ′ ( k ) i , so α ( k+1 ) i > α ( k ) i , α ′ ( k+1 ) i < α ′ ( k ) i . If α ′ ( k+1 ) i > 0 , this neuron stays in ( G1 ) . If α ′ ( k+1 ) i < 0 , since∣∣∣∣ α ( k+1 ) i α ′ ( k+1 ) i ∣∣∣∣ = ∣∣∣∣∣ α ( k ) i − ηk ∂L∂α ( k ) i α ′ ( k ) i − ηk ∂L∂α′ ( k ) i ∣∣∣∣∣ > 1 , the neuron moves into ( G2 ) . Assume that neuron is in ( G2 ) , we also show that it either moves into ( G1 ) or stays in ( G2 ) . As shown in section 3.2 , ( G2 ) = ( G2,1 ) ∪ ( G2,2 ) ∪ ( G2,3 ) . If the neuron is in ( G2,1 ) , again by ( 12 ) , ( 13 ) and ( 14 ) , α ( k+1 ) i > α ( k ) i > 0 > α ′ ( k ) i > α ′ ( k+1 ) i , ∣∣ α ( k+1 ) i α ′ ( k+1 ) i ∣∣ > 1 , β ( k+1 ) i > β ( k ) i > β′ ( k ) i > β′ ( k+1 ) i , so the neuron stays in G2,2 . If the neuron is in G2,2 , by ( 15 ) , ( 16 ) and ( 17 ) , ∂L ∂β ( k ) i < 0 < ∂L ∂β ′ ( k ) i , so β ( k+1 ) i > β ′ ( k+1 ) i . Also , ∂L ∂α ( k ) i < ∂L ∂α ′ ( k ) i < 0 , so α ( k+1 ) i > α ′ ( k+1 ) i , ∣∣ α ( k+1 ) i α ′ ( k+1 ) i ∣∣ > ∣∣ α ( k ) i α ′ ( k ) i ∣∣ > 1 . If α′ ( k+1 ) i < 0 , the neuron stays in G2 . 
If α ′ ( k+1 ) i > 0 , it moves into G1 . If the neuron is in G2,3 , by ( 18 ) and ( 21 ) , ∂L ∂β ( k ) i < 0 < ∂L ∂β ′ ( k ) i , ∂L ∂α ( k ) i − ∂L ∂α ′ ( k ) i < 0 , so β ( k+1 ) i > β ( k ) i > β ′ ( k ) i > β ′ ( k+1 ) i , α ( k+1 ) i − α ′ ( k+1 ) i > α ( k ) i − α ′ ( k ) i > 0 . By ( 19 ) and ( 20 ) , ∂L ∂αi ≤ − ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 −O ( log− 1 2 n ) ) , ∂L ∂α′i ≤ ( βi − β′i ) E [ Z ] ( λ 2 ( p− q p+ q ) 2 +O ( log− 1 2 n ) ) . Similar as in ( G2,2 ) , if α ′ ( k+1 ) i < 0 , the neuron stays in ( G2 ) . If α ′ ( k+1 ) i > 0 , it moves into ( G1 ) . B.2 “ BAD TYPE ” NEURONS As shown in Section 4 , neurons of “ bad type ” consist of two cases : B1 and B2 , where B2 = B2,1∪ B2,2 ∪ B2,3 ∪ B3 . Since the output in B3 is concentrated at 0 ( see Theorem A.2 ) , we don ’ t need to worry if neurons move into this region . Neurons in B1 ∪ B2,1 ∪ B2,2 ∪ B2,3 might exit “ bad type ” regime and become “ harmless ” or “ good ” ( if the neuron becomes order-aligned ) , which will do no harm to the performance of the model . If they stay in B1 ∪ B2,1 ∪ B2,2 ∪ B2,3 , the following theorem shows that the separation mi0 −mi1 can be upper bounded by initialization . In fact , Theorem A.4 shows that mi0 −mi1 is proportional to |αi − α′i||βi − β′i| . The next theorem shows that both |αi−α′i| and |βi−β′i| shrink throughout coordinate descent . The worst situation is that the magnitude of |αi − α′i| and |βi − β′i| of neurons in B3 increase and move into B1 or B2 at certain epoch . From Theorem B.7 we see that the magnitude can only increase by a limited rate ( we can see this more explicitly in Theorem 6.2 ) . Theorem B.9 . If ( α ( k ) i , α ′ ( k ) i , β ( k ) i , β ′ ( k ) i ) is in B1 ∪ B2,1 ∪ B2,2 ∪ B2,3 then with probability ≥ 1−O ( 1n2 ) over the SBM randomness , ∣∣α ( k+1 ) i −α′ ( k+1 ) i ∣∣ ≤ ∣∣α ( k ) i −α′ ( k ) i ∣∣ , ∣∣β ( k+1 ) i −β′ ( k+1 ) i ∣∣ ≤∣∣β ( k ) i − β′ ( k ) i ∣∣ . Proof . 
In B1 and B2,1 , by ( 12 ) and ( 13 ) , ∂L ∂α ( k ) i > 0 > ∂L ∂α ′ ( k ) i , then α ( k+1 ) i < α ( k ) i , α ′ ( k+1 ) i > α ′ ( k ) i , so |α ( k+1 ) i − α ′ ( k+1 ) i | ≤ |α ( k ) i − α ′ ( k ) i | . Similarly , by ( 14 ) , ∂L∂β ( k ) i = − ∂L ∂β ′ ( k ) i < 0 , so |β ( k+1 ) i − β ′ ( k+1 ) i | ≤ |β ( k ) i − β ′ ( k ) i | ( Note that α ( k ) i > α ′ ( k ) i , β ( k ) i < β ′ ( k ) i ) . In B2,2 , from ( 15 ) and ( 16 ) , we have ∂L ∂α ( k ) i > ∂L ∂α ′ ( k ) i > 0 , so |α ( k+1 ) i − α ′ ( k+1 ) i | ≤ |α ( k ) i − α ′ ( k ) i | . On the other hand , ∂L ∂β ( k ) i < 0 < ∂L ∂β ′ ( k ) i , so β ( k+1 ) i > β ( k ) i , β ′ ( k+1 ) i < β ′ ( k ) i and |β ( k+1 ) i − β ′ ( k+1 ) i | ≤ |β ( k ) i − β ′ ( k ) i | . In B2,3 , by ( 24 ) , ∂L∂α ( k ) i − ∂L ∂α ′ ( k ) i > 0 , so |α ( k+1 ) i −α ′ ( k+1 ) i | ≤ |α ( k ) i −α ′ ( k ) i | . By ( 18 ) , ∂L ∂β ( k ) i < 0 < ∂L ∂β ′ ( k ) i , so |β ( k+1 ) i − β ′ ( k+1 ) i | ≤ |β ( k ) i − β ′ ( k ) i | . B.3 “ HARMLESS TYPE ” NEURONS Section 4 shows that there are two cases of “ harmless type ” : H1 and H2 . For neurons in H1 , the derivatives of parameters are estimated in ( 15 ) , ( 16 ) and ( 17 ) ( same as in G2,2 ) . We can have similar analysis as in G2,2 and show that the inequality αi > 0 > α′i , βi > β ′ i can be preserved . Moreover∣∣αi α′i ∣∣ increases . So the neurons either stay in H1 or become “ good type ” if ∣∣αiα′i ∣∣ > 1 . In particular , neurons in H1 do no harm to the performance of the model . For neurons in H2 , 1 [ αit y 0 + α ′ it y 1 ≥ 0 ] = 0 , so the derivatives are all equal to 0 . Therefore they are never updated . Meanwhile they don ’ t affect the performance of the model since ϕ ( αit y 0 + α ′ it y 1 ) = 0 and ∆i = 0 . C LEARNING GUARANTEE In this section , we prove Theorem 6.1 , 6.2 and Lemma 6.3 . Proof of Theorem 6.1 . We prove by contradiction . 
Suppose P [ ∆ < 0|ℓ ( x ) = 0 ] ≥ 4ϵ , ( 27 ) then E [ Z|ℓ ( x ) = 0 ] = E [ Z|ℓ ( x ) = 0 , ∆ < 0 ] P [ ∆ < 0|ℓ ( x ) = 0 ] + E [ Z|ℓ ( x ) = 0 , ∆ ≥ 0 ] P [ ∆ ≥ 0|ℓ ( x ) = 0 ] ≥ 1 2 · 4ϵ = 2ϵ . ( 28 ) Furthermore , we claim that µ0 < δ . In fact , if µ0 ≥ δ , since P [ |∆− µ0| ≤ δ|ℓ ( x ) = 0 ] ≥ 1− ϵ by Corollary 4.2 , and ∆ ≥ µ0 − δ ≥ 0 , we have P [ ∆ ≥ 0|ℓ ( x ) = 0 ] ≥ P [ |∆− µ0| ≤ δ|ℓ ( x ) = 0 ] ≥ 1− ϵ , i.e . P [ ∆ < 0|ℓ ( x ) = 0 ] ≤ ϵ , which contradicts ( 27 ) . Let c : = µ0 − µ1 , then µ1 = µ0 − c < δ − c. Again , by Corollary 4.2 , for ℓ ( x ) = 1 , ∆ < µ1 + δ with probability ≥ 1− ϵ , we have Z = σ ( ∆ ) < σ ( µ1 + δ ) < σ ( −c+ 2δ ) . Then E [ Z|ℓ ( x ) = 1 ] = E [ Z|ℓ ( x ) = 1 , |∆− µ1| < δ ] P [ |∆− µ1| < δ|ℓ ( x ) = 1 ] + E [ Z|ℓ ( x ) = 1 , |∆− µ1| ≥ δ ] P [ |∆− µ1| ≥ δ|ℓ ( x ) = 1 ] < σ ( −c+ 2δ ) · 1 + 1 · ϵ < σ ( − c 2 ) + ϵ . The last step is due to δ = o ( c ) . Since σ ( − c2 ) < ϵ 2 , E [ Z|ℓ ( x ) = 1 ] < 3ϵ 2 . Combine with ( 28 ) , ∣∣E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ∣∣ > ϵ 2 . On the other hand , ∣∣E [ ∂L∂b0 ] ∣∣ < ϵ4 implies∣∣E [ Z|ℓ ( x ) = 0 ] − E [ Z|ℓ ( x ) = 1 ] ∣∣ < ϵ 2 , which is a contradiction . So P [ ∆ < 0|ℓ ( x ) = 0 ] < 4ϵ . Similarly , P [ ∆ > 0|ℓ ( x ) = 1 ] < 4ϵ . Proof of Theorem 6.2 . If the i-th neuron is of “ good type ” , from Corollary A.4 , we find a uniform lower bound of mi0 −mi1 in “ good type ” regimes . We have min { p−q 2 , Λ3 } = p−q 2 . Next we estimate αi − α′i and βi − β′i . Let A ( k ) i : = α ( k ) i − α ′ ( k ) i , B ( k ) i : = β ( k ) i − β ′ ( k ) i . 
We have A ( k ) i = α ( k ) i − α ′ ( k ) i = α ( k−1 ) i − α ′ ( k−1 ) i − ηk ( ∂L ∂α ( k−1 ) i − ∂L ∂α ′ ( k−1 ) i ) B ( k ) i = β ( k ) i − β ′ ( k ) i = β ( k−1 ) i − β ′ ( k−1 ) i − ηk ( ∂L ∂β ( k−1 ) i − ∂L ∂β ′ ( k−1 ) i ) , By Theorem B.7 , in G1 and G2,1 , with probability ≥ 1−O ( 1n ) , ∂L ∂αi − ∂L ∂α′i ≤ − ( βi − β′i ) E [ Z ] ( ( p− q p+ q ) 2 λ−O ( log− 1 2 n ) ) ) ≤ − ( βi − β′i ) E [ Z ] λ 2 ( p− q p+ q ) 2 ∂L ∂βi − ∂L ∂β′i ≤ − ( αi − α′i ) E [ Z ] ( ( p− q p+ q ) 2 λ−O ( log− 1 2 n ) ) ≤ − ( αi − α′i ) E [ Z ] λ 2 ( p− q p+ q ) 2 so A ( k ) i = A ( k−1 ) i − ηk ( ∂L ∂α ( k−1 ) i − ∂L ∂α ′ ( k−1 ) i ) ≥ A ( k−1 ) i + ηkE [ Z ( k ) ] λ 2 ( p− q p+ q ) 2 B ( k−1 ) i = A ( k−1 ) i + λ 2 ( p− q p+ q ) 2 B ( k−1 ) i B ( k ) i = B ( k−1 ) i − ηk ( ∂L ∂β ( k−1 ) i − ∂L ∂β ′ ( k−1 ) i ) ≥ B ( k−1 ) i + ηkE [ Z ( k ) ] λ 2 ( p− q p+ q ) 2 A ( k−1 ) i = B ( k−1 ) i + λ 2 ( p− q p+ q ) 2 A ( k−1 ) i . In matrix form : ( A ( k ) i B ( k ) i ) ⪰ ( 1 λ2 ( p−q p+q ) 2 λ 2 ( p−q p+q ) 2 1 ) ( A ( k−1 ) i B ( k−1 ) i ) ( 29 ) Similarly , in G2,2 : ( A ( k ) i B ( k ) i ) ⪰ ( 1 λ4 ( p−q p+q ) 2 λ 8 ( p−q p+q ) 2 1 ) ( A ( k−1 ) i B ( k−1 ) i ) ( 30 ) in G2,3 : ( A ( k ) i B ( k ) i ) ⪰ ( 1 λ4 ( p−q p+q ) 2 λ ( Λ3−Λ2 ) 2 ( p−q ) ( p−q p+q ) 2 1 ) ( A ( k−1 ) i B ( k−1 ) i ) ( 31 ) where Λ2 = pq log − 1 3 n ( 1+log− 1 3 n ) p+q , Λ3 = ( 1−log− 1 3 n ) p2−q2 ( 1−log− 1 3 n ) p+q . A uniform relation among ( 29 ) , ( 30 ) and ( 31 ) can be given by ( 30 ) . By eigenvalue decomposition , we have A ( k ) i B ( k ) i ≥ 1 4 ( 1 + √ 2λ 8 ( p− q p+ q ) 2 ) 2k ( 2− 1 8A ( 0 ) i + 2 1 8B ( 0 ) i ) 2 ≥ A ( 0 ) i B ( 0 ) i ( 1 + √ 2λ 8 ( p− q p+ q ) 2 ) 2k . Therefore we have a uniform lower bound of mi0 −mi1 at the k-th epoch in “ good type ” regime : mi0 −mi1 ≥ A ( 0 ) i B ( 0 ) i λ 2 ( p− q p+ q ) 2 ( 1 + √ 2λ 8 ( p− q p+ q ) 2 ) 2k . Next we consider the “ bad type ” regime . By Corollary A.4 , we have lower bound of mi0 −mi1 in B1 , B2 and B3 respectively . 
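The geometric growth of A_i^{(k)} B_i^{(k)} derived from the recurrence (30) can be sanity-checked numerically. The constants λ, p, q and the initialization below are illustrative choices of ours, not values from the paper.

```python
lam, p, q = 0.3, 1.0, 0.7                 # illustrative constants
R = ((p - q) / (p + q)) ** 2
a, b = lam / 4 * R, lam / 8 * R           # off-diagonal entries as we read (30)

A, B = 1.0, 1.0                           # A_i^{(0)}, B_i^{(0)}
products = []
for _ in range(50):
    A, B = A + a * B, B + b * A           # one epoch of the recurrence
    products.append(A * B)
```

With positive entries the product A·B increases at every step, and over 50 steps it grows by roughly the factor (1 + sqrt(a·b))^{100} that the eigenvalue decomposition predicts.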
By Theorem B.9 , in B1 and B2 , |αi − α′i| and |βi − β′i| shrink . Moreover , since Λ1 > Λ35 , we have a uniform lower bound of mi0 −mi1 in B1 and B2 : mi0 −mi1 ≥ − ∣∣A ( 0 ) i B ( 0 ) i ∣∣λ ( p− q ) Λ1 ( p+ q ) 2 ≥ −∣∣A ( 0 ) i B ( 0 ) i ∣∣λp ( p− q ) ( p+ q ) 2 , since Λ1 = ( 1+log− 1 3 n ) p2−q2 ( 1+log− 1 3 n ) p+q ≤ 2p 2−q2 2p+q ≤ p. Next we show that |αi − α′i| and |βi − β′i| can only increase by a limited rate in B3 . From item 4 of Theorem B.7 , we have ∂L ∂αi − ∂L ∂α′i ≥ −|βi − β′i|O ( ϵ̃ ) ∂L ∂βi − ∂L ∂β′i = 2 ∂L ∂βi ≤ O ( ϵ̃ ) . Therefore ( note that βi < β′i ) A ( k ) i ≤ A ( k−1 ) i + ηk|B ( k−1 ) i |O ( ϵ̃ ) |B ( k ) i | ≤ |B ( k−1 ) i |+ ηkO ( ϵ̃ ) . Since E [ Z ( k ) ] ≥ Ω ( ϵ ) 6 , and ϵ̃ = o ( ϵ ) , ϵ2 = O ( 1n ) , so ηkO ( ϵ̃ ) ≤ ηkE [ Z ( k ) ] 1n = 1 n . Suppose A ( k ) i ≥ O ( 1 ) , otherwise , A ( k ) i and |B ( k ) i | increase by an even smaller rate . So we have ( A ( k ) i |B ( k ) i | ) ⪯ ( 1 1n 1 n 1 ) ( A ( k−1 ) i |B ( k−1 ) i | ) . By eigenvalue decomposition , we have |A ( k ) i B ( k ) i | ≤ 1 2 ( 1 + 1 n ) 2k ( |A ( 0 ) i |+ |B ( 0 ) i | ) 2 ≤ k ( ( A ( 0 ) i ) 2 + ( B ( 0 ) i ) 2 ) ( n → ∞ ) . We obtained the result for “ harmless type ” neurons directly from Corollary 4.3 . Proof of Lemma 6.3 . Since all parameters are independent standard normal random variables , we have E [ ( α− α′ ) 2 ] = E [ ( β − β′ ) 2 ] = 2 , Var [ ( α− α′ ) 2 ] = Var [ ( β − β′ ) 2 ] = 8 . By Chebyshev ’ s inequality we have P [ h∑ i=1 ( αi − α′i ) 2 + ( βi − β′i ) 2 ≤ 5h ] ≥ 1−O ( 1 h ) . 5 xp2−q2 xp+q is monotonically increasing . 6Otherwise , the model already achieves high accuracy , see the proof of Theorem 2.2 For neurons initialized as “ good type ” , we have E [ α− α′|α > α′ , α+ α′ > 0 ] = 2√ π Var [ α− α′|α > α′ , α+ α′ > 0 ] = 2− 1 4π E [ β − β′|β > β′ ] = 1√ π Var [ β − β′|β > β′ ] = 2− 1 π . Let ρ denote the probability that a neuron is initialized as “ good type ” . By G1 , G2 and symmetry , ρ = 2P [ α > α′ , α+ α′ > 0 , β > β′ ] . 
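This probability can be estimated by Monte Carlo simulation with independent standard normal parameters; the sample size and seed below are ours. The event {α > α′, α + α′ > 0} has probability 1/4 (the two half-space events are independent), so together with P[β > β′] = 1/2 and the symmetry factor 2, the simulation should land near 1/4.

```python
import random

random.seed(0)
N = 200_000
good = 0
for _ in range(N):
    a1 = random.gauss(0.0, 1.0)
    a2 = random.gauss(0.0, 1.0)
    b1 = random.gauss(0.0, 1.0)
    b2 = random.gauss(0.0, 1.0)
    # one of the two symmetric "good type" events at initialization
    if a1 > a2 and a1 + a2 > 0 and b1 > b2:
        good += 1
rho = 2 * good / N                        # factor 2 for the symmetric case
```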
Since P [ α > α′ , α+ α′ ] = 1 4 , P [ β > β′ ] = 1 2 , we have ρ = 14 . By Chernoff bound , P [ hg ≥ ρ 2h ] ≥ 1−exp ( − ρ2 4 h ) , so P [ hg ≥ h 8 ] ≥ 1−exp ( − h 64 ) . Also by Chebyshev ’ s inequality , P [ ∑ the i-th neuron initialized as “ good type ” |αi − α′i||βi − β′i| ≥ hg ( 1 2π − k ) ∣∣hg ≥ h 8 ] ≥ 1− 4− 14π2 hgk2 . Set k = 110π , P [ ∑ the i-th neuron initialized as “ good type ” |αi − α′i||βi − β′i| ≥ h 80 ∣∣hg ≥ h 8 ] ≥ 1−O ( 1 h ) . So we have P [ ∑ the i-th neuron initialized as “ good type ” |αi − α′i||βi − β′i| ≥ h 80 ] ≥ P [ ∑ the i-th neuron initialized as “ good type ” |αi − α′i||βi − β′i| ≥ h 80 ∣∣hg ≥ h 8 ] P [ hg ≥ h 8 ] ≥ ( 1−O ( 1 h ) ) ( 1− exp ( − h 64 ) ) ≥ 1−O ( 1 h ) . D EXPERIMENTS ON DYNAMICS OF HIDDEN NEURONS This experiment verifies our argument in Sections B.1 , B.2 , B.3 and Theorem 6.2 about the dynamics of hidden neurons . We set h = 5 , λ = 0.3 and train the model on graphs sampled from SBM with n = 1000 , a = 1.0 , b = 0.7 . The plot of accuracy and its distribution can be seen in Section 7 . Here we plot the dynamics of all the 5 hidden neurons in Figure 4 , with each row corresponding to one hidden neuron . In each plot , x-axis represents epoch and y-axis represents the value of neurons . The first column depicts αi and α′i , the second column |αiα′i | , the third column |αi −α ′ i| , the fourth column βi , β ′ i and the last column |βi − β′i| . As shown in the figure , the first , second and fourth neurons are of “ good '' type satisfying ( G2 ) . Throughout training these neurons are preserved as “ good '' type : they ’ re order-aligned , |αiα′i | is lowered bounded by 1 , and both |αi − α ′ | , |βi − β ′ i| keeps increasing . All of these verify our argument in B.1 . The third neuron is “ harmless '' satisfying ( H2 ) . As shown in B.3 , this neuron isn ’ t updated and doesn ’ t make contribution to the output . The fifth neuron is of “ bad '' type satisfying ( B2 ) . 
Although |α_i − α′_i| and |β_i − β′_i| increase, comparing with the first, second and fourth rows ("good" neurons) shows that they increase at a much smaller rate. This verifies our result in Theorem 6.2.

E TABLE OF NOTATIONS

We list the notations used in this paper for readers' convenience.

n: number of vertices in a graph
p: probability of intra-community connection
q: probability of cross-community connection
a: parameter for p, with p = a log³n / n
b: parameter for q, with q = b log³n / n
λ: probability of revealing the label of a vertex
ℓ(x): label of vertex x
A: adjacency matrix of a graph
Â: normalized adjacency matrix with self loop, Â = 2/(n(p+q)) · (A + I)
X: input feature of a graph
W⁽⁰⁾: trainable weights in the first layer of GCN
W⁽¹⁾: trainable weights in the second layer of GCN
B: bias matrix of GCN, with each row of B being [b₀, b₁]
b₀: bias in the first component
b₁: bias in the second component
h: number of hidden features
f₀: logit in the first component without bias
f₁: logit in the second component without bias
g₀: logit in the first component, g₀ = f₀ + b₀
g₁: logit in the second component, g₁ = f₁ + b₁
∆: difference between logits, ∆ = g₀ − g₁ = f₀ − f₁ + b₀ − b₁ | In this paper, the authors present the first provable guarantees for Graph Convolutional Networks (GCNs) for semi-supervised community detection tasks. The authors demonstrate that with high probability over the initialization and training data used, a GCN will efficiently learn to detect communities on graphs drawn from a stochastic block model (SBM). Empirical results demonstrate the efficacy of their results. | SP:81873659b4e2f8f74ba812fd6444493ec1aea934
NODEAttack: Adversarial Attack on the Energy Consumption of Neural ODEs

Recently, Neural ODE (Ordinary Differential Equation) models have been proposed, which use ordinary differential equation solving to predict the output of a neural network. Due to their low memory usage, Neural ODE models can be considered as an alternative that can be deployed in resource-constrained devices (e.g., IoT devices, mobile devices). However, to deploy a Deep Learning model in resource-constrained devices, low inference energy cost is also required along with low memory cost. Unlike the memory cost, the energy consumption of Neural ODEs during inference can be adaptive because of the adaptive nature of the ODE solvers. Attackers can leverage this adaptive behaviour to attack the energy consumption of Neural ODEs. However, energy-based attack scenarios have not been explored against Neural ODEs. To show the vulnerability of Neural ODEs against adversarial energy-based attacks, we propose NODEAttack. The objective of NODEAttack is to generate adversarial inputs that require more ODE solver computations, thereby increasing Neural ODEs' inference-time energy consumption. Our extensive evaluation on two datasets and two popular ODE solvers shows that the samples generated through NODEAttack can consume up to 168% more energy during inference than the average energy consumption of benign test data. Our evaluation also shows that attack transferability is feasible across solvers and architectures. Also, we perform a case study showing that the adversarial examples generated by NODEAttack can decrease the efficiency of an object-recognition-based mobile application by 50%.

1 INTRODUCTION

Deep Neural Networks (DNNs) have shown great potential in many challenging tasks (image classification, natural language processing, and playing games).
To cope with tasks of higher complexity, the number of DNN parameters is increasing rapidly. For this reason, DNNs require considerable memory both in training and inference. To address the issue of increased memory usage, researchers simulate the solvers of ordinary differential equations (ODEs) and propose Neural ODE techniques (Chen et al., 2018). A Neural ODE does not store any intermediate quantities of the forward pass, which allows us to train DNNs with constant memory cost. Neural ODEs also perform better than traditional DNNs on irregularly sampled time-series data. Because of the decreased memory cost, Neural ODEs are viable options to be used in resource-constrained devices like mobile devices or UAVs (Unmanned Aerial Vehicles). Due to the adaptive energy consumption of Neural ODE models (Section 3), the robustness of the model in terms of energy consumption, or energy robustness (defined in Section 4.1), needs to be investigated before deploying Neural ODE models in resource-constrained devices. Otherwise, a lack of energy robustness in Neural ODEs can lead to tragic situations. For example, suppose a Neural ODE model is deployed in a mobile app used to help visually impaired people. If the energy consumption of the model is not robust, the battery of the mobile device will be drained faster, which can be fatal for the visually impaired person. To avoid such scenarios, evaluating the energy robustness of Neural ODEs is required. However, unlike accuracy-based robustness, the energy robustness of Neural ODEs has not been explored. Recent work shows that Neural ODE models are more robust against accuracy-based adversarial attacks than traditional DNNs (Yan et al. (2019)). However, finding a relationship between input and energy consumption is more challenging, because the relation between input and energy consumption of DNNs is not well-defined.
To explore the energy robustness of Neural ODEs, the relationship between the input and the energy consumption of Neural ODEs needs to be defined. Recent works, ILFO (Haque et al. (2020)) and DeepSloth (Hong et al. (2020)), have evaluated the energy robustness of Adaptive Neural Networks (AdNNs) by proposing white-box attacks. However, optimizing the loss functions proposed by the aforementioned AdNN attacks cannot evaluate the energy robustness of Neural ODEs. AdNNs activate or deactivate certain components of DNNs based on the intermediate outputs of certain computing units, and thus consume a different amount of energy for different inputs. Both attacks' objective is to increase the number of activated DNN components by modifying the specific computing unit outputs, and both attacks use specific loss function optimization to achieve that. However, Neural ODE functionality differs from traditional AdNN functionality because, for a Neural ODE, no component is deactivated or activated during inference. The adaptive behavior of a Neural ODE model depends on the adaptive ODE solver used to predict the output. Furthermore, for a specific trained Neural ODE model, we can find variable energy consumption for a single input depending on the type of ODE solver used, whereas for traditional AdNNs, the energy consumption for a specific input will always be the same for a specific trained AdNN. Therefore, a novel approach is needed to explore the energy robustness of Neural ODEs. To this end, we propose NODEAttack, a white-box approach that uses the step size of the ODE solvers to formulate the attack. ODE solvers approximate a function iteratively, and the objective of our approach is to increase the number of iterations, increasing the energy consumption of Neural ODEs. Our attack formulation is based on the fact that decreasing the step size increases the number of iterations.
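The solver mechanism the attack exploits can be illustrated with a toy example. Below is a minimal sketch (not the paper's implementation) of an embedded-error adaptive Heun integrator: the number of iterations it performs, and hence the energy it consumes, depends on the dynamics being integrated, because larger local-error estimates force smaller step sizes.

```python
def adaptive_heun(f, y0, t0, t1, rtol=1e-3, h0=0.1):
    """Integrate dy/dt = f(y, t) with an embedded-error adaptive Heun scheme."""
    t, y, h, steps = t0, y0, h0, 0
    while t < t1 - 1e-12:
        h = min(h, t1 - t)
        k1 = f(y, t)
        k2 = f(y + h * k1, t + h)
        y_heun = y + 0.5 * h * (k1 + k2)  # 2nd-order estimate
        err = abs(y_heun - (y + h * k1))  # gap to the 1st-order (Euler) estimate
        if err <= rtol * max(1.0, abs(y_heun)):  # accept the step
            t, y = t + h, y_heun
        # simple step-size controller: shrink on large error, grow on small
        h = max(1e-6, 0.9 * h * min(5.0, max(0.2, (rtol / max(err, 1e-12)) ** 0.5)))
        steps += 1
    return y, steps

# Faster dynamics force smaller steps, so the same solver does more work:
y_slow, slow_steps = adaptive_heun(lambda y, t: -1.0 * y, 1.0, 0.0, 1.0)
y_fast, fast_steps = adaptive_heun(lambda y, t: -50.0 * y, 1.0, 0.0, 1.0)
print(slow_steps, fast_steps)
```

An attacker in this setting would perturb the input so that the learned dynamics f behaves like the "fast" case, driving the error estimate up and the step size down.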
Specifically, we develop two attack techniques to evaluate Neural ODEs' energy robustness, namely the Input-based attack and the Universal attack. The Input-based attack evaluates energy robustness where testing inputs are semantically meaningful to the Neural ODE model (e.g., meaningful images). On the other hand, the Universal attack evaluates worst-case energy robustness, where each testing input maximizes the energy consumption for each target ODE solver. To the best of our knowledge, this is the first energy-based adversarial attack against Neural ODEs. We evaluate NODEAttack mainly on two criteria, effectiveness and transferability, using the CIFAR-10 and MNIST datasets (Krizhevsky et al. (2009); Deng (2012)). We evaluate NODEAttack on two popular ODE solvers: Dopri5 (Dormand and Prince (1980)) and Adaptive Heun (Süli and Mayers (2003)). We also evaluate the effectiveness of NODEAttack against natural perturbations and corruptions (Hendrycks and Dietterich (2019a)). We observed that NODEAttack-generated adversarial inputs can increase energy consumption by up to 168% over the average energy consumed by benign test inputs. Also, we noticed that transferability is feasible between two Neural ODEs differentiated by ODE solver or model architecture. Our paper makes the following contributions: • Problem Formulation and Approach. Our work is the first attempt to formulate an energy-based adversarial attack against Neural ODE models. Also, our work proposes a novel loss function based on the step size of ODE solvers to generate adversarial inputs. • Evaluation. We evaluate our approach across two ODE solvers and two datasets based on two criteria. 2 BACKGROUND AND RELATED WORKS . 2.1 NEURAL ORDINARY DIFFERENTIAL EQUATIONS . Neural Ordinary Differential Equations (Neural ODEs) (Chen et al. (2018)) have been successful in attaining accuracy close to state-of-the-art DNN techniques but with less memory consumption.
Neural ODEs incorporate ordinary differential equation solvers into neural network architectures. Models such as residual networks and recurrent neural network decoders create complicated transformations by devising a sequence of transformations to a hidden state: h_{t+1} = h_t + f(h_t, θ_t). The operation of a residual block can be interpreted as the discrete approximation of an ODE where the discretization step is one. In a Neural ODE, the discretization step is taken to the limit of zero, and the relation between input and output is characterized by the following set of equations: dh(t)/dt = f(h(t), t, θ), h(0) = h_in, h_out = h(T). Solving for h(T) gives the output, and ODE solvers can be used for that purpose. Additionally, Quaglino et al. (2019) express the Neural ODE dynamics as a truncated series of Legendre polynomials to accelerate the model. Dupont et al. (2019) explore the limitations in the approximation capabilities of Neural ODEs caused by the preservation of the input topology. Recent work by Yan et al. (2019) explores the robustness of Neural ODEs against adversarial attacks and proposes TisODE to increase the robustness of Neural ODEs. However, no other work has focused on the energy robustness perspective of Neural ODEs, and to our knowledge, this is the first work in that direction. 2.2 RUNGE-KUTTA METHOD . The Runge-Kutta method (Runge (1895); Kutta (1901)) is an ODE solver that solves ordinary differential equations through approximation. A first-order differential equation is given by dy(t)/dt = y′(t) = f(y(t), t), with y(t_0) = y_0. Here y is a function of time t, and y_0 is the value of y at t = t_0. Four slope approximations k_1, k_2, k_3, k_4 are used to estimate the approximate value y∗ of y at t = t_0 + h (detailed equations in the Appendix). The final estimate y∗(t_0 + h) can be represented as y∗(t_0 + h) = y∗(t_0) + (1/6 · k_1 + 1/3 · k_2 + 1/3 · k_3 + 1/6 · k_4) · h. Here, h is the step size.
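The weighted update above translates directly into code. The following is a generic textbook RK4 step (a sketch for illustration, not the paper's solver):

```python
import math

def rk4_step(f, y, t, h):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(y, t)."""
    k1 = f(y, t)
    k2 = f(y + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(y + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(y + h * k3, t + h)
    return y + (k1 / 6 + k2 / 3 + k3 / 3 + k4 / 6) * h

# Sanity check on dy/dt = y, y(0) = 1, whose exact solution is e^t:
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk4_step(lambda y, t: y, y, t, h)
    t += h
print(y)  # close to e = 2.71828...
```

Each step evaluates f four times, so the total cost of integrating a Neural ODE scales with the number of steps the solver takes.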
This is called the fourth-order Runge-Kutta method because the total accumulated error for step size h is O(h^4) (the local truncation error at a particular step is O(h^5)). For a better approximation of the function, multiple works (Dormand and Prince (1980); Süli and Mayers (2003)) have proposed to use an adaptive step size. 2.3 ADVERSARIAL EXAMPLES . Adversarial examples are inputs fed to machine learning models to change the prediction of the model. In earlier works by Dalvi et al. (2004); Lowd and Meek (2005); D. Lowd and C. Meek (2005), 'good word attacks' or spelling modifications have long been used to bypass spam filters. More recently, Szegedy et al. (2013) and Goodfellow et al. (2014) propose adversarial attacks on deep computer vision models. Karmon et al. (2018) propose a technique to attack CNNs in which a localized patch is introduced in an image instead of adding noise to the full image. With a similar approach, adversarial attacks have been extended to various fields like text and speech processing (Carlini et al. (2016); Jia and Liang (2017)) and graph models (Zügner et al. (2018); Bojchevski and Günnemann (2019)). Recently, Haque et al. (2020); Hong et al. (2020) have proposed energy-based adversarial attacks against Adaptive Neural Networks. However, as mentioned in the introduction, existing attacks cannot be used to increase the energy consumption of Neural ODEs. | The authors propose an adversarial test-time attack on Neural ODE models, which increases the inference time of the NODE models. Two variants of the attack are introduced. The proposed attacks are evaluated on the CIFAR-10 and MNIST datasets. In experiments, the authors showed the proposed attacks increased the inference time of NODE models for an object detection task, which drained the battery faster on a mobile device. | SP:10ab18741e688057c659201f21972b830661b459
NODEAttack: Adversarial Attack on the Energy Consumption of Neural ODEs | Neural ODE (Ordinary Differential Equation) models have emerged as an effective architecture due to their low memory usage which makes them attractive for resource-constrained devices. Further, the inference in these models is adaptive. This paper investigates whether this adaptive inference can be used by an attacker to launch an energy attack on the neural ODEs. While energy attacks on adaptive inference has been studied before, this paper is the first paper (to the best of the knowledge of the reviewer) to study this on neural ODEs.
This study is well-motivated, and the experimental evaluation is convincing (though it is far from complete in terms of the used attack methods or configurations of the used methods). 2 solvers and 2 datasets are used in the experiments. Overall, the paper definitely adds to the state of the art on neural ODEs and robust learning. | SP:10ab18741e688057c659201f21972b830661b459 |
NODEAttack: Adversarial Attack on the Energy Consumption of Neural ODEs | In this paper, the authors proposed an attack model named NODEAttack to verify the vulnerability of Neural ODEs against energy-surging adversarial samples. The authors study two attack cases: Input-based attack (untargeted attack) and universal untargeted attack. The experiments on CIFAR-10 and MNIST datasets show the reasonable performance of the proposed model outperforms common perturbations and corruptions methods.
A case study based on DNN compilers and PyTorch Mobile demonstrates that the generated adversarial examples can reduce the efficiency of real-world applications. | SP:10ab18741e688057c659201f21972b830661b459
Filtered-CoPhy: Unsupervised Learning of Counterfactual Physics in Pixel Space | 1 INTRODUCTION . Reasoning on complex , multi-modal and high-dimensional data is a natural ability of humans and other intelligent agents ( Martin-Ordas et al. , 2008 ) , and one of the most important and difficult challenges of AI . While machine learning is well suited for capturing regularities in high-dimensional signals , in particular by using high-capacity deep networks , some applications also require an accurate modeling of causal relationships . This is particularly relevant in physics , where causation is considered a fundamental axiom . In the context of machine learning , correctly capturing or modeling causal relationships can also lead to more robust predictions , in particular better generalization to out-of-distribution samples , indicating that a model has overcome the exploitation of biases and shortcuts in the training data . In recent literature on physics-inspired machine learning , causality has often been enforced through the addition of prior knowledge about the physical laws that govern the studied phenomena , e.g . ( Yin et al. , 2021 ) . A similar idea lies behind structural causal models , widely used in the causal inference community , where domain experts model these relationships directly in a graphical notation . This line of work makes it possible to perform predictions beyond statistical forecasting , for instance by predicting unobserved counterfactuals , the impact of unobserved interventions ( Balke & Pearl , 1994 ) — “ What alternative outcome would have happened if the observed event X had been replaced with an event Y ( after an intervention ) ? ” Counterfactuals are interesting , as causality intervenes through the effective modification of an outcome . As an example , taken from ( Schölkopf et al.
, 2021 ) , an agent can identify the direction of a causal relationship between an umbrella and rain from the fact that removing an umbrella will not affect the weather . We focus on counterfactual reasoning on high-dimensional signals , in particular videos of complex physical processes . Learning such causal interactions from data is a challenging task , as spurious correlations are naturally and easily picked up by trained models . Previous work in this direction was restricted to discrete outcomes , as in CLEVRER ( Yi et al. , 2020 ) , or to the prediction of 3D trajectories , as in CoPhy ( Baradel et al. , 2020 ) , which also requires supervision of object positions . In this work , we address the hard problem of predicting the alternative ( counterfactual ) outcomes of physical processes in pixel space , i.e . we forecast sequences of 2D projective views of the 3D scene , requiring the prediction over long horizons ( 150 frames corresponding to ∼ 6 seconds ) . We conjecture that causal relationships can be modeled on a low dimensional manifold of the data , and propose a suitable latent representation for the causal model , in particular for the estimation of the confounders and the dynamic model itself . Similar to V-CDN ( Kulkarni et al. , 2019 ; Li et al. , 2020 ) , our latent representation is based on the unsupervised discovery of keypoints , complemented by additional information in our case . Indeed , while keypoint-based representations can easily be encoded from visual input , as stable mappings from images to points arise naturally , we claim that they are not the most suitable representation for dynamic models . 
We identified and addressed two principal problems : ( i ) the individual points of a given set are discriminated through their 2D positions only , therefore shape , geometry and relationships between multiple moving objects need to be encoded through the relative positions of points to each other , and ( ii ) the optimal representation for a physical dynamic model is not necessarily a 2D keypoint space , where the underlying object dynamics has also been subject to the imaging process ( projective geometry ) . We propose a new counterfactual model , which learns a sparse representation of visual input in the form of 2D keypoints coupled with a ( small ) set of coefficients per point modeling complementary shape and appearance information . Confounders ( object masses and initial velocities ) in the studied problem are extracted from this representation , and a learned dynamic model forecasts the entire trajectory of these keypoints from a single ( counterfactual ) observation . Building on recent work in data-driven analysis of dynamic systems ( Janny et al. , 2021 ; Peralez & Nadri , 2021 ) , the dynamic model is formulated in a higher-dimensional state space , where the dynamics are less complex . We show that these design choices are key to the performance of our model , and that they significantly improve the capability to perform long-term predictions . Our proposed model outperforms strong baselines for physics-informed learning of video prediction . We introduce a new challenging dataset for this problem , which builds on CoPhy , a recent counterfactual physics benchmark ( Baradel et al. , 2020 ) . We go beyond the prediction of sequences of 3D positions and propose a counterfactual task for predictions in pixel space after interventions on initial conditions ( displacing , re-orienting or removing objects ) .
In contrast to the literature , our benchmark also better controls for the identifiability of causal relationships and counterfactual variables and provides a more accurate physics simulation . 2 RELATED WORK . Counterfactual ( CF ) reasoning — and learning of causal relationships in ML — was popularized by the works of J. Pearl , e.g . ( Pearl , 2000 ) , which motivate and introduce mathematical tools detailing the principles of do-calculus , i.e . the study of unobserved interventions on data . A more recent survey links these concepts to the literature in ML ( Schölkopf et al. , 2021 ) . Recent years have seen the emergence of several benchmarks for CF reasoning in physics . CLEVRER ( Yi et al. , 2020 ) is a visual question answering dataset , where an agent is required to answer a CF question after observing a video showing 3D objects moving and colliding . Li et al . ( 2020 ) introduce a CF benchmark with two tasks : a scenario where balls interact with each other according to unknown interaction laws ( such as gravity or elasticity ) , and a scenario where clothes are folded by the wind . The agent needs to identify CF variables and causal relationships between objects , and to predict future frames . CoPhy ( Baradel et al. , 2020 ) clearly dissociates the observed experiment from the CF one , and contains three complex 3D scenarios involving rigid body dynamics . However , the proposed method relies on the supervision of 3D object positions , while our work does not require any metadata . Physics-inspired ML — and learning visual dynamics — was addressed early on with recurrent models ( Srivastava et al. , 2015 ; Finn et al. , 2016 ; Lu et al. , 2017 ) , or with GANs ( Vondrick et al. , 2016 ; Mathieu et al. , 2016 ) . Kwon & Park ( 2019 ) adopt a Cycle-GAN with two discriminator heads , in charge of identifying false images and false sequences in order to improve the temporal consistency of the model in long-term prediction .
Nonetheless , the integration of causal reasoning and prior knowledge in these models is not straightforward . Typical work in physics-informed models relies on disentanglement between physics-informed features and residual features ( Villegas et al. , 2017a ; Denton & Birodkar , 2017 ) and may incorporate additional information based on the available priors on the scene ( Villegas et al. , 2017b ; Walker et al. , 2017 ) . PhyDNet ( Le Guen & Thome , 2020 ) explicitly disentangles visual features from dynamical features , which are supposed to follow a PDE . It achieves SOTA performance on Human3.6M ( Ionescu et al. , 2014 ) and Sea Surface Temperature ( de Bezenac et al. , 2018 ) , but we show that it fails on our challenging benchmark . Keypoint detection — is a well-researched problem in vision with widely used handcrafted baselines ( Lowe , 1999 ) . New unsupervised variants emerged recently and have been shown to provide a suitable object-centric representation , close to attention models , which simplify the use of physical and/or geometric priors ( Locatello et al. , 2020 ; Veerapaneni et al. , 2020 ) . They are of interest in robotics and reinforcement learning , where a physical agent has to interact with objects ( Kulkarni et al. , 2019 ; Manuelli et al. , 2020 ; 2019 ) . KeypointNet ( Suwajanakorn et al. , 2018 ) is a geometric reasoning framework , which discovers meaningful keypoints in 3D through spatial coherence between viewpoints . Close to our work , Minderer et al . ( 2019 ) propose to learn a keypoint-based stochastic dynamic model . However , the model is not suited for CF reasoning in physics and may suffer from inconsistency in the prediction of dynamics over long horizons . 3 THE FILTERED-COPHY BENCHMARK We build on CoPhy ( Baradel et al. , 2020 ) , retaining its strengths , but explicitly focusing on a counterfactual scenario in pixel space and eliminating the ill-posedness of the tasks we identified in the existing work .
Each data sample is called an experiment , represented as a pair of trajectories : an observed one with initial condition X0 = A and outcome X_{t=1..T} = B ( a sequence ) , and a counterfactual one with X̄0 = C and X̄_{t=1..T} = D ( a sequence ) . Throughout this paper we will use the letters A , B , C and D to distinguish the different parts of each experiment . The initial conditions A and C are linked through a do-operator do ( X0 = C ) , which modifies the initial condition ( Pearl , 2018 ) . Experiments are parameterized by a set of intrinsic physical parameters z which are not observable from a single initial image A . We refer to these as confounders . As in CoPhy , in our benchmark the do-operator is observed during training , but confounders are not — they have been used to generate the data , but are not used during training or testing . Following ( Pearl , 2018 ) , the counterfactual task consists in inferring the counterfactual outcome D given the observed trajectory AB and the counterfactual initial state C , following a three-step process : ( 1 ) Abduction : use the observed data AB to compute the counterfactual variables , i.e . physical parameters , which are not affected by the do-operation . ( 2 ) Action : update the causal model ; keep the same identified confounders and apply the do-operator , i.e . replace the initial state A by C. ( 3 ) Prediction : compute the counterfactual outcome D using the causal graph . The benchmark contains three scenarios involving rigid body dynamics . BlocktowerCF studies stable and unstable 3D cube towers ; the confounders are masses . BallsCF focuses on 2D collisions between moving spheres ( confounders are masses and initial velocities ) . CollisionCF is about collisions between a sphere and a cylinder ( confounders are masses and initial velocities ) ( Fig . 1 ) . Unlike CoPhy , our benchmark involves predictions in RGB pixel space only .
The do-operation consists in visually observable interventions on A , such as moving or removing an object . The confounders cannot be identified from the single-frame observation A ; identification requires the analysis of the entire AB trajectory . Identifiability of confounders — For an experiment ( AB , CD , z ) to be well-posed , the confounders z must be retrievable from AB . For example , since the masses of a stable cube tower cannot generally be identified in all situations , it can be impossible to predict the counterfactual outcome of an unstable tower , as collisions are not resolvable without known masses . In contrast to CoPhy , we ensure that each experiment ψ : ( X0 , z ) ↦ X_{t=1..T} , given initial condition X0 and confounders z , is well posed and satisfies the following constraints : Definition 1 ( Identifiability , ( Pearl , 2018 ) ) The experiment ( AB , CD , z ) is identifiable if , for any set of confounders z′ : ψ ( A , z ) = ψ ( A , z′ ) ⇒ ψ ( C , z ) = ψ ( C , z′ ) . ( 1 ) In an identifiable experiment there is no pair ( z , z′ ) that gives the same trajectory AB but different counterfactual outcomes CD . Details on implementation and impact are in appendix A.1 . Counterfactuality — We enforce sufficient difficulty of the problem through the meaningfulness of confounders . We remove initial situations where the choice of confounder values has no significant impact on the final outcome : Definition 2 ( Counterfactuality ) . Let z_k be the set of confounders z in which the k-th value has been modified . The experiment ( AB , CD , z ) is counterfactual if and only if : ∃k : ψ ( C , z_k ) ≠ ψ ( C , z ) . ( 2 ) In other words , we impose the existence of an object in the scene for which the ( unobserved ) physical properties have a determining effect on the trajectory . Details on how this constraint was enforced are given in appendix A.2 .
Temporal resolution — the physical laws we target involve highly non-linear phenomena , in particular collisions and resting contacts . Collisions are difficult to learn because their actions are intense , brief , and highly non-linear , depending on the geometry of the objects in 3D space . The temporal resolution of physical simulations is therefore of prime importance . A parallel can be made with the Nyquist–Shannon sampling theorem , as a trajectory sampled at too low a frequency cannot be reconstructed with precision . We simulate and record trajectories at 25 FPS , compared to the 5 FPS chosen in CoPhy , a choice justified by two experiments . Firstly , Fig . 2 shows the trajectories of the centers of mass of cubes in BlocktowerCF , with colored dots shown at 25 FPS and black dots at 5 FPS . We can see that collisions with the ground fall between consecutive samples at 5 FPS , making it hard to infer physical laws from regularities in the data at this frequency . A second experiment involves learning a prediction model at different frequencies , confirming the choice of 25 FPS — details are given in appendix A.3 . | This paper studies an interesting problem of counterfactual video prediction, which aims to predict the future frames (D) based on the initial frame (C) and an observed video sequence (AB). AB can be seen as a demonstration that is driven by the same confounders, i.e., the physics parameters such as mass and initial velocities. The paper follows the previous models from CoPhyNet for dynamics modeling and confounder estimation, and has two improvements over the work of CoPhy, in my view: - First, it improves the CoPhy benchmark. - Second, it presents a new model based on the representations of high-dimensional features, 2D keypoints, and corresponding coefficients.
The new form of representation allows the model to be trained with supervision from RGB images only, as opposed to the training procedure in CoPhyNet, which requires supervision of object positions. | SP:154f672ce4fdc8c020389c9d043c4c00f482c630
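The identifiability constraint (Definition 1) in the benchmark description above can be illustrated with a toy check. Everything here is a hypothetical stand-in for the paper's simulator: `psi` is a one-dimensional damped point mass, not the benchmark's rigid-body physics, and the candidate grid replaces the true confounder space.

```python
def psi(x0, z, T=10):
    """Toy 'simulator' psi(X0, z): a point with initial position x0 and an
    unobserved parameter z that damps the velocity at every step."""
    traj, x, v = [], float(x0), 1.0
    for _ in range(T):
        v -= z * 0.1
        x += v * 0.1
        traj.append(round(x, 9))
    return tuple(traj)

def identifiable(a, c, z, candidates):
    """Definition 1: any candidate z' that reproduces the observed trajectory
    AB must also yield the same counterfactual outcome CD."""
    ab = psi(a, z)
    return all(psi(c, zp) == psi(c, z)
               for zp in candidates if psi(a, zp) == ab)

# With this toy psi, distinct z values give distinct AB trajectories, so the
# experiment is identifiable over the candidate grid.
ok = identifiable(a=0.0, c=2.0, z=0.5, candidates=[0.1, 0.3, 0.5, 0.7])
```

In the actual benchmark the analogous filtering is done on simulated experiments (appendix A.1 of the paper); the sketch only shows the logical shape of the test.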
Filtered-CoPhy: Unsupervised Learning of Counterfactual Physics in Pixel Space | 1 INTRODUCTION . Reasoning on complex , multi-modal and high-dimensional data is a natural ability of humans and other intelligent agents ( Martin-Ordas et al. , 2008 ) , and one of the most important and difficult challenges of AI . While machine learning is well suited for capturing regularities in high-dimensional signals , in particular by using high-capacity deep networks , some applications also require an accurate modeling of causal relationships . This is particularly relevant in physics , where causation is considered a fundamental axiom . In the context of machine learning , correctly capturing or modeling causal relationships can also lead to more robust predictions , in particular better generalization to out-of-distribution samples , indicating that a model has overcome the exploitation of biases and shortcuts in the training data . In recent literature on physics-inspired machine learning , causality has often been enforced through the addition of prior knowledge about the physical laws that govern the studied phenomena , e.g . ( Yin et al. , 2021 ) . A similar idea lies behind structural causal models , widely used in the causal inference community , where domain experts model these relationships directly in a graphical notation . This line of work makes it possible to perform predictions beyond statistical forecasting , for instance by predicting unobserved counterfactuals , the impact of unobserved interventions ( Balke & Pearl , 1994 ) — “ What alternative outcome would have happened if the observed event X had been replaced with an event Y ( after an intervention ) ? ” Counterfactuals are interesting , as causality intervenes through the effective modification of an outcome . As an example , taken from ( Schölkopf et al.
, 2021 ) , an agent can identify the direction of a causal relationship between an umbrella and rain from the fact that removing an umbrella will not affect the weather . We focus on counterfactual reasoning on high-dimensional signals , in particular videos of complex physical processes . Learning such causal interactions from data is a challenging task , as spurious correlations are naturally and easily picked up by trained models . Previous work in this direction was restricted to discrete outcomes , as in CLEVRER ( Yi et al. , 2020 ) , or to the prediction of 3D trajectories , as in CoPhy ( Baradel et al. , 2020 ) , which also requires supervision of object positions . In this work , we address the hard problem of predicting the alternative ( counterfactual ) outcomes of physical processes in pixel space , i.e . we forecast sequences of 2D projective views of the 3D scene , requiring the prediction over long horizons ( 150 frames corresponding to ∼ 6 seconds ) . We conjecture that causal relationships can be modeled on a low dimensional manifold of the data , and propose a suitable latent representation for the causal model , in particular for the estimation of the confounders and the dynamic model itself . Similar to V-CDN ( Kulkarni et al. , 2019 ; Li et al. , 2020 ) , our latent representation is based on the unsupervised discovery of keypoints , complemented by additional information in our case . Indeed , while keypoint-based representations can easily be encoded from visual input , as stable mappings from images to points arise naturally , we claim that they are not the most suitable representation for dynamic models . 
We identified and addressed two principal problems : ( i ) the individual points of a given set are discriminated through their 2D positions only , therefore shape , geometry and relationships between multiple moving objects need to be encoded through the relative positions of points to each other , and ( ii ) the optimal representation for a physical dynamic model is not necessarily a 2D keypoint space , where the underlying object dynamics has also been subject to the imaging process ( projective geometry ) . We propose a new counterfactual model , which learns a sparse representation of visual input in the form of 2D keypoints coupled with a ( small ) set of coefficients per point modeling complementary shape and appearance information . Confounders ( object masses and initial velocities ) in the studied problem are extracted from this representation , and a learned dynamic model forecasts the entire trajectory of these keypoints from a single ( counterfactual ) observation . Building on recent work in data-driven analysis of dynamic systems ( Janny et al. , 2021 ; Peralez & Nadri , 2021 ) , the dynamic model is formulated in a higher-dimensional state space , where the dynamics are less complex . We show that these design choices are key to the performance of our model , and that they significantly improve the capability to perform long-term predictions . Our proposed model outperforms strong baselines for physics-informed learning of video prediction . We introduce a new challenging dataset for this problem , which builds on CoPhy , a recent counterfactual physics benchmark ( Baradel et al. , 2020 ) . We go beyond the prediction of sequences of 3D positions and propose a counterfactual task for predictions in pixel space after interventions on initial conditions ( displacing , re-orienting or removing objects ) .
In contrast to the literature , our benchmark also better controls for the identifiability of causal relationships and counterfactual variables and provides a more accurate physics simulation . 2 RELATED WORK . Counterfactual ( CF ) reasoning — and learning of causal relationships in ML — was popularized by the works of J. Pearl , e.g . ( Pearl , 2000 ) , which motivate and introduce mathematical tools detailing the principles of do-calculus , i.e . the study of unobserved interventions on data . A more recent survey links these concepts to the literature in ML ( Schölkopf et al. , 2021 ) . Recent years have seen the emergence of several benchmarks for CF reasoning in physics . CLEVRER ( Yi et al. , 2020 ) is a visual question answering dataset , where an agent is required to answer a CF question after observing a video showing 3D objects moving and colliding . Li et al . ( 2020 ) introduce a CF benchmark with two tasks : a scenario where balls interact with each other according to unknown interaction laws ( such as gravity or elasticity ) , and a scenario where clothes are folded by the wind . The agent needs to identify CF variables and causal relationships between objects , and to predict future frames . CoPhy ( Baradel et al. , 2020 ) clearly dissociates the observed experiment from the CF one , and contains three complex 3D scenarios involving rigid body dynamics . However , the proposed method relies on the supervision of 3D object positions , while our work does not require any metadata . Physics-inspired ML — and learning visual dynamics — was addressed early on with recurrent models ( Srivastava et al. , 2015 ; Finn et al. , 2016 ; Lu et al. , 2017 ) , or with GANs ( Vondrick et al. , 2016 ; Mathieu et al. , 2016 ) . Kwon & Park ( 2019 ) adopt a Cycle-GAN with two discriminator heads , in charge of identifying false images and false sequences in order to improve the temporal consistency of the model in long-term prediction .
Nonetheless , the integration of causal reasoning and prior knowledge in these models is not straightforward . Typical work in physics-informed models relies on disentanglement between physics-informed features and residual features ( Villegas et al. , 2017a ; Denton & Birodkar , 2017 ) and may incorporate additional information based on the available priors on the scene ( Villegas et al. , 2017b ; Walker et al. , 2017 ) . PhyDNet ( Le Guen & Thome , 2020 ) explicitly disentangles visual features from dynamical features , which are supposed to follow a PDE . It achieves SOTA performance on Human3.6M ( Ionescu et al. , 2014 ) and Sea Surface Temperature ( de Bezenac et al. , 2018 ) , but we show that it fails on our challenging benchmark . Keypoint detection — is a well-researched problem in vision with widely used handcrafted baselines ( Lowe , 1999 ) . New unsupervised variants emerged recently and have been shown to provide a suitable object-centric representation , close to attention models , which simplify the use of physical and/or geometric priors ( Locatello et al. , 2020 ; Veerapaneni et al. , 2020 ) . They are of interest in robotics and reinforcement learning , where a physical agent has to interact with objects ( Kulkarni et al. , 2019 ; Manuelli et al. , 2020 ; 2019 ) . KeypointNet ( Suwajanakorn et al. , 2018 ) is a geometric reasoning framework , which discovers meaningful keypoints in 3D through spatial coherence between viewpoints . Close to our work , Minderer et al . ( 2019 ) propose to learn a keypoint-based stochastic dynamic model . However , the model is not suited for CF reasoning in physics and may suffer from inconsistency in the prediction of dynamics over long horizons . 3 THE FILTERED-COPHY BENCHMARK We build on CoPhy ( Baradel et al. , 2020 ) , retaining its strengths , but explicitly focusing on a counterfactual scenario in pixel space and eliminating the ill-posedness of the tasks we identified in the existing work .
Each data sample is called an experiment , represented as a pair of trajectories : an observed one with initial condition X0 = A and outcome X_{t=1..T} = B ( a sequence ) , and a counterfactual one with X̄0 = C and X̄_{t=1..T} = D ( a sequence ) . Throughout this paper we will use the letters A , B , C and D to distinguish the different parts of each experiment . The initial conditions A and C are linked through a do-operator do ( X0 = C ) , which modifies the initial condition ( Pearl , 2018 ) . Experiments are parameterized by a set of intrinsic physical parameters z which are not observable from a single initial image A . We refer to these as confounders . As in CoPhy , in our benchmark the do-operator is observed during training , but confounders are not — they have been used to generate the data , but are not used during training or testing . Following ( Pearl , 2018 ) , the counterfactual task consists in inferring the counterfactual outcome D given the observed trajectory AB and the counterfactual initial state C , following a three-step process : ( 1 ) Abduction : use the observed data AB to compute the counterfactual variables , i.e . physical parameters , which are not affected by the do-operation . ( 2 ) Action : update the causal model ; keep the same identified confounders and apply the do-operator , i.e . replace the initial state A by C. ( 3 ) Prediction : compute the counterfactual outcome D using the causal graph . The benchmark contains three scenarios involving rigid body dynamics . BlocktowerCF studies stable and unstable 3D cube towers ; the confounders are masses . BallsCF focuses on 2D collisions between moving spheres ( confounders are masses and initial velocities ) . CollisionCF is about collisions between a sphere and a cylinder ( confounders are masses and initial velocities ) ( Fig . 1 ) . Unlike CoPhy , our benchmark involves predictions in RGB pixel space only .
The do-operation consists in visually observable interventions on A , such as moving or removing an object . The confounders cannot be identified from the single-frame observation A ; identification requires the analysis of the entire AB trajectory . Identifiability of confounders — For an experiment ( AB , CD , z ) to be well-posed , the confounders z must be retrievable from AB . For example , since the masses of a stable cube tower cannot generally be identified in all situations , it can be impossible to predict the counterfactual outcome of an unstable tower , as collisions are not resolvable without known masses . In contrast to CoPhy , we ensure that each experiment ψ : ( X0 , z ) ↦ X_{t=1..T} , given initial condition X0 and confounders z , is well posed and satisfies the following constraints : Definition 1 ( Identifiability , ( Pearl , 2018 ) ) The experiment ( AB , CD , z ) is identifiable if , for any set of confounders z′ : ψ ( A , z ) = ψ ( A , z′ ) ⇒ ψ ( C , z ) = ψ ( C , z′ ) . ( 1 ) In an identifiable experiment there is no pair ( z , z′ ) that gives the same trajectory AB but different counterfactual outcomes CD . Details on implementation and impact are in appendix A.1 . Counterfactuality — We enforce sufficient difficulty of the problem through the meaningfulness of confounders . We remove initial situations where the choice of confounder values has no significant impact on the final outcome : Definition 2 ( Counterfactuality ) . Let z_k be the set of confounders z in which the k-th value has been modified . The experiment ( AB , CD , z ) is counterfactual if and only if : ∃k : ψ ( C , z_k ) ≠ ψ ( C , z ) . ( 2 ) In other words , we impose the existence of an object in the scene for which the ( unobserved ) physical properties have a determining effect on the trajectory . Details on how this constraint was enforced are given in appendix A.2 .
Temporal resolution — the physical laws we target involve highly non-linear phenomena , in particular collisions and resting contacts . Collisions are difficult to learn because their actions are intense , brief , and highly non-linear , depending on the geometry of the objects in 3D space . The temporal resolution of physical simulations is therefore of prime importance . A parallel can be made with the Nyquist–Shannon sampling theorem , as a trajectory sampled at too low a frequency cannot be reconstructed with precision . We simulate and record trajectories at 25 FPS , compared to the 5 FPS chosen in CoPhy , a choice justified by two experiments . Firstly , Fig . 2 shows the trajectories of the centers of mass of cubes in BlocktowerCF , with colored dots shown at 25 FPS and black dots at 5 FPS . We can see that collisions with the ground fall between consecutive samples at 5 FPS , making it hard to infer physical laws from regularities in the data at this frequency . A second experiment involves learning a prediction model at different frequencies , confirming the choice of 25 FPS — details are given in appendix A.3 . | This work extends CoPhy, in which the author(s) address the problem of predicting counterfactual outcomes of physics-based tasks from pixel space (CoPhy used ground-truth object positions). To do this, they learn a keypoint representation of the scene and use it to extract the confounders (such as velocities of objects, masses, etc.). The author(s) also propose a new benchmark (built upon CoPhy's benchmark) for counterfactual prediction (after intervening on the initial set of objects in the scene) which satisfies the Identifiability and the Counterfactuality constraints of causality. | SP:154f672ce4fdc8c020389c9d043c4c00f482c630
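The temporal-resolution argument above can be made concrete with a toy bouncing trajectory: an impact lasting a fraction of a second is caught by 25 FPS sampling but can fall entirely between two 5 FPS samples. The trajectory below is a hypothetical illustration (a 1 m drop with restitution 0.8), not data from the benchmark:

```python
T_HIT = 0.45  # time of the ground impact, in seconds

def height(t):
    """Toy height of an object dropped from 1.0 m that bounces at t = T_HIT."""
    if t < T_HIT:
        return 1.0 - 4.9 * t * t                 # free fall, g = 9.8 m/s^2
    dt = t - T_HIT
    v_rebound = 0.8 * 9.8 * T_HIT                # restitution coefficient 0.8
    return (1.0 - 4.9 * T_HIT ** 2) + v_rebound * dt - 4.9 * dt * dt

def sample(fps, duration=1.0):
    """Sample the trajectory at a given frame rate, as a recorded video would."""
    n = int(duration * fps)
    return [height(i / fps) for i in range(n + 1)]

lo = min(sample(5))   # at 5 FPS the frames straddle the impact: min height ~0.22 m
hi = min(sample(25))  # at 25 FPS a frame lands near the ground: min height ~0.05 m
```

At 5 FPS the sampled trajectory never shows the object near the ground, so the collision itself is invisible in the data, which is exactly the failure mode the benchmark's 25 FPS choice avoids.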
Filtered-CoPhy: Unsupervised Learning of Counterfactual Physics in Pixel Space | 1 INTRODUCTION . Reasoning on complex , multi-modal and high-dimensional data is a natural ability of humans and other intelligent agents ( Martin-Ordas et al. , 2008 ) , and one of the most important and difficult challenges of AI . While machine learning is well suited for capturing regularities in high-dimensional signals , in particular by using high-capacity deep networks , some applications also require an accurate modeling of causal relationships . This is particularly relevant in physics , where causation is considered a fundamental axiom . In the context of machine learning , correctly capturing or modeling causal relationships can also lead to more robust predictions , in particular better generalization to out-of-distribution samples , indicating that a model has overcome the exploitation of biases and shortcuts in the training data . In recent literature on physics-inspired machine learning , causality has often been enforced through the addition of prior knowledge about the physical laws that govern the studied phenomena , e.g . ( Yin et al. , 2021 ) . A similar idea lies behind structural causal models , widely used in the causal inference community , where domain experts model these relationships directly in a graphical notation . This line of work makes it possible to perform predictions beyond statistical forecasting , for instance by predicting unobserved counterfactuals , the impact of unobserved interventions ( Balke & Pearl , 1994 ) — “ What alternative outcome would have happened if the observed event X had been replaced with an event Y ( after an intervention ) ? ” Counterfactuals are interesting , as causality intervenes through the effective modification of an outcome . As an example , taken from ( Schölkopf et al.
, 2021 ) , an agent can identify the direction of a causal relationship between an umbrella and rain from the fact that removing an umbrella will not affect the weather . We focus on counterfactual reasoning on high-dimensional signals , in particular videos of complex physical processes . Learning such causal interactions from data is a challenging task , as spurious correlations are naturally and easily picked up by trained models . Previous work in this direction was restricted to discrete outcomes , as in CLEVRER ( Yi et al. , 2020 ) , or to the prediction of 3D trajectories , as in CoPhy ( Baradel et al. , 2020 ) , which also requires supervision of object positions . In this work , we address the hard problem of predicting the alternative ( counterfactual ) outcomes of physical processes in pixel space , i.e . we forecast sequences of 2D projective views of the 3D scene , requiring the prediction over long horizons ( 150 frames corresponding to ∼ 6 seconds ) . We conjecture that causal relationships can be modeled on a low dimensional manifold of the data , and propose a suitable latent representation for the causal model , in particular for the estimation of the confounders and the dynamic model itself . Similar to V-CDN ( Kulkarni et al. , 2019 ; Li et al. , 2020 ) , our latent representation is based on the unsupervised discovery of keypoints , complemented by additional information in our case . Indeed , while keypoint-based representations can easily be encoded from visual input , as stable mappings from images to points arise naturally , we claim that they are not the most suitable representation for dynamic models . 
We identified and addressed two principal problems : ( i ) the individual points of a given set are discriminated through their 2D positions only ; therefore , shape , geometry and relationships between multiple moving objects need to be encoded through the relative positions of points to each other , and ( ii ) the optimal representation for a physical dynamic model is not necessarily a 2D keypoint space , where the underlying object dynamics has also been subject to the imaging process ( projective geometry ) . We propose a new counterfactual model , which learns a sparse representation of visual input in the form of 2D keypoints coupled with a ( small ) set of coefficients per point modeling complementary shape and appearance information . Confounders ( object masses and initial velocities ) in the studied problem are extracted from this representation , and a learned dynamic model forecasts the entire trajectory of these keypoints from a single ( counterfactual ) observation . Building on recent work in data-driven analysis of dynamic systems ( Janny et al. , 2021 ; Peralez & Nadri , 2021 ) , the dynamic model is presented in a higher-dimensional state space , where dynamics are less complex . We show that these design choices are key to the performance of our model , and that they significantly improve the capability to perform long-term predictions . Our proposed model outperforms strong baselines for physics-informed learning of video prediction . We introduce a new challenging dataset for this problem , which builds on CoPhy , a recent counterfactual physics benchmark ( Baradel et al. , 2020 ) . We go beyond the prediction of sequences of 3D positions and propose a counterfactual task for predictions in pixel space after interventions on initial conditions ( displacing , re-orienting or removing objects ) .
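A minimal sketch of such a keypoint-plus-coefficient representation, assuming isotropic Gaussian bumps and a single scalar coefficient per point (the paper's actual filters and coefficient dimensionality may differ):

```python
import numpy as np

def render_keypoint_maps(keypoints, coeffs, height, width, sigma=2.0):
    """Render one map per keypoint: an isotropic Gaussian bump centred at the
    point, scaled by that point's coefficient (a stand-in for the per-point
    shape/appearance coefficients described in the text).

    keypoints: (K, 2) array of (x, y) pixel coordinates
    coeffs:    (K,) array of per-point coefficients
    returns:   (K, height, width) array
    """
    ys, xs = np.mgrid[0:height, 0:width]
    maps = np.empty((len(keypoints), height, width))
    for k, ((cx, cy), a) in enumerate(zip(keypoints, coeffs)):
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        maps[k] = a * np.exp(-d2 / (2.0 * sigma ** 2))
    return maps

maps = render_keypoint_maps(np.array([[8.0, 8.0], [24.0, 16.0]]),
                            np.array([1.0, 0.5]), 32, 32)
```

Scaling (or, more generally, filtering) each bump by per-point coefficients lets the map carry appearance information that raw 2D positions alone cannot.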
In contrast to the literature , our benchmark also better controls for the identifiability of causal relationships and counterfactual variables and provides more accurate physics simulation . 2 RELATED WORK . Counterfactual ( CF ) reasoning — and learning of causal relationships in ML was made popular by works of J. Pearl , e.g . ( Pearl , 2000 ) , which motivate and introduce mathematical tools detailing the principles of do-calculus , i.e . study of unobserved interventions on data . A more recent survey links these concepts to the literature in ML ( Schölkopf et al. , 2021 ) . The last years have seen the emergence of several benchmarks for CF reasoning in physics . CLEVRER ( Yi et al. , 2020 ) is a visual question answering dataset , where an agent is required to answer a CF question after observing a video showing 3D objects moving and colliding . Li et al . ( 2020 ) introduce a CF benchmark with two tasks : a scenario where balls interact with each other according to unknown interaction laws ( such as gravity or elasticity ) , and a scenario where clothes are folded by the wind . The agent needs to identify CF variables and causal relationships between objects , and to predict future frames . CoPhy ( Baradel et al. , 2020 ) clearly dissociates the observed experiment from the CF one , and contains three complex 3D scenarios involving rigid body dynamics . However , the proposed method relies on the supervision of 3D object positions , while our work does not require any meta data . Physics-inspired ML — and learning visual dynamics has been dealt early on with recurrent models ( Srivastava et al. , 2015 ; Finn et al. , 2016 ; Lu et al. , 2017 ) , or GANs ( Vondrick et al. , 2016 ; Mathieu et al. , 2016 ) . Kwon & Park ( 2019 ) adopt a Cycle-GAN with two discriminator heads , in charge of identifying false images and false sequences in order to improve the temporal consistency of the model in long term prediction . 
Nonetheless , the integration of causal reasoning and prior knowledge in these models is not straightforward . Typical work in physics-informed models relies on disentanglement between physics-informed features and residual features ( Villegas et al. , 2017a ; Denton & Birodkar , 2017 ) and may incorporate additional information based on the available priors on the scene ( Villegas et al. , 2017b ; Walker et al. , 2017 ) . PhyDNet Le Guen & Thome ( 2020 ) explicitly disentangles visual features from dynamical features , which are supposed to follow a PDE . It achieves SOTA performance on Human3.6M ( Ionescu et al. , 2014 ) and Sea Surface Temperature ( de Bezenac et al. , 2018 ) , but we show that it fails on our challenging benchmark . Keypoint detection — is a well researched problem in vision with widely used handcrafted baselines ( Lowe , 1999 ) . New unsupervised variants emerged recently and have been shown to provide a suitable object-centric representation , close to attention models , which simplify the use of physical and/or geometric priors ( Locatello et al. , 2020 ; Veerapaneni et al. , 2020 ) . They are of interest in robotics and reinforcement learning , where a physical agent has to interact with objects ( Kulkarni et al. , 2019 ; Manuelli et al. , 2020 ; 2019 ) . KeypointsNet ( Suwajanakorn et al. , 2018 ) is a geometric reasoning framework , which discovers meaningful keypoints in 3D through spatial coherence between viewpoints . Close to our work , ( Minderer et al. , 2019 ) proposes to learn a keypoints-based stochastic dynamic model . However , the model is not suited for CF reasoning in physics and may suffer from inconsistency in the prediction of dynamics over long horizons . 3 THE FILTERED-COPHY BENCHMARK We build on CoPhy ( Baradel et al. , 2020 ) , retaining its strengths , but explicitly focusing on a counterfactual scenario in pixel space and eliminating the ill-posedness of tasks we identified in the existing work . 
Each data sample is called an experiment , represented as a pair of trajectories : an observed one with initial condition $X_0 = A$ and outcome $X_{t=1..T} = B$ ( a sequence ) , and a counterfactual one $\bar{X}_0 = C$ and $\bar{X}_{t=1..T} = D$ ( a sequence ) . Throughout this paper we will use the letters A , B , C and D to distinguish the different parts of each experiment . The initial conditions A and C are linked through a do-operator $\mathrm{do} ( X_0 = C )$ , which modifies the initial condition ( Pearl , 2018 ) . Experiments are parameterized by a set of intrinsic physical parameters $z$ which are not observable from a single initial image A . We refer to these as confounders . As in CoPhy , in our benchmark the do-operator is observed during training , but confounders are not — they have been used to generate the data , but are not used during training or testing . Following ( Pearl , 2018 ) , the counterfactual task consists in inferring the counterfactual outcome D given the observed trajectory AB and the counterfactual initial state C , following a three-step process : ( 1 ) Abduction : use the observed data AB to compute the counterfactual variables , i.e . physical parameters , which are not affected by the do-operation . ( 2 ) Action : update the causal model ; keep the same identified confounders and apply the do-operator , i.e . replace the initial state A by C. ( 3 ) Prediction : compute the counterfactual outcome D using the causal graph . The benchmark contains three scenarios involving rigid body dynamics . BlocktowerCF studies stable and unstable 3D cube towers ; the confounders are masses . BallsCF focuses on 2D collisions between moving spheres ( confounders are masses and initial velocities ) . CollisionCF is about collisions between a sphere and a cylinder ( confounders are masses and initial velocities ) ( Fig . 1 ) . Unlike CoPhy , our benchmark involves predictions in RGB pixel space only .
The do-operation consists in visually observable interventions on A , such as moving or removing an object . The confounders cannot be identified from the single-frame observation A ; identification requires the analysis of the entire AB trajectory . Identifiability of confounders — For an experiment ( AB , CD , z ) to be well-posed , the confounders z must be retrievable from AB . For example , since the masses of a stable cube tower cannot be identified generally in all situations , it can be impossible to predict the counterfactual outcome of an unstable tower , as collisions are not resolvable without known masses . In contrast to CoPhy , we ensure that each experiment $\psi : ( X_0 , z ) \mapsto X_{t=1..T}$ , given initial condition $X_0$ and confounders $z$ , is well posed and satisfies the following constraints : Definition 1 ( Identifiability , ( Pearl , 2018 ) ) The experiment ( AB , CD , z ) is identifiable if , for any set of confounders $z'$ : $\psi ( A , z ) = \psi ( A , z' ) \Rightarrow \psi ( C , z ) = \psi ( C , z' )$ . ( 1 ) In an identifiable experiment there is no pair $( z , z' )$ that gives the same trajectory AB but different counterfactual outcomes CD . Details on implementation and impact are in appendix A.1 . Counterfactuality — We enforce sufficient difficulty of the problem through the meaningfulness of confounders . We remove initial situations where the choice of confounder values has no significant impact on the final outcome : Definition 2 ( Counterfactuality ) Let $z^k$ be the set of confounders $z$ where the $k$-th value has been modified . The experiment ( AB , CD , z ) is counterfactual if and only if : $\exists k : \psi ( C , z^k ) \neq \psi ( C , z )$ . ( 2 ) In other words , we impose the existence of an object of the scene for which the ( unobserved ) physical properties have a determining effect on the trajectory . Details on how this constraint was enforced are given in appendix A.2 .
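Definitions 1 and 2 translate directly into filter predicates over a trajectory simulator. A toy sketch, assuming a deterministic simulator `psi` with comparable outputs and a finite pool of candidate confounders (all names and the `+1` perturbation are illustrative):

```python
def is_identifiable(psi, A, C, z, z_alternatives):
    """Definition 1: no alternative confounder z' may reproduce the observed
    trajectory psi(A, z) while giving a different counterfactual outcome."""
    for zp in z_alternatives:
        if psi(A, zp) == psi(A, z) and psi(C, zp) != psi(C, z):
            return False
    return True

def is_counterfactual(psi, C, z):
    """Definition 2: perturbing at least one confounder value must change the
    counterfactual outcome (here the perturbation is an arbitrary +1)."""
    for k in range(len(z)):
        zk = list(z)
        zk[k] += 1  # z^k: the k-th confounder value is modified
        if psi(C, tuple(zk)) != psi(C, z):
            return True
    return False

# toy deterministic simulator: the outcome depends on the initial state and
# the sum of the confounders
psi = lambda x0, z: x0 * sum(z)
```

Experiments failing either predicate would be discarded when building the benchmark, which is the role the appendix attributes to these two constraints.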
Temporal resolution — the physical laws we target involve highly non-linear phenomena , in particular collisions and resting contacts . Collisions are difficult to learn because their actions are intense , brief , and highly non-linear , depending on the geometry of the objects in 3D space . The temporal resolution of physical simulations is of prime importance . A parallel can be made with the Nyquist-Shannon sampling theorem , as a trajectory sampled with too low a frequency cannot be reconstructed with precision . We simulate and record trajectories at 25 FPS , compared to the 5 FPS chosen in CoPhy , a choice justified by two experiments . Firstly , Fig . 2 shows the trajectories of the centers of mass of cubes in BlocktowerCF ; colored dots are shown at 25 FPS and black dots at 5 FPS . We can see that collisions with the ground fall below the sampling rate of 5 FPS , making it hard to infer physical laws from regularities in data at this frequency . A second experiment involves learning a prediction model at different frequencies , confirming the choice of 25 FPS — details are given in appendix A.3 . | The paper introduces a testing approach, dataset, and method for counterfactual video predictions, using 3D physics simulation videos. Importantly, the approach predicts directly from pixel space rather than requiring spoon-fed keypoints. It employs a counterfactual approach to establish a network's capacity to learn causal relations. Separating the problem into one of parsing the inputs into keypoints and additional coefficients, inferring object attributes, and sequential prediction from an input frame + object attributes, it proposes an architecture which is based on combining modules for each of these learning tasks (but where keypoints are learned in an unsupervised way). Besides this key architectural innovation, it adds an inductive bias by applying directional Gaussian filters to the keypoint maps.
The paper checks that the network actually works as intended by empirically examining the effect of changing object coefficients. Three other ablation analyses (one that combines two of the modules rather than separating them via stop-grad, one that removes coefficients, and one comparing the handcrafted filter bank with learnable ones) add confidence to the approach. In the supplement, the paper also analyzes in detail how the method compares to a previous method (Transporter). It uses sensible benchmarks including previous methods and common-sense baselines where either the original or counterfactual input is used as a prediction, and does relatively well on this challenging problem, although the video on the website shows that there is still plenty of room for improvement. | SP:154f672ce4fdc8c020389c9d043c4c00f482c630
Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling Scheme | 1 INTRODUCTION . Voice conversion ( VC ) is the task of copying the target speaker ’ s voice while preserving the linguistic content of the utterance pronounced by the source speaker . Practical VC applications often require a model which is able to operate in one-shot mode ( i.e . when only one reference utterance is provided to copy the target speaker ’ s voice ) for any source and target speakers . Such models are usually referred to as one-shot many-to-many models ( or sometimes zero-shot many-to-many models , or just any-to-any VC models ) . It is challenging to build such a model since it should be able to adapt to a new unseen voice having only one spoken utterance pronounced with it , so it was not until recently that successful one-shot VC solutions started to appear . Conventional one-shot VC models are designed as autoencoders whose latent space ideally contains only the linguistic content of the encoded utterance while target voice identity information ( usually taking shape of speaker embedding ) is fed to the decoder as conditioning . Whereas in the pioneering AutoVC model ( Qian et al. , 2019 ) only speaker embedding from the pre-trained speaker verification network was used as conditioning , several other models improved on AutoVC enriching conditioning with phonetic features such as pitch and loudness ( Qian et al. , 2020 ; Nercessian , 2020 ) , or training voice conversion and speaker embedding networks jointly ( Chou & Lee , 2019 ) . Also , several papers ( Lin et al. , 2021 ; Ishihara & Saito , 2020 ; Liu et al. , 2021b ) made use of attention mechanism to better fuse specific features of the reference utterance into the source utterance thus improving the decoder performance . 
Apart from providing the decoder with sufficiently rich information , one of the main problems autoencoder VC models face is to disentangle source speaker identity from speech content in the encoder . Some models ( Qian et al. , 2019 ; 2020 ; Nercessian , 2020 ) solve this problem by introducing an information bottleneck . Among other popular solutions of the disentanglement problem one can mention applying vector quantization technique to the content information ( Wu et al. , 2020 ; Wang et al. , 2021 ) , utilizing features of Variational AutoEncoders ( Luong & Tran , 2021 ; Saito et al. , 2018 ; Chou & Lee , 2019 ) , introducing instance normalization layers ( Chou & Lee , 2019 ; Chen et al. , 2021b ) , and using Phonetic Posteriorgrams ( PPGs ) ( Nercessian , 2020 ; Liu et al. , 2021b ) . The model we propose in this paper solves the disentanglement problem by employing the encoder predicting “ average voice ” : it is trained to transform mel features corresponding to each phoneme into mel features corresponding to this phoneme averaged across a large multi-speaker dataset . As for decoder , in our VC model , it is designed as a part of a Diffusion Probabilistic Model ( DPM ) since this class of generative models has shown very good results in speech-related tasks like raw waveform generation ( Chen et al. , 2021a ; Kong et al. , 2021 ) and mel feature generation ( Popov et al. , 2021 ; Jeong et al. , 2021 ) . However , this decoder choice poses a problem of slow inference because DPM forward pass scheme is iterative and to obtain high-quality results it is typically necessary to run it for hundreds of iterations ( Ho et al. , 2020 ; Nichol & Dhariwal , 2021 ) . Addressing this issue , we develop a novel inference scheme that significantly reduces the number of iterations sufficient to produce samples of decent quality and does not require model re-training . 
Although several attempts have been recently made to reduce the number of DPM inference steps ( Song et al. , 2021a ; San-Roman et al. , 2021 ; Watson et al. , 2021 ; Kong & Ping , 2021 ; Chen et al. , 2021a ) , most of them apply to some particular types of DPMs . In contrast , our approach generalizes to all popular kinds of DPMs and has a strong connection with likelihood maximization . This paper has the following structure : in Section 2 we present a one-shot many-to-many VC model and describe DPM it relies on ; Section 3 introduces a novel DPM sampling scheme and establishes its connection with likelihood maximization ; the experiments regarding voice conversion task as well as those demonstrating the benefits of the proposed sampling scheme are described in Section 4 ; we conclude in Section 5 . 2 VOICE CONVERSION DIFFUSION MODEL . As with many other VC models , the one we propose belongs to the family of autoencoders . In fact , any conditional DPM with data-dependent prior ( i.e . terminal distribution of forward diffusion ) can be seen as such : forward diffusion gradually adding Gaussian noise to data can be regarded as encoder while reverse diffusion trying to remove this noise acts as a decoder . DPMs are trained to minimize the distance ( expressed in different terms for different model types ) between the trajectories of forward and reverse diffusion processes thus , speaking from the perspective of autoencoders , minimizing reconstruction error . Data-dependent priors have been proposed by Popov et al . ( 2021 ) and Lee et al . ( 2021 ) , and we follow the former paper due to the flexibility of the continuous DPM framework used there . Our approach is summarized in Figure 1 . 2.1 ENCODER . We choose average phoneme-level mel features as speaker-independent speech representation . 
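A minimal sketch of how such per-phoneme average mel features could be computed and turned into frame-level training targets, assuming a frame-level phoneme alignment is already available (`average_voice_targets` is a hypothetical helper; real pipelines stream over the corpus rather than holding it in memory):

```python
import numpy as np

def average_voice_targets(mels, phoneme_ids):
    """Compute per-phoneme mean mel features over a corpus, then rebuild each
    utterance's 'average voice' target by replacing every frame with the mean
    of its aligned phoneme.

    mels:        list of (T_i, n_mels) float arrays
    phoneme_ids: list of (T_i,) integer arrays (frame-level alignment)
    """
    n_mels = mels[0].shape[1]
    sums, counts = {}, {}
    for mel, seq in zip(mels, phoneme_ids):
        for frame, p in zip(mel, seq):
            p = int(p)
            sums[p] = sums.get(p, np.zeros(n_mels)) + frame
            counts[p] = counts.get(p, 0) + 1
    means = {p: sums[p] / counts[p] for p in sums}
    targets = [np.stack([means[int(p)] for p in seq]) for seq in phoneme_ids]
    return means, targets

# toy corpus: one utterance, three frames, two phonemes
mels = [np.array([[1.0, 1.0], [3.0, 3.0], [5.0, 5.0]])]
ph = [np.array([0, 0, 1])]
means, targets = average_voice_targets(mels, ph)
```

The resulting `targets` play the role of the ground-truth "average voice" mel-spectrograms against which the encoder's MSE loss would be computed.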
To train the encoder to convert input mel-spectrograms into those of “ average voice ” , we take three steps : ( i ) first , we apply the Montreal Forced Aligner ( McAuliffe et al. , 2017 ) to the large-scale multi-speaker LibriTTS dataset ( Zen et al. , 2019 ) to align speech frames with phonemes ; ( ii ) next , we obtain average mel features for each particular phoneme by aggregating its mel features across the whole LibriTTS dataset ; ( iii ) the encoder is then trained to minimize the mean square error between output mel-spectrograms and ground-truth “ average voice ” mel-spectrograms ( i.e . input mel-spectrograms where each phoneme mel feature is replaced with the average one calculated on the previous step ) . The encoder has exactly the same Transformer-based architecture used in Grad-TTS ( Popov et al. , 2021 ) except that its inputs are mel features rather than character or phoneme embeddings . Note that unlike Grad-TTS the encoder is trained separately from the decoder described in the next section . 2.2 DECODER . Whereas the encoder parameterizes the terminal distribution of the forward diffusion ( i.e . the prior ) , the reverse diffusion is parameterized with the decoder . Following Song et al . ( 2021c ) we use Itô calculus and define diffusions in terms of stochastic processes rather than discrete-time Markov chains . The general DPM framework we utilize consists of forward and reverse diffusions given by the following Stochastic Differential Equations ( SDEs ) : $dX_t = \frac{1}{2}\beta_t ( \bar{X} - X_t ) \, dt + \sqrt{\beta_t} \, d\overrightarrow{W}_t$ , ( 1 ) $d\hat{X}_t = \left( \frac{1}{2} ( \bar{X} - \hat{X}_t ) - s_\theta ( \hat{X}_t , \bar{X} , t ) \right) \beta_t \, dt + \sqrt{\beta_t} \, d\overleftarrow{W}_t$ , ( 2 ) where $t \in [ 0 , 1 ]$ , $\overrightarrow{W}$ and $\overleftarrow{W}$ are two independent Wiener processes in $\mathbb{R}^n$ , $\beta_t$ is a non-negative function referred to as the noise schedule , $s_\theta$ is the score function with parameters $\theta$ , and $\bar{X}$ is an $n$-dimensional vector . It can be shown ( Popov et al.
, 2021 ) that the forward SDE ( 1 ) allows for an explicit solution : $\mathrm{Law} ( X_t | X_0 ) = \mathcal{N} \left( e^{-\frac{1}{2}\int_0^t \beta_s ds} X_0 + \left( 1 - e^{-\frac{1}{2}\int_0^t \beta_s ds} \right) \bar{X} ,\ \left( 1 - e^{-\int_0^t \beta_s ds} \right) I \right)$ , ( 3 ) where $I$ is the $n \times n$ identity matrix . Thus , if the noise follows the linear schedule $\beta_t = \beta_0 + t ( \beta_1 - \beta_0 )$ for $\beta_0$ and $\beta_1$ such that $e^{-\int_0^1 \beta_s ds}$ is close to zero , then $\mathrm{Law} ( X_1 )$ is close to $\mathcal{N} ( \bar{X} , I )$ , which is the prior in this DPM . The reverse diffusion ( 2 ) is trained by minimizing the weighted $L_2$ loss : $\theta^* = \arg\min_\theta \mathcal{L} ( \theta ) = \arg\min_\theta \int_0^1 \lambda_t \mathbb{E}_{X_0 , X_t} \| s_\theta ( X_t , \bar{X} , t ) - \nabla \log p_{t|0} ( X_t | X_0 ) \|_2^2 \, dt$ , ( 4 ) where $p_{t|0} ( X_t | X_0 )$ is the probability density function ( pdf ) of the conditional distribution ( 3 ) and $\lambda_t = 1 - e^{-\int_0^t \beta_s ds}$ . The distribution ( 3 ) is Gaussian , so we have $\nabla \log p_{t|0} ( X_t | X_0 ) = - \frac{ X_t - X_0 e^{-\frac{1}{2}\int_0^t \beta_s ds} - \bar{X} \left( 1 - e^{-\frac{1}{2}\int_0^t \beta_s ds} \right) }{ 1 - e^{-\int_0^t \beta_s ds} }$ . ( 5 ) At training , the time variable $t$ is sampled uniformly from $[ 0 , 1 ]$ , noisy samples $X_t$ are generated according to formula ( 3 ) , and formula ( 5 ) is used to calculate the loss function $\mathcal{L}$ on these samples . Note that $X_t$ can be sampled without the necessity to calculate intermediate values $\{ X_s \}_{0 < s < t}$ , which makes the optimization task ( 4 ) time and memory efficient . A well-trained reverse diffusion ( 2 ) has trajectories that are close to those of the forward diffusion ( 1 ) , so generating data with this DPM can be performed by sampling $\hat{X}_1$ from the prior $\mathcal{N} ( \bar{X} , I )$ and solving SDE ( 2 ) backwards in time . The DPM described above was introduced by Popov et al . ( 2021 ) for the text-to-speech task , and we adapt it for our purposes . We put $\bar{X} = \varphi ( X_0 )$ where $\varphi$ is the encoder , i.e . $\bar{X}$ is the “ average voice ” mel-spectrogram which we want to transform into that of the target voice .
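Equations (3) and (5) give closed forms for drawing the noisy sample $X_t$ and for its score regression target. A small NumPy sketch under the linear schedule (the endpoint values are illustrative, not necessarily the paper's):

```python
import numpy as np

BETA0, BETA1 = 0.05, 20.0  # illustrative linear-schedule endpoints

def beta_integral(t):
    """Closed form of the integral of beta_s over [0, t] for the linear
    schedule beta_t = beta0 + t (beta1 - beta0)."""
    return BETA0 * t + 0.5 * (BETA1 - BETA0) * t ** 2

def sample_xt(x0, xbar, t, rng):
    """Draw X_t | X_0 from the Gaussian of equation (3)."""
    a = np.exp(-0.5 * beta_integral(t))       # e^{-(1/2) int beta}
    var = 1.0 - np.exp(-beta_integral(t))     # 1 - e^{-int beta}
    return a * x0 + (1.0 - a) * xbar + np.sqrt(var) * rng.standard_normal(x0.shape)

def score_target(xt, x0, xbar, t):
    """Gradient of log p_{t|0}(X_t | X_0) from equation (5): the regression
    target for the score network s_theta."""
    a = np.exp(-0.5 * beta_integral(t))
    var = 1.0 - np.exp(-beta_integral(t))
    return -(xt - a * x0 - (1.0 - a) * xbar) / var

x0, xbar = np.zeros(4), np.ones(4)
xt = sample_xt(x0, xbar, 0.5, np.random.default_rng(0))
```

Because the transition kernel is Gaussian, `sample_xt` jumps directly to any time t without simulating intermediate states, which is exactly what makes the training objective (4) time and memory efficient.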
We condition the decoder $s_\theta = s_\theta ( \hat{X}_t , \bar{X} , g_t ( Y ) , t )$ on some trainable function $g_t ( Y )$ to provide it with information about the target speaker ( $Y$ stands for forward trajectories of the target mel-spectrogram at inference and the ones of the training mel-spectrogram at training ) . This function is a neural network trained jointly with the decoder . We experimented with three input types for this network : • d-only – the input is the speaker embedding extracted from the target mel-spectrogram $Y_0$ with the pre-trained speaker verification network employed in ( Jia et al. , 2018 ) ; • wodyn – in addition , the noisy target mel-spectrogram $Y_t$ is used as input ; • whole – in addition , the whole dynamics of the target mel-spectrogram under the forward diffusion $\{ Y_s \mid s = 0.5/15 , 1.5/15 , \ldots , 14.5/15 \}$ is used as input . The decoder architecture is based on U-Net ( Ronneberger et al. , 2015 ) and is the same as in Grad-TTS but with four times more channels to better capture the whole range of human voices . The speaker conditioning network $g_t ( Y )$ is composed of 2D convolutions and MLPs and described in detail in Appendix H. Its output is a 128-dimensional vector which is broadcast-concatenated to the concatenation of $\hat{X}_t$ and $\bar{X}$ as additional 128 channels . | This paper proposes a method to perform voice conversion through recent methods in Diffusion Based Probability Modeling (DPM) through Stochastic Differential Equations (SDE), building upon recent work Grad-TTS and Glow-TTS. The architecture is an encoder-decoder setup, trained separately, described in upcoming paragraphs. Viewed at a high level, the encoder maps input to 'noise', and the decoder inverts noise to output, in keeping with the SDE formalism. To transform average features to output voice, speaker info is conditioned on the average voice features before feeding to decoder. The encoder learns to map input voice features to an average mel spectrogram.
In order to do this, the Montreal Forced Aligner is used to align mel frames to phonemes. The average output voice (for encoder training) is obtained by aligning and averaging out corresponding phonemes in the dataset. The main novelty in this paper is claimed to be in the decoder setup solving the reverse SDE. The authors derive a modified SDE that does MLE inference on the original reverse SDE. Through this approach, the claims are that the likelihoods are better, and its estimation is 'faster' (ostensibly, owing to getting around step size limitations). Evaluations are carried out on VCTK and LibriTTS, are quite well done, and the samples provided show that the approach works. Comparisons are made against several other approaches, including subjective evaluations and FID scores, and look for similarity and naturalness metrics. They also compare against different flavors of SDE setups (Variance Exploding, Preserving, etc.). | SP:b0ee9546d10f3d86e90ae5a873a4eb524b1fee48
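The broadcast-concatenation of the 128-dimensional speaker vector described in the text above can be illustrated with a small array sketch (channel counts and shapes are assumptions):

```python
import numpy as np

def broadcast_concat(x_hat_t, x_bar, g):
    """Broadcast a speaker-conditioning vector g over time/frequency and
    append it as extra channels to the noisy input and the 'average voice'
    conditioning.

    x_hat_t, x_bar: (C, n_mels, T) feature maps
    g:              (D,) speaker vector (D = 128 in the text)
    returns:        (2C + D, n_mels, T)
    """
    _, n_mels, T = x_hat_t.shape
    g_map = np.broadcast_to(g[:, None, None], (g.shape[0], n_mels, T))
    return np.concatenate([x_hat_t, x_bar, g_map], axis=0)

out = broadcast_concat(np.zeros((1, 80, 10)), np.zeros((1, 80, 10)),
                       np.arange(128.0))
```

Broadcasting makes the same speaker vector visible at every time-frequency location, so every U-Net convolution sees the conditioning signal.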
Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling Scheme | 1 INTRODUCTION . Voice conversion ( VC ) is the task of copying the target speaker ’ s voice while preserving the linguistic content of the utterance pronounced by the source speaker . Practical VC applications often require a model which is able to operate in one-shot mode ( i.e . when only one reference utterance is provided to copy the target speaker ’ s voice ) for any source and target speakers . Such models are usually referred to as one-shot many-to-many models ( or sometimes zero-shot many-to-many models , or just any-to-any VC models ) . It is challenging to build such a model since it should be able to adapt to a new unseen voice having only one spoken utterance pronounced with it , so it was not until recently that successful one-shot VC solutions started to appear . Conventional one-shot VC models are designed as autoencoders whose latent space ideally contains only the linguistic content of the encoded utterance while target voice identity information ( usually taking shape of speaker embedding ) is fed to the decoder as conditioning . Whereas in the pioneering AutoVC model ( Qian et al. , 2019 ) only speaker embedding from the pre-trained speaker verification network was used as conditioning , several other models improved on AutoVC enriching conditioning with phonetic features such as pitch and loudness ( Qian et al. , 2020 ; Nercessian , 2020 ) , or training voice conversion and speaker embedding networks jointly ( Chou & Lee , 2019 ) . Also , several papers ( Lin et al. , 2021 ; Ishihara & Saito , 2020 ; Liu et al. , 2021b ) made use of attention mechanism to better fuse specific features of the reference utterance into the source utterance thus improving the decoder performance . 
Apart from providing the decoder with sufficiently rich information , one of the main problems autoencoder VC models face is to disentangle source speaker identity from speech content in the encoder . Some models ( Qian et al. , 2019 ; 2020 ; Nercessian , 2020 ) solve this problem by introducing an information bottleneck . Among other popular solutions of the disentanglement problem one can mention applying vector quantization technique to the content information ( Wu et al. , 2020 ; Wang et al. , 2021 ) , utilizing features of Variational AutoEncoders ( Luong & Tran , 2021 ; Saito et al. , 2018 ; Chou & Lee , 2019 ) , introducing instance normalization layers ( Chou & Lee , 2019 ; Chen et al. , 2021b ) , and using Phonetic Posteriorgrams ( PPGs ) ( Nercessian , 2020 ; Liu et al. , 2021b ) . The model we propose in this paper solves the disentanglement problem by employing the encoder predicting “ average voice ” : it is trained to transform mel features corresponding to each phoneme into mel features corresponding to this phoneme averaged across a large multi-speaker dataset . As for decoder , in our VC model , it is designed as a part of a Diffusion Probabilistic Model ( DPM ) since this class of generative models has shown very good results in speech-related tasks like raw waveform generation ( Chen et al. , 2021a ; Kong et al. , 2021 ) and mel feature generation ( Popov et al. , 2021 ; Jeong et al. , 2021 ) . However , this decoder choice poses a problem of slow inference because DPM forward pass scheme is iterative and to obtain high-quality results it is typically necessary to run it for hundreds of iterations ( Ho et al. , 2020 ; Nichol & Dhariwal , 2021 ) . Addressing this issue , we develop a novel inference scheme that significantly reduces the number of iterations sufficient to produce samples of decent quality and does not require model re-training . 
Although several attempts have been recently made to reduce the number of DPM inference steps ( Song et al. , 2021a ; San-Roman et al. , 2021 ; Watson et al. , 2021 ; Kong & Ping , 2021 ; Chen et al. , 2021a ) , most of them apply to some particular types of DPMs . In contrast , our approach generalizes to all popular kinds of DPMs and has a strong connection with likelihood maximization . This paper has the following structure : in Section 2 we present a one-shot many-to-many VC model and describe DPM it relies on ; Section 3 introduces a novel DPM sampling scheme and establishes its connection with likelihood maximization ; the experiments regarding voice conversion task as well as those demonstrating the benefits of the proposed sampling scheme are described in Section 4 ; we conclude in Section 5 . 2 VOICE CONVERSION DIFFUSION MODEL . As with many other VC models , the one we propose belongs to the family of autoencoders . In fact , any conditional DPM with data-dependent prior ( i.e . terminal distribution of forward diffusion ) can be seen as such : forward diffusion gradually adding Gaussian noise to data can be regarded as encoder while reverse diffusion trying to remove this noise acts as a decoder . DPMs are trained to minimize the distance ( expressed in different terms for different model types ) between the trajectories of forward and reverse diffusion processes thus , speaking from the perspective of autoencoders , minimizing reconstruction error . Data-dependent priors have been proposed by Popov et al . ( 2021 ) and Lee et al . ( 2021 ) , and we follow the former paper due to the flexibility of the continuous DPM framework used there . Our approach is summarized in Figure 1 . 2.1 ENCODER . We choose average phoneme-level mel features as speaker-independent speech representation . 
To train the encoder to convert input mel-spectrograms into those of “ average voice ” , we take three steps : ( i ) first , we apply the Montreal Forced Aligner ( McAuliffe et al. , 2017 ) to the large-scale multi-speaker LibriTTS dataset ( Zen et al. , 2019 ) to align speech frames with phonemes ; ( ii ) next , we obtain average mel features for each particular phoneme by aggregating its mel features across the whole LibriTTS dataset ; ( iii ) the encoder is then trained to minimize the mean square error between output mel-spectrograms and ground-truth “ average voice ” mel-spectrograms ( i.e . input mel-spectrograms where each phoneme mel feature is replaced with the average one calculated on the previous step ) . The encoder has exactly the same Transformer-based architecture used in Grad-TTS ( Popov et al. , 2021 ) except that its inputs are mel features rather than character or phoneme embeddings . Note that unlike Grad-TTS the encoder is trained separately from the decoder described in the next section . 2.2 DECODER . Whereas the encoder parameterizes the terminal distribution of the forward diffusion ( i.e . the prior ) , the reverse diffusion is parameterized with the decoder . Following Song et al . ( 2021c ) we use Itô calculus and define diffusions in terms of stochastic processes rather than discrete-time Markov chains . The general DPM framework we utilize consists of forward and reverse diffusions given by the following Stochastic Differential Equations ( SDEs ) : $dX_t = \frac{1}{2}\beta_t ( \bar{X} - X_t ) \, dt + \sqrt{\beta_t} \, d\overrightarrow{W}_t$ , ( 1 ) $d\hat{X}_t = \left( \frac{1}{2} ( \bar{X} - \hat{X}_t ) - s_\theta ( \hat{X}_t , \bar{X} , t ) \right) \beta_t \, dt + \sqrt{\beta_t} \, d\overleftarrow{W}_t$ , ( 2 ) where $t \in [ 0 , 1 ]$ , $\overrightarrow{W}$ and $\overleftarrow{W}$ are two independent Wiener processes in $\mathbb{R}^n$ , $\beta_t$ is a non-negative function referred to as the noise schedule , $s_\theta$ is the score function with parameters $\theta$ , and $\bar{X}$ is an $n$-dimensional vector . It can be shown ( Popov et al.
, 2021 ) that the forward SDE ( 1 ) allows for an explicit solution : $\mathrm{Law} ( X_t | X_0 ) = \mathcal{N} \left( e^{-\frac{1}{2}\int_0^t \beta_s ds} X_0 + \left( 1 - e^{-\frac{1}{2}\int_0^t \beta_s ds} \right) \bar{X} ,\ \left( 1 - e^{-\int_0^t \beta_s ds} \right) I \right)$ , ( 3 ) where $I$ is the $n \times n$ identity matrix . Thus , if the noise follows the linear schedule $\beta_t = \beta_0 + t ( \beta_1 - \beta_0 )$ for $\beta_0$ and $\beta_1$ such that $e^{-\int_0^1 \beta_s ds}$ is close to zero , then $\mathrm{Law} ( X_1 )$ is close to $\mathcal{N} ( \bar{X} , I )$ , which is the prior in this DPM . The reverse diffusion ( 2 ) is trained by minimizing the weighted $L_2$ loss : $\theta^* = \arg\min_\theta \mathcal{L} ( \theta ) = \arg\min_\theta \int_0^1 \lambda_t \mathbb{E}_{X_0 , X_t} \| s_\theta ( X_t , \bar{X} , t ) - \nabla \log p_{t|0} ( X_t | X_0 ) \|_2^2 \, dt$ , ( 4 ) where $p_{t|0} ( X_t | X_0 )$ is the probability density function ( pdf ) of the conditional distribution ( 3 ) and $\lambda_t = 1 - e^{-\int_0^t \beta_s ds}$ . The distribution ( 3 ) is Gaussian , so we have $\nabla \log p_{t|0} ( X_t | X_0 ) = - \frac{ X_t - X_0 e^{-\frac{1}{2}\int_0^t \beta_s ds} - \bar{X} \left( 1 - e^{-\frac{1}{2}\int_0^t \beta_s ds} \right) }{ 1 - e^{-\int_0^t \beta_s ds} }$ . ( 5 ) At training , the time variable $t$ is sampled uniformly from $[ 0 , 1 ]$ , noisy samples $X_t$ are generated according to formula ( 3 ) , and formula ( 5 ) is used to calculate the loss function $\mathcal{L}$ on these samples . Note that $X_t$ can be sampled without the necessity to calculate intermediate values $\{ X_s \}_{0 < s < t}$ , which makes the optimization task ( 4 ) time and memory efficient . A well-trained reverse diffusion ( 2 ) has trajectories that are close to those of the forward diffusion ( 1 ) , so generating data with this DPM can be performed by sampling $\hat{X}_1$ from the prior $\mathcal{N} ( \bar{X} , I )$ and solving SDE ( 2 ) backwards in time . The DPM described above was introduced by Popov et al . ( 2021 ) for the text-to-speech task , and we adapt it for our purposes . We put $\bar{X} = \varphi ( X_0 )$ where $\varphi$ is the encoder , i.e . $\bar{X}$ is the “ average voice ” mel-spectrogram which we want to transform into that of the target voice .
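Solving SDE (2) backwards from the prior is what a generic sampler does; the paper's contribution is a faster maximum-likelihood scheme, which is not reproduced here. A plain Euler-Maruyama sketch of the baseline sampler, stopping at a small `t_min` for numerical stability (all constants illustrative):

```python
import numpy as np

def int_beta(t, beta0=0.05, beta1=20.0):
    # closed form of the integral of beta_s over [0, t], linear schedule
    return beta0 * t + 0.5 * (beta1 - beta0) * t ** 2

def reverse_sde_sample(score_fn, x_bar, n_steps=200, t_min=0.05,
                       beta0=0.05, beta1=20.0, seed=0):
    """Euler-Maruyama solver for the reverse SDE (2), integrated backwards
    from t = 1 to t = t_min, starting at the prior X_1 ~ N(x_bar, I).
    score_fn(x, t) stands in for the trained score network s_theta."""
    rng = np.random.default_rng(seed)
    x = x_bar + rng.standard_normal(x_bar.shape)   # sample from the prior
    h = (1.0 - t_min) / n_steps
    for i in range(n_steps):
        t = 1.0 - i * h
        beta = beta0 + t * (beta1 - beta0)
        drift = (0.5 * (x_bar - x) - score_fn(x, t)) * beta
        # stepping from t to t - h: X_{t-h} = X_t - drift*h + sqrt(beta*h)*z
        x = x - drift * h + np.sqrt(beta * h) * rng.standard_normal(x.shape)
    return x

# exact score when the data distribution is a point mass at 5 and x_bar = 0,
# derived from equation (5); a sanity check that the sampler recovers the data
def exact_score(x, t):
    lam = 1.0 - np.exp(-int_beta(t))
    return -(x - np.exp(-0.5 * int_beta(t)) * 5.0) / lam

samples = reverse_sde_sample(exact_score, np.zeros(1000))
```

With hundreds of steps this baseline is slow, which is exactly the inference-cost problem the paper's fast sampling scheme targets.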
We condition the decoder sθ = sθ ( X̂t , X̄ , gt ( Y ) , t ) on some trainable function gt ( Y ) to provide it with information about the target speaker ( Y stands for forward trajectories of the target mel-spectrogram at inference and the ones of the training mel-spectrogram at training ) . This function is a neural network trained jointly with the decoder . We experimented with three input types for this network : • d-only – the input is the speaker embedding extracted from the target mel-spectrogram Y0 with the pre-trained speaker verification network employed in ( Jia et al. , 2018 ) ; • wodyn – in addition , the noisy target mel-spectrogram Yt is used as input ; • whole – in addition , the whole dynamics of the target mel-spectrogram under forward diffusion { Ys|s = 0.5/15 , 1.5/15 , .. , 14.5/15 } is used as input . The decoder architecture is based on U-Net ( Ronneberger et al. , 2015 ) and is the same as in GradTTS but with four times more channels to better capture the whole range of human voices . The speaker conditioning network gt ( Y ) is composed of 2D convolutions and MLPs and described in detail in Appendix H. Its output is 128-dimensional vector which is broadcast-concatenated to the concatenation of X̂t and X̄ as additional 128 channels . | The paper proposes a diffusion probabilistic model-based voice conversion method for one-shot voice conversion scenario. The proposed method can generate high-quality converted speech compared to state-of-the-art approaches. Furthermore, to improve the real-time factor, a novel stochastic differential equations solver is proposed which makes the diffusion model faster. The proposed solver is also suitable for other generative tasks. | SP:b0ee9546d10f3d86e90ae5a873a4eb524b1fee48 |
Diffusion-Based Voice Conversion with Fast Maximum Likelihood Sampling Scheme | 1 INTRODUCTION. Voice conversion (VC) is the task of copying the target speaker's voice while preserving the linguistic content of the utterance pronounced by the source speaker. Practical VC applications often require a model that can operate in one-shot mode (i.e., when only one reference utterance is provided to copy the target speaker's voice) for any source and target speakers. Such models are usually referred to as one-shot many-to-many models (sometimes zero-shot many-to-many models, or just any-to-any VC models). Building such a model is challenging since it must adapt to a new unseen voice from a single spoken utterance, so it was not until recently that successful one-shot VC solutions started to appear. Conventional one-shot VC models are designed as autoencoders whose latent space ideally contains only the linguistic content of the encoded utterance, while target voice identity information (usually taking the shape of a speaker embedding) is fed to the decoder as conditioning. Whereas the pioneering AutoVC model (Qian et al., 2019) used only a speaker embedding from a pre-trained speaker verification network as conditioning, several other models improved on AutoVC by enriching the conditioning with phonetic features such as pitch and loudness (Qian et al., 2020; Nercessian, 2020), or by training the voice conversion and speaker embedding networks jointly (Chou & Lee, 2019). Also, several papers (Lin et al., 2021; Ishihara & Saito, 2020; Liu et al., 2021b) made use of an attention mechanism to better fuse specific features of the reference utterance into the source utterance, thus improving decoder performance.
Apart from providing the decoder with sufficiently rich information, one of the main problems autoencoder VC models face is disentangling source speaker identity from speech content in the encoder. Some models (Qian et al., 2019; 2020; Nercessian, 2020) solve this problem by introducing an information bottleneck. Other popular solutions to the disentanglement problem include applying vector quantization to the content information (Wu et al., 2020; Wang et al., 2021), utilizing features of Variational AutoEncoders (Luong & Tran, 2021; Saito et al., 2018; Chou & Lee, 2019), introducing instance normalization layers (Chou & Lee, 2019; Chen et al., 2021b), and using Phonetic Posteriorgrams (PPGs) (Nercessian, 2020; Liu et al., 2021b). The model we propose in this paper solves the disentanglement problem by employing an encoder that predicts an “average voice”: it is trained to transform the mel features corresponding to each phoneme into the mel features of that phoneme averaged across a large multi-speaker dataset. As for the decoder, in our VC model it is designed as part of a Diffusion Probabilistic Model (DPM), since this class of generative models has shown very good results in speech-related tasks like raw waveform generation (Chen et al., 2021a; Kong et al., 2021) and mel feature generation (Popov et al., 2021; Jeong et al., 2021). However, this decoder choice poses a problem of slow inference, because the DPM sampling scheme is iterative and obtaining high-quality results typically requires running it for hundreds of iterations (Ho et al., 2020; Nichol & Dhariwal, 2021). Addressing this issue, we develop a novel inference scheme that significantly reduces the number of iterations sufficient to produce samples of decent quality and does not require model re-training.
Although several attempts have recently been made to reduce the number of DPM inference steps (Song et al., 2021a; San-Roman et al., 2021; Watson et al., 2021; Kong & Ping, 2021; Chen et al., 2021a), most of them apply only to particular types of DPMs. In contrast, our approach generalizes to all popular kinds of DPMs and has a strong connection with likelihood maximization. This paper has the following structure: in Section 2 we present a one-shot many-to-many VC model and describe the DPM it relies on; Section 3 introduces a novel DPM sampling scheme and establishes its connection with likelihood maximization; the experiments on the voice conversion task, as well as those demonstrating the benefits of the proposed sampling scheme, are described in Section 4; we conclude in Section 5. 2 VOICE CONVERSION DIFFUSION MODEL. As with many other VC models, the one we propose belongs to the family of autoencoders. In fact, any conditional DPM with a data-dependent prior (i.e., the terminal distribution of the forward diffusion) can be seen as such: the forward diffusion gradually adding Gaussian noise to the data can be regarded as an encoder, while the reverse diffusion trying to remove this noise acts as a decoder. DPMs are trained to minimize the distance (expressed in different terms for different model types) between the trajectories of the forward and reverse diffusion processes, thus, from the autoencoder perspective, minimizing reconstruction error. Data-dependent priors have been proposed by Popov et al. (2021) and Lee et al. (2021), and we follow the former paper due to the flexibility of the continuous DPM framework used there. Our approach is summarized in Figure 1. 2.1 ENCODER. We choose average phoneme-level mel features as a speaker-independent speech representation.
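As a concrete illustration, computing such average phoneme-level mel features and the corresponding “average voice” targets can be sketched as follows. This is a minimal sketch, not the authors' code: it assumes frame-level phoneme alignments are already available (e.g., from a forced aligner), and all names are illustrative.

```python
import numpy as np

def phoneme_mel_averages(mels, phoneme_ids, n_phonemes):
    """Average mel frames per phoneme across a dataset.

    mels: list of [T_i, n_mels] arrays; phoneme_ids: list of [T_i] int arrays
    giving the phoneme id of each frame (frame-level alignments).
    """
    n_mels = mels[0].shape[1]
    sums = np.zeros((n_phonemes, n_mels))
    counts = np.zeros(n_phonemes)
    for mel, ids in zip(mels, phoneme_ids):
        np.add.at(sums, ids, mel)     # unbuffered accumulation per phoneme
        np.add.at(counts, ids, 1)
    counts = np.maximum(counts, 1)    # guard against phonemes never observed
    return sums / counts[:, None]

def average_voice_target(phoneme_ids, averages):
    """Replace every frame with the average mel feature of its phoneme."""
    return averages[phoneme_ids]
```

The encoder's MSE target for an utterance is then simply `average_voice_target` applied to that utterance's alignment.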
To train the encoder to convert input mel-spectrograms into those of the “average voice”, we take three steps: (i) first, we apply the Montreal Forced Aligner (McAuliffe et al., 2017) to the large-scale multi-speaker LibriTTS dataset (Zen et al., 2019) to align speech frames with phonemes; (ii) next, we obtain average mel features for each particular phoneme by aggregating its mel features across the whole LibriTTS dataset; (iii) the encoder is then trained to minimize the mean square error between output mel-spectrograms and ground-truth “average voice” mel-spectrograms (i.e., input mel-spectrograms where each phoneme's mel features are replaced with the average ones calculated at the previous step). The encoder has exactly the same Transformer-based architecture as in Grad-TTS (Popov et al., 2021), except that its inputs are mel features rather than character or phoneme embeddings. Note that, unlike in Grad-TTS, the encoder is trained separately from the decoder described in the next section. 2.2 DECODER. Whereas the encoder parameterizes the terminal distribution of the forward diffusion (i.e., the prior), the reverse diffusion is parameterized with the decoder. Following Song et al. (2021c), we use Itô calculus and define diffusions in terms of stochastic processes rather than discrete-time Markov chains. The general DPM framework we utilize consists of forward and reverse diffusions given by the following Stochastic Differential Equations (SDEs): $dX_t = \frac{1}{2}\beta_t(\bar{X} - X_t)\,dt + \sqrt{\beta_t}\,d\overrightarrow{W}_t$, (1) and $d\hat{X}_t = \left(\frac{1}{2}(\bar{X} - \hat{X}_t) - s_\theta(\hat{X}_t, \bar{X}, t)\right)\beta_t\,dt + \sqrt{\beta_t}\,d\overleftarrow{W}_t$, (2) where $t \in [0, 1]$, $\overrightarrow{W}$ and $\overleftarrow{W}$ are two independent Wiener processes in $\mathbb{R}^n$, $\beta_t$ is a non-negative function referred to as the noise schedule, $s_\theta$ is the score function with parameters $\theta$, and $\bar{X}$ is an $n$-dimensional vector. It can be shown (Popov et al.
, 2021) that the forward SDE (1) admits an explicit solution: $\mathrm{Law}(X_t \mid X_0) = \mathcal{N}\!\left(e^{-\frac{1}{2}\int_0^t \beta_s ds} X_0 + \left(1 - e^{-\frac{1}{2}\int_0^t \beta_s ds}\right)\bar{X},\ \left(1 - e^{-\int_0^t \beta_s ds}\right) I\right)$, (3) where $I$ is the $n \times n$ identity matrix. Thus, if the noise follows a linear schedule $\beta_t = \beta_0 + t(\beta_1 - \beta_0)$ with $\beta_0$ and $\beta_1$ such that $e^{-\int_0^1 \beta_s ds}$ is close to zero, then $\mathrm{Law}(X_1)$ is close to $\mathcal{N}(\bar{X}, I)$, which is the prior in this DPM. The reverse diffusion (2) is trained by minimizing a weighted $L_2$ loss: $\theta^* = \arg\min_\theta \mathcal{L}(\theta) = \arg\min_\theta \int_0^1 \lambda_t \mathbb{E}_{X_0, X_t}\left\|s_\theta(X_t, \bar{X}, t) - \nabla \log p_{t|0}(X_t \mid X_0)\right\|_2^2 dt$, (4) where $p_{t|0}(X_t \mid X_0)$ is the probability density function (pdf) of the conditional distribution (3) and $\lambda_t = 1 - e^{-\int_0^t \beta_s ds}$. The distribution (3) is Gaussian, so we have $\nabla \log p_{t|0}(X_t \mid X_0) = -\dfrac{X_t - X_0 e^{-\frac{1}{2}\int_0^t \beta_s ds} - \bar{X}\left(1 - e^{-\frac{1}{2}\int_0^t \beta_s ds}\right)}{1 - e^{-\int_0^t \beta_s ds}}$. (5) At training time, the variable $t$ is sampled uniformly from $[0, 1]$, noisy samples $X_t$ are generated according to formula (3), and formula (5) is used to calculate the loss function $\mathcal{L}$ on these samples. Note that $X_t$ can be sampled without calculating the intermediate values $\{X_s\}_{0<s<t}$, which makes the optimization task (4) time- and memory-efficient. A well-trained reverse diffusion (2) has trajectories close to those of the forward diffusion (1), so generating data with this DPM can be performed by sampling $\hat{X}_1$ from the prior $\mathcal{N}(\bar{X}, I)$ and solving SDE (2) backwards in time. The DPM described above was introduced by Popov et al. (2021) for the text-to-speech task, and we adapt it for our purposes. We put $\bar{X} = \varphi(X_0)$, where $\varphi$ is the encoder, i.e., $\bar{X}$ is the “average voice” mel-spectrogram which we want to transform into that of the target voice.
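Formulas (3) and (5) — closed-form sampling of $X_t$ and the score regression target — can be sketched numerically as follows. This is a minimal sketch under a linear noise schedule; the constants `BETA0` and `BETA1` are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

# Linear noise schedule beta_t = beta0 + t*(beta1 - beta0);
# the constants below are assumed for illustration only.
BETA0, BETA1 = 0.05, 20.0

def int_beta(t):
    """Closed-form integral of beta_s over [0, t] for the linear schedule."""
    return BETA0 * t + 0.5 * (BETA1 - BETA0) * t ** 2

def sample_xt(x0, x_bar, t, rng):
    """Draw X_t ~ Law(X_t | X_0) from eq. (3) in one shot,
    without simulating intermediate states {X_s}."""
    a = np.exp(-0.5 * int_beta(t))              # e^{-1/2 * int_0^t beta}
    mean = a * x0 + (1.0 - a) * x_bar
    var = 1.0 - np.exp(-int_beta(t))
    return mean + np.sqrt(var) * rng.standard_normal(x0.shape)

def score_target(xt, x0, x_bar, t):
    """grad log p_{t|0}(X_t | X_0) from eq. (5): the regression target
    that s_theta is fitted to in the loss (4)."""
    a = np.exp(-0.5 * int_beta(t))
    var = 1.0 - np.exp(-int_beta(t))
    return -(xt - a * x0 - (1.0 - a) * x_bar) / var
```

With this schedule $e^{-\int_0^1 \beta_s ds} \approx 4\cdot10^{-5}$, so $\mathrm{Law}(X_1)$ is indeed close to $\mathcal{N}(\bar{X}, I)$, matching the prior argument above.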
We condition the decoder $s_\theta = s_\theta(\hat{X}_t, \bar{X}, g_t(Y), t)$ on a trainable function $g_t(Y)$ to provide it with information about the target speaker ($Y$ stands for the forward trajectories of the target mel-spectrogram at inference and those of the training mel-spectrogram at training). This function is a neural network trained jointly with the decoder. We experimented with three input types for this network: • d-only – the input is the speaker embedding extracted from the target mel-spectrogram $Y_0$ with the pre-trained speaker verification network employed in (Jia et al., 2018); • wodyn – in addition, the noisy target mel-spectrogram $Y_t$ is used as input; • whole – in addition, the whole dynamics of the target mel-spectrogram under the forward diffusion $\{Y_s \mid s = 0.5/15, 1.5/15, \ldots, 14.5/15\}$ is used as input. The decoder architecture is based on U-Net (Ronneberger et al., 2015) and is the same as in Grad-TTS but with four times more channels to better capture the whole range of human voices. The speaker conditioning network $g_t(Y)$ is composed of 2D convolutions and MLPs and is described in detail in Appendix H. Its output is a 128-dimensional vector which is broadcast-concatenated to the concatenation of $\hat{X}_t$ and $\bar{X}$ as additional 128 channels. | This paper tackles the problem of one-shot many-to-many voice conversion (with both unseen source and target speakers) using a diffusion model on mel-spectrograms. The authors propose a dedicated architecture able to generalize to unseen speakers naturally (without relying on Phonetic Posteriorgrams (PPGs) as in previous works) by conditioning the diffusion model on "average-phoneme" spectrograms together with the target speech. They also address the important problem of the errors arising from using the Euler-Maruyama scheme with large discretization steps by proposing a novel SDE solver, allowing voice conversion with as few as 6 diffusion steps.
They demonstrate the applicability of this solver to other modalities (CIFAR-10) using an unconditional model. Extensive and rigorous experiments are conducted on the voice conversion task, and an accompanying web page is provided with numerous and convincing voice conversions. | SP:b0ee9546d10f3d86e90ae5a873a4eb524b1fee48 |
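As an aside on the speaker conditioning described in Section 2.2 above, broadcast-concatenating a fixed-size conditioning vector onto spectrogram features can be sketched as follows. This is a minimal sketch: the shapes and the treatment of mel bins as channels are assumptions for illustration, not the paper's exact layout.

```python
import numpy as np

def broadcast_concat(x_hat_t, x_bar, g):
    """Broadcast a conditioning vector g over time and stack as extra channels.

    x_hat_t, x_bar: [n_mels, T] mel-spectrograms; g: [128] speaker embedding
    (the 128-dim output of g_t(Y) mentioned in the text).
    Returns an array of shape [2*n_mels + 128, T].
    """
    g_map = np.repeat(g[:, None], x_hat_t.shape[1], axis=1)   # [128, T]
    return np.concatenate([x_hat_t, x_bar, g_map], axis=0)
```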
AASEG: ATTENTION AWARE NETWORK FOR REAL TIME SEMANTIC SEGMENTATION | 1 INTRODUCTION. Semantic segmentation is a fundamental problem in computer vision in which the goal is to assign every pixel in an image to its appropriate class. Since such models must be deployed in real-world settings like robots and autonomous vehicles, there is a need to balance the speed-versus-performance tradeoff. The Fully Convolutional Network (FCN) (Long et al., 2015), composed of convolutional layers, was one of the first works to obtain strong semantic representations. However, it was not able to capture boundary information accurately. Atrous convolutions (Yu and Koltun, 2015) applied at the last several stages of the network yield feature maps with strong semantic representation, thus addressing this problem of FCN-based architectures; however, this comes at the cost of increased computational complexity. For real-world navigation tasks like autonomous driving, there is a need to improve the frames per second (FPS). Chen et al. (2017) and (Yu and Koltun, 2015) used dilated convolutions to increase the receptive field while maintaining the number of parameters. SegNet (Badrinarayanan et al., 2017) utilizes a small network structure and skip connections to achieve improved FPS. (Mehta et al., 2018), (Paszke et al., 2016) and (Poudel et al., 2019) proposed unique approaches to tackle the real-time semantic segmentation problem. 2 RELATED WORK. (Fu et al., 2019) introduces spatial-wise and channel-wise attention modules to enhance the receptive field. (Paszke et al., 2016) trims a lot of convolution filters to reduce computation. ICNet (Zhao et al., 2018a) proposed an image cascade network using multi-resolution branches. (Iandola et al., 2016) allows the neural network to find the critical channels of the feature map and select the most suitable channels by itself. ESPNet (Mehta et al.
, 2018) introduces an efficient spatial pyramid (ESP) module, which brings great improvement in both speed and performance. The Bilateral Segmentation Network (BiSeNet) (Yu et al., 2018a) uses two parts: a Spatial Path (SP) to preserve spatial information and a Context Path (CP) to obtain a sufficient receptive field. (Yu et al., 2020) used a multi-path framework to combine low-level details and high-level semantics. (Li et al., 2019b) utilizes a lightweight backbone to speed up its network and multi-scale feature aggregation to improve accuracy. SwiftNet (Orsic et al., 2019) used lateral connections to restore the prediction resolution while maintaining speed. (Lin et al., 2017) uses a multi-path refinement network to refine the features but ignores the global context feature. The speed–accuracy comparison of state-of-the-art methods on the Cityscapes test set is shown in Figure 1. We summarize our main contributions as follows: • We propose a novel Attention Aware Network (AASeg) for real-time semantic segmentation. • Our network comprises three parts: a Spatial Attention (SA) module to capture the spatial dependencies of the feature maps, a Channel Attention (CA) module to extract high-level semantic information, and a Multi Scale Context (MSC) module to learn the information flow between feature maps of consecutive levels. • Detailed experiments and analysis indicate the efficacy of our proposed network in improving not only performance but also FPS. We achieve results on par with the previous state of the art on the Cityscapes, CamVid and ADE20K datasets. 3 BACKGROUND. 3.1 SPATIAL INFORMATION. The spatial information of the image is important for predicting detailed output in semantic segmentation. Existing approaches encode spatial information in different ways. DUC (Wang et al., 2018), PSPNet (Zhao et al., 2017) and DeepLab v2 (Chen et al.
, 2017) use dilated convolutions to preserve the spatial size of the feature map. 3.2 CONTEXT INFORMATION. Semantic segmentation requires context information to generate high-quality results. Most methods enlarge the receptive field or fuse different contextual information. (Chen et al., 2017), (Wang et al., 2018) and (Yu and Koltun, 2015) use different dilation rates in convolution layers to capture multi-scale contextual information. In (Chen et al., 2017), an “ASPP” module is used to capture context information at different receptive fields. PSPNet (Zhao et al., 2017) applies a “PSP” module which contains average pooling layers at several different scales. 3.3 ATTENTION MECHANISM. (Hu et al., 2018) applied channel attention to image recognition and achieved state-of-the-art results. (Yu et al., 2018b) proposed a network that learns the global context as attention and revises the features. A multi-scale network with a custom attention module was proposed by (Sagar and Soundrapandiyan, 2020) for semantic segmentation. 3.4 FEATURE FUSION. The spatial information captured by the Spatial Path contains rich detail, while the output feature of the Context Path consists of contextual information. Feature fusion was used to achieve state-of-the-art results on image classification, object detection and instance segmentation in DMSANet (Sagar, 2021b). 4 PROPOSED METHOD. 4.1 DATASET. The following datasets are used to benchmark our results: 1. Cityscapes: used for urban street-scene segmentation. The 5000 annotated images are used in our experiments, divided into 2975, 500 and 1525 images for training, validation and testing, respectively. 2. ADE20K: contains labels for 150 object categories. The dataset includes 20k, 2k and 3k images for training, validation and testing, respectively. 3.
CamVid: used for semantic segmentation in autonomous driving scenarios. It is composed of 701 densely annotated images. 4.2 NETWORK ARCHITECTURE. The fundamental goal in semantic segmentation is to map an RGB image $X \in \mathbb{R}^{H \times W \times 3}$ to a semantic map $Y \in \mathbb{R}^{H \times W \times C}$ with the same spatial resolution $H \times W$, where $C$ is the number of object classes present in the image. The input image $X$ is converted to a set of feature maps $F_l$, where $l = 1, \ldots, 3$ indexes the network stages and $F_l \in \mathbb{R}^{H_l \times W_l \times C_l}$ is a $C_l$-dimensional feature map. Unlike many previous architectures, our network doesn't use any backbone to extract features from the input image. The input image is first passed through a block comprising a convolutional layer, batch normalization and the ReLU activation function. We express a convolution layer $W_n(x)$ as follows: $W_n(x) = W_{n \times n} * x + b$, (1) where $*$ represents the convolution operator, $W_{n \times n}$ represents the $n \times n$ convolutional kernel, $x$ represents the input data and $b$ represents the bias vector. 4.3 MULTI SCALE CONTEXT MODULE. The feature map, after passing through the convolutional block, is split into three parts processed with a 1×1 convolution, a 3×3 convolution and a 5×5 convolution, respectively. The individual feature maps are then fused together. The fused feature map is first convolved with a 1×1 convolution to reduce the number of channels from 2048 to 256. The feature map produced is of size $H \times W \times n_c$, where $H$, $W$ are the height and width of the feature map and $n_c$ denotes the number of channels. The input feature map is then convolved with dilated convolution layers with increasing dilation rates of 3, 6 and 12. The input to each dilated convolution layer is formed by concatenating the input feature map with the outputs of the previous convolutions. At the final step, the outputs of the three dilated convolutions are concatenated with the input feature map. 4.4 SPATIAL ATTENTION MODULE.
The spatial attention module captures the spatial dependencies of the feature maps. The spatial attention (SA) module in our network is defined as: $f_{SA}(x) = f_{sigmoid}(W_2(f_{ReLU}(W_1(x))))$, (2) where $W_1$ and $W_2$ denote the first and second 1×1 convolution layers, respectively, $x$ denotes the input data, $f_{sigmoid}$ denotes the sigmoid function and $f_{ReLU}$ denotes the ReLU activation function. The spatial attention module used in this work is shown in Figure 2. 4.5 CHANNEL ATTENTION MODULE. The channel attention module extracts high-level multi-scale semantic information. The channel attention (CA) module in our network is defined as: $f_{CA}(x) = f_{sigmoid}(W_2(f_{ReLU}(W_1 f^1_{AvgPool}(x))))$, (3) where $W_1$ and $W_2$ denote the first and second 1×1 convolution layers, $x$ denotes the input data and $f^1_{AvgPool}$ denotes the global average pooling function. The channel attention module used in this work is shown in Figure 3. 4.6 AGGREGATION. We denote the concatenation operation as: $x_{concat} = x_1 \oplus x_2 \oplus x_3$, (4) where $\oplus$ represents the concatenation operator and $x_1$, $x_2$ and $x_3$ represent the features of the three branches. The AASeg module can be denoted as: $x_{AASeg} = (f_{SA}(x_{concat}) \otimes x_{concat}) \oplus (f_{CA}(x_{concat}) \otimes x_{concat}) \oplus (f_{MSC}(x_{concat}) \otimes x_{concat})$, (5) where $\oplus$ represents the concatenation operator, $\otimes$ denotes element-wise multiplication, $f_{SA}$ represents the spatial attention module of Equation 2, $f_{CA}$ represents the channel attention module of Equation 3, $f_{MSC}$ represents the multi-scale context module and $x_{concat}$ represents the combined feature. We use $ConvX_i$ to denote the operations of the $i$-th block.
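The SA and CA modules of Equations 2 and 3, together with the aggregation of Equation 5, can be sketched as follows. This is a minimal sketch, not the trained network: 1×1 convolutions are written as channel-mixing matrix products, and the weight shapes and hidden width are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    return np.maximum(z, 0.0)

def spatial_attention(x, w1, w2):
    """Eq. (2): two 1x1 convolutions (channel-mixing matmuls) with ReLU in
    between and a sigmoid on top. x: [C, H, W]; w1: [C_h, C]; w2: [C, C_h]."""
    h = relu(np.einsum('oc,chw->ohw', w1, x))
    return sigmoid(np.einsum('co,ohw->chw', w2, h))   # per-pixel map [C, H, W]

def channel_attention(x, w1, w2):
    """Eq. (3): same MLP applied after global average pooling, giving
    per-channel weights of shape [C, 1, 1]."""
    pooled = x.mean(axis=(1, 2), keepdims=True)       # global average pooling
    h = relu(np.einsum('oc,chw->ohw', w1, pooled))
    return sigmoid(np.einsum('co,ohw->chw', w2, h))

def aaseg_aggregate(x, sa, ca, msc):
    """Eq. (5): attention-reweighted copies concatenated along channels."""
    return np.concatenate([sa * x, ca * x, msc * x], axis=0)
```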
Therefore, the output of the $i$-th block is calculated as: $x_i = ConvX_i(x_{i-1}, k_i)$, (6) where $ConvX$ includes one convolutional layer, one batch normalization layer and one ReLU activation layer, $k_i$ is the kernel size of the convolutional layer, and $x_{i-1}$ and $x_i$ are the input and output of the $i$-th block. A fusion operation is used to combine high-level features with low-level features as in Equation 7: $x_{output} = F(x_1, x_2, \ldots, x_n)$. (7) The overall structure of our proposed AASeg network is shown in Figure 4. 4.7 LOSS FUNCTIONS. We use a cross-entropy loss to measure the difference between the network's prediction and the ground truth of the samples. The cross-entropy loss is defined in Equation 8: $L_{ce} = -\frac{1}{N}\sum_{n=1}^{N}\left[y_n \log \hat{y}_n + (1 - y_n)\log(1 - \hat{y}_n)\right]$, (8) where $N$ denotes the total number of samples, $y_n$ is the ground-truth label and $\hat{y}_n$ is the predicted probability that the sample is positive. We also use an auxiliary supervision loss $L_{aux}$ to improve model performance and ease optimization: $L_{aux} = -\frac{1}{BN}\sum_{i=1}^{B}\sum_{j=1}^{N}\sum_{k=1}^{K} I(g_{ij} = k)\log\dfrac{\exp(p_{ij,k})}{\sum_{m=1}^{K}\exp(p_{ij,m})}$, (9) with $I(g_{ij} = k) = 1$ if $g_{ij} = k$ and $0$ otherwise, (10) where $B$ is the mini-batch size, $N$ is the number of pixels in every batch, $K$ is the number of categories, $p_{ij,k}$ is the prediction of the $j$-th pixel in the $i$-th sample for the $k$-th class and $I(g_{ij} = k)$ is the indicator function defined in Equation 10. The class attention loss $L_{cls}$ from the channel attention module is also used: $L_{cls} = -\frac{1}{BN}\sum_{i=1}^{B}\sum_{j=1}^{N}\sum_{k=1}^{K} I(g_{ij} = k)\log\dfrac{\exp(a_{ij,k})}{\sum_{m=1}^{K}\exp(a_{ij,m})}$, (11) where $a_{ij,k}$ is the value of the class attention map at the $j$-th pixel in the $i$-th sample for the $k$-th class.
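The per-pixel losses of Equations 9 and 11 share the same softmax cross-entropy form: the indicator $I(g_{ij}=k)$ simply selects the log-probability of the true class. A minimal sketch (the array layout is an assumption for illustration):

```python
import numpy as np

def pixel_softmax_ce(logits, labels):
    """Mean per-pixel softmax cross-entropy, as in eqs. (9) and (11).

    logits: [B, N, K] class scores per pixel (p_ij,k or a_ij,k);
    labels: [B, N] integer ground-truth classes g_ij.
    """
    z = logits - logits.max(axis=-1, keepdims=True)   # numerical stability
    log_prob = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    b, n = labels.shape
    # Indicator I(g_ij = k): pick the true-class log-probability per pixel.
    picked = log_prob[np.arange(b)[:, None], np.arange(n)[None, :], labels]
    return -picked.mean()
```

Passing the class attention map values in place of the segmentation logits gives $L_{cls}$; passing the auxiliary head's predictions gives $L_{aux}$.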
We combine the three terms into the final loss as follows: $L = \lambda_1 L_{ce} + \lambda_2 L_{cls} + \lambda_3 L_{aux}$, (12) where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are set to 1, 0.5 and 0.5 to balance the loss. | This manuscript proposes an Attention Aware Network for real-time semantic segmentation. This network is designed from scratch, which contains a spatial attention module, a channel attention module and a multi scale context module. It achieves an impressive tradeoff between accuracy and inference speed on three datasets. However, this manuscript seems like an unfinished tech report. The motivation is not clear. The proposed method is not novel. The details are missing. The writing and organization are poor. There are so many problems in this manuscript. | SP:f3213d13f562d5b9464582d7a66c483fe6bf956d |
AASEG: ATTENTION AWARE NETWORK FOR REAL TIME SEMANTIC SEGMENTATION | 1 INTRODUCTION . Semantic Segmentation is a fundamental problem in computer vision where the goal is to label each and every pixel in the image to its appropriate class . Since it is required to be deployed in real world settings like robots and autonomous vehicles , hence there is a need to balance the speed vs performance tradeoff . Fully Convolutional Network ( FCN ) ( Long et al. , 2015 ) composed of convolutional layers was one of the first works to get strong semantic representation . However , this method was not able to capture boundary information accurately . Atrous convolutions ( Yu and Koltun , 2015 ) at the last several stages of their network was used to give feature maps with strong semantic representation , thus solving the problem with FCN based architectures . However , this comes at at the cost of increased computational complexity . For real world navigation tasks like autonomous driving , there is a need to improve the Frame Per Second ( FPS ) . Chen et al . ( 2017 ) and ( Yu and Koltun , 2015 ) used dilated convolutions to increase the recpetive field while maintaining the number of parameters . SegNet ( Badrinarayanan et al. , 2017 ) utilizes a small network structure and the skip-connected method to achieve improved FPS . ( Mehta et al. , 2018 ) , ( Paszke et al. , 2016 ) and ( Poudel et al. , 2019 ) proposed unique approaches to tackle real time semantic segmentation problem . 2 RELATED WORK . ( Fu et al. , 2019 ) introduces spatial-wise and channelwise attention modules to enhance the recpetive field ( Paszke et al. , 2016 ) trims a a lot of convolution filters to reduce computation . ICNet ( Zhao et al. , 2018a ) proposed an image cascade network using multiresolution branches . ( Iandola et al. , 2016 ) allows the neural network to find the critical channels of the feature map and select the most suitable channels by itself . ESPNet ( Mehta et al. 
, 2018 ) introduces an efficient spatial pyramid ( ESP ) , which brings great improvement in both speed and performance . Bilateral Segmentation Network ( BiSeNet ) ( Yu et al. , 2018a ) used two parts : Spatial Path ( SP ) is used to get with the loss of spatial information and Context Path ( CP ) for compressing the receptive field . ( Yu et al. , 2020 ) used multi-path framework to combine the low-level details and high-level semantics . ( Li et al. , 2019b ) utilizes a light-weight backbone to speed up its network and a multi scale feature aggregation to improve accuracy . SwiftNet ( Orsic et al. , 2019 ) used lateral connections to restore the prediction resolution while maintaining the speed . ( Lin et al. , 2017 ) uses a multipath refinement network to refine the feature but ignores the global context feature . The speed-Accuracy performance comparison of state of the art methods on the Cityscapes test set is shown in Figure 1 : . We summarize our main contributions as follows : • We propose a novel Attention Aware Network ( AASeg ) for real time semantic segmentation . • Our network is comprised of three parts : Spatial Attention ( SA ) module to capture the spatial dependencies of the feature maps , Channel Attention ( CA ) module to extract high level semantic information and Multi Scale Context ( MSC ) module to learn the information flow between feature maps of consecutive levels . • Detailed experiments and analysis indicate the efficacy of our proposed network in not only improving the performance but also FPS . We achieve results on par with previous state of the art using Cityscapes , Camvid and on ADE20K datasets . 3 BACKGROUND . 3.1 SPATIAL INFORMATION . The spatial information of the image is important to predict the detailed output for semantic segmentation . Modern existing approaches uses encoding of spatial information . DUC ( Wang et al. , 2018 ) , PSPNet ( Zhao et al. , 2017 ) , DeepLab v2 ( Chen et al. 
, 2017 ) use the dilated convolution to preserve the spatial size of the feature map . 3.2 CONTEXT INFORMATION . Semantic segmentation requires context information to generate a high-quality result . Most of the used methods enlarge the receptive field or fuse different contextual information . ( Chen et al. , 2017 ) , ( Wang et al. , 2018 ) and ( Yu and Koltun , 2015 ) use different dilation rates in convolution layers to capture different scale contextual information . In ( Chen et al. , 2017 ) , an “ ASPP ” module is used to capture context information of different receptive field . PSPNet ( Zhao et al. , 2017 ) applies a “ PSP ” module which contains several different scales of average pooling layers . 3.3 ATTENTION MECHANISM . ( Hu et al. , 2018 ) applied channel attention for image recognition and achieve the state-of-the-art . ( Yu et al. , 2018b ) proposed a network that learns the global context as attention and revise the features . Multi scale network along with a custom attention module was proposed by ( Sagar and Soundrapandiyan , 2020 ) for semantic segmentation . 3.4 FEATURE FUSION . The spatial information captured by the Spatial Path consists of rich detailed information . The output feature of the Context Path is made up of contextual information . Feature fusion was used to achieve state of the art results on image classification , object detection and instance segmentation using DMSANet ( Sagar , 2021b ) . 4 PROPOSED METHOD . 4.1 DATASET . The following datasets have been used to benchmark our results : 1 . Cityscapes It is used for urban street segmentation . The 5000 annotated images are used in our experiments which are divided into 2975 , 500 and 1525 images for training , validation , and testing respectively . 2 . ADE20K This dataset contains labels of 150 object categories . The dataset includes 20k,2k and 3k images for training , validation and testing respectively . 3 . 
CamVid This dataset is used for semantic segmentation for autonomous driving scenarios . It is composed of 701 densely annotated images . 4.2 NETWORK ARCHITECTURE . The fundamental goal in semantic segmentation is to map an RGB image X ∈ RH×W×3 to a semantic map Y ∈ RH×W×C with the same spatial resolution H ×W , where C is the number of classes of objects present in image . The input image X is converted to a set of feature maps Fl where l=1 , ... ,3 from each network stage , where Fl ∈ RHlWlCl is a Cl-dimensional feature map . Our network dosen ’ t use any backbone to extract features from the input image unlike many previous architectures . The input image is first passed through a block comprising of convolutional , batch normalization and ReLU activation function . We express a convolution layer Wn ( x ) as follows : Wn ( x ) = Wn×n x+ b ( 1 ) where represents the convolution operator , Wn×n represents the n × n convolutional kernel , x represents the input data and b represents the bias vector . 4.3 MULTI SCALE CONTEXT MODULE . The feature map after passing through convolutional block is split to three different parts with 1×1 convolution , 3×3 convolution and 5×5 convolution respectively . The individual feature maps are then fused together . The output fused feature map is first convolved with a 1 × 1 convolution to reduce the number of channels from 2048 to 256 . The feature map produced is of size H ×W × nc , where H , W are the height and width of the feature map , and nc denotes the number of channels . The input feature map is convolved with dilated convolution layers with increasing dilation rates of 3 , 6 and 12 . The dilated convolution layer input at every stage is formed by concatenating the input feature map with the outputs from previous convolutions . At the final step , the outputs from the three dilated convolutions are concatenated with the input feature map . 4.4 SPATIAL ATTENTION MODULE . 
The spatial attention module is used for capturing the spatial dependencies of the feature maps. The spatial attention (SA) module in our network is defined below: fSA(x) = fsigmoid(W2(fReLU(W1(x)))) (2) where W1 and W2 denote the first and second 1×1 convolution layers respectively, x denotes the input data, fsigmoid denotes the sigmoid function and fReLU denotes the ReLU activation function. The spatial attention module used in this work is shown in Figure 2. 4.5 CHANNEL ATTENTION MODULE. The channel attention module is used for extracting high-level multi-scale semantic information. The channel attention (CA) module in our network is defined below: fCA(x) = fsigmoid(W2(fReLU(W1(fAvgPool(x))))) (3) where W1 and W2 denote the first and second 1 × 1 convolution layers and x denotes the input data. fAvgPool denotes the global average pooling function, fsigmoid denotes the sigmoid function and fReLU denotes the ReLU activation function. The channel attention module used in this work is shown in Figure 3. 4.6 AGGREGATION. We denote the concatenation operation as follows: xconcat = x1 ⊕ x2 ⊕ x3 (4) where ⊕ represents the concatenation operator and x1, x2 and x3 represent the features of the three branches. The AASeg module can be denoted as follows: xAASeg = ((fSA(xconcat) ⊗ xconcat) ⊕ (fCA(xconcat) ⊗ xconcat) ⊕ (fMSC(xconcat) ⊗ xconcat)) (5) where ⊕ represents the concatenation operator, fSA represents the spatial attention module of Equation 2, fCA represents the channel attention module of Equation 3, fMSC represents the multi-scale context module and xconcat represents the combined feature. We use ConvXi to denote the operations of the ith block.
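The two attention modules of Equations 2 and 3 can be sketched in PyTorch as below. The channel reduction ratio inside the pair of 1×1 convolutions is our assumption (the text does not specify intermediate widths), so treat this as a hedged sketch, not the authors' exact design.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Eq. (2): f_SA(x) = sigmoid(W2(ReLU(W1(x)))) with two 1x1 convs."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.w1 = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.w2 = nn.Conv2d(channels // reduction, channels, kernel_size=1)

    def forward(self, x):
        # per-pixel attention map, same spatial size as the input
        return torch.sigmoid(self.w2(torch.relu(self.w1(x))))

class ChannelAttention(nn.Module):
    """Eq. (3): f_CA(x) = sigmoid(W2(ReLU(W1(AvgPool(x)))))."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.w1 = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.w2 = nn.Conv2d(channels // reduction, channels, kernel_size=1)

    def forward(self, x):
        # one attention weight per channel
        return torch.sigmoid(self.w2(torch.relu(self.w1(self.pool(x)))))

x = torch.randn(2, 64, 8, 8)
sa, ca = SpatialAttention(64), ChannelAttention(64)
print(sa(x).shape, ca(x).shape)
```

The SA branch produces a full-resolution map while the CA branch collapses to one weight per channel; both are multiplied element-wise (⊗, with broadcasting for CA) into x_concat as in Equation 5.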
Therefore, the output of the ith block is calculated as follows: xi = ConvXi(xi−1, ki) (6) where ConvX includes one convolutional layer, one batch normalization layer and one ReLU activation layer, ki is the kernel size of the convolutional layer, and xi−1 and xi are the input and output of the ith block. A fusion operation is used to combine high-level features with low-level features using Equation 7: xoutput = F(x1, x2, . . . , xn) (7) The overall structure of our proposed AASeg network is shown in Figure 4. 4.7 LOSS FUNCTIONS. We use the cross-entropy loss function to weigh the difference between the forward propagation result of the network and the ground truth of the samples. The cross-entropy loss is calculated as defined in Equation 8: Lce = −(1/N) Σ_{n=1}^{N} [yn log ŷn + (1 − yn) log(1 − ŷn)] (8) where N denotes the total number of samples, yn denotes the probability that the forward propagation result is true, and 1 − yn denotes the probability that the forward propagation result is false. We also use an auxiliary supervision loss Laux to improve the model performance and make it easier to optimize. The auxiliary loss can be defined as: Laux = −(1/BN) Σ_{i=1}^{B} Σ_{j=1}^{N} Σ_{k=1}^{K} I(gij = k) log [exp(pij,k) / Σ_{m=1}^{K} exp(pij,m)] (9) I(gij = k) = { 1, gij = k; 0, otherwise } (10) where B is the mini-batch size, N is the number of pixels in every batch, K is the number of categories, pij,k is the prediction of the jth pixel in the ith sample for the kth class, and I(gij = k) is the indicator function defined in Equation 10. The class attention loss Lcls from the channel attention module is also used. The class attention loss is defined as follows: Lcls = −(1/BN) Σ_{i=1}^{B} Σ_{j=1}^{N} Σ_{k=1}^{K} I(gij = k) log [exp(aij,k) / Σ_{m=1}^{K} exp(aij,m)] (11) where aij,k is the value of the class attention map for the jth pixel in the ith sample for the kth class.
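The auxiliary loss of Equations 9-10 is a per-pixel softmax cross-entropy averaged over the batch and pixels, with the indicator I(g_ij = k) selecting the log-probability at the true class. A minimal PyTorch sketch (the function name, variable names and the 19-class example are ours):

```python
import torch
import torch.nn.functional as F

def pixel_ce_loss(logits, labels):
    """Eqs. (9)-(10): mean per-pixel cross-entropy over batch B and pixels N.

    logits: (B, K, H, W) raw scores p_{ij,k}; labels: (B, H, W) ground
    truth g_{ij} in {0, ..., K-1}.  The indicator I(g_ij = k) is realised
    by gathering the log-softmax value at the true class of each pixel.
    """
    log_prob = F.log_softmax(logits, dim=1)           # softmax over K classes
    picked = log_prob.gather(1, labels.unsqueeze(1))  # log p at g_ij
    return -picked.mean()                             # -1/(BN) * double sum

logits = torch.randn(2, 19, 4, 4)        # e.g. 19 Cityscapes classes
labels = torch.randint(0, 19, (2, 4, 4))
loss = pixel_ce_loss(logits, labels)
# the same quantity as PyTorch's built-in per-pixel cross entropy
assert torch.allclose(loss, F.cross_entropy(logits, labels))
```

The class attention loss of Equation 11 has the identical form with the class attention map values a_{ij,k} in place of the predictions p_{ij,k}.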
We combine the three terms into the final loss as follows: L = λ1Lce + λ2Lcls + λ3Laux (12) where λ1, λ2 and λ3 are set to 1, 0.5 and 0.5 respectively to balance the loss. | This paper proposes a real-time semantic segmentation method. To save computation, there is no heavy backbone to extract features, just a simple block containing conv/relu/bn. Three modules are then applied in parallel on top of the image features: a multi-scale context module (convolutions with different dilation rates), a spatial attention module and a channel attention module. Experiments were done on three datasets and the proposed method achieves reasonable performance with higher FPS than previous methods. | SP:f3213d13f562d5b9464582d7a66c483fe6bf956d |
AASEG: ATTENTION AWARE NETWORK FOR REAL TIME SEMANTIC SEGMENTATION | 1 INTRODUCTION. Semantic segmentation is a fundamental problem in computer vision where the goal is to label each and every pixel in the image with its appropriate class. Since it is required to be deployed in real-world settings like robots and autonomous vehicles, there is a need to balance the speed vs. performance tradeoff. The Fully Convolutional Network (FCN) (Long et al., 2015), composed of convolutional layers, was one of the first works to obtain strong semantic representations. However, this method was not able to capture boundary information accurately. Atrous convolutions (Yu and Koltun, 2015) at the last several stages of the network were used to give feature maps with strong semantic representations, thus solving the problem with FCN-based architectures. However, this comes at the cost of increased computational complexity. For real-world navigation tasks like autonomous driving, there is a need to improve the Frames Per Second (FPS). Chen et al. (2017) and (Yu and Koltun, 2015) used dilated convolutions to increase the receptive field while maintaining the number of parameters. SegNet (Badrinarayanan et al., 2017) utilizes a small network structure and skip connections to achieve improved FPS. (Mehta et al., 2018), (Paszke et al., 2016) and (Poudel et al., 2019) proposed unique approaches to tackle the real-time semantic segmentation problem. 2 RELATED WORK. (Fu et al., 2019) introduces spatial-wise and channel-wise attention modules to enhance the receptive field. (Paszke et al., 2016) trims a lot of convolution filters to reduce computation. ICNet (Zhao et al., 2018a) proposed an image cascade network using multi-resolution branches. (Iandola et al., 2016) allows the neural network to find the critical channels of the feature map and select the most suitable channels by itself. ESPNet (Mehta et al.
, 2018) introduces an efficient spatial pyramid (ESP), which brings great improvement in both speed and performance. The Bilateral Segmentation Network (BiSeNet) (Yu et al., 2018a) uses two parts: a Spatial Path (SP) to preserve spatial information and a Context Path (CP) to enlarge the receptive field. (Yu et al., 2020) used a multi-path framework to combine low-level details and high-level semantics. (Li et al., 2019b) utilizes a lightweight backbone to speed up its network and multi-scale feature aggregation to improve accuracy. SwiftNet (Orsic et al., 2019) used lateral connections to restore the prediction resolution while maintaining speed. (Lin et al., 2017) uses a multi-path refinement network to refine the features but ignores the global context feature. The speed-accuracy comparison of state-of-the-art methods on the Cityscapes test set is shown in Figure 1. We summarize our main contributions as follows: • We propose a novel Attention Aware Network (AASeg) for real-time semantic segmentation. • Our network is comprised of three parts: a Spatial Attention (SA) module to capture the spatial dependencies of the feature maps, a Channel Attention (CA) module to extract high-level semantic information and a Multi Scale Context (MSC) module to learn the information flow between feature maps of consecutive levels. • Detailed experiments and analysis indicate the efficacy of our proposed network in improving not only the performance but also the FPS. We achieve results on par with the previous state of the art on the Cityscapes, CamVid and ADE20K datasets. 3 BACKGROUND. 3.1 SPATIAL INFORMATION. The spatial information of the image is important for predicting detailed output in semantic segmentation. Most existing approaches encode spatial information. DUC (Wang et al., 2018), PSPNet (Zhao et al., 2017), DeepLab v2 (Chen et al.
, 2017) use dilated convolution to preserve the spatial size of the feature map. 3.2 CONTEXT INFORMATION. Semantic segmentation requires context information to generate a high-quality result. Most existing methods enlarge the receptive field or fuse different contextual information. (Chen et al., 2017), (Wang et al., 2018) and (Yu and Koltun, 2015) use different dilation rates in convolution layers to capture contextual information at different scales. In (Chen et al., 2017), an "ASPP" module is used to capture context information from different receptive fields. PSPNet (Zhao et al., 2017) applies a "PSP" module which contains average pooling layers at several different scales. 3.3 ATTENTION MECHANISM. (Hu et al., 2018) applied channel attention to image recognition and achieved state-of-the-art results. (Yu et al., 2018b) proposed a network that learns the global context as attention and revises the features. A multi-scale network along with a custom attention module was proposed by (Sagar and Soundrapandiyan, 2020) for semantic segmentation. 3.4 FEATURE FUSION. The spatial information captured by the Spatial Path consists of rich detailed information. The output feature of the Context Path is made up of contextual information. Feature fusion was used by DMSANet (Sagar, 2021b) to achieve state-of-the-art results on image classification, object detection and instance segmentation. 4 PROPOSED METHOD. 4.1 DATASET. The following datasets have been used to benchmark our results: 1. Cityscapes This dataset is used for urban street segmentation. The 5000 annotated images are used in our experiments, divided into 2975, 500 and 1525 images for training, validation and testing respectively. 2. ADE20K This dataset contains labels for 150 object categories. It includes 20k, 2k and 3k images for training, validation and testing respectively. 3.
CamVid This dataset is used for semantic segmentation in autonomous driving scenarios. It is composed of 701 densely annotated images. 4.2 NETWORK ARCHITECTURE. The fundamental goal in semantic segmentation is to map an RGB image X ∈ R^{H×W×3} to a semantic map Y ∈ R^{H×W×C} with the same spatial resolution H × W, where C is the number of object classes present in the image. The input image X is converted to a set of feature maps Fl, where l = 1, ..., 3 indexes the network stages and Fl ∈ R^{Hl×Wl×Cl} is a Cl-dimensional feature map. Our network doesn't use any backbone to extract features from the input image, unlike many previous architectures. The input image is first passed through a block comprising a convolution, batch normalization and a ReLU activation function. We express a convolution layer Wn(x) as follows: Wn(x) = Wn×n ∗ x + b (1) where ∗ represents the convolution operator, Wn×n represents the n × n convolutional kernel, x represents the input data and b represents the bias vector. 4.3 MULTI SCALE CONTEXT MODULE. The feature map produced by the convolutional block is split into three parts processed with a 1×1 convolution, a 3×3 convolution and a 5×5 convolution respectively. The individual feature maps are then fused together. The fused feature map is first convolved with a 1 × 1 convolution to reduce the number of channels from 2048 to 256. The feature map produced is of size H × W × nc, where H, W are the height and width of the feature map, and nc denotes the number of channels. The input feature map is then convolved with dilated convolution layers with increasing dilation rates of 3, 6 and 12. The input to each dilated convolution layer is formed by concatenating the input feature map with the outputs from the previous convolutions. At the final step, the outputs from the three dilated convolutions are concatenated with the input feature map. 4.4 SPATIAL ATTENTION MODULE.
The spatial attention module is used for capturing the spatial dependencies of the feature maps. The spatial attention (SA) module in our network is defined below: fSA(x) = fsigmoid(W2(fReLU(W1(x)))) (2) where W1 and W2 denote the first and second 1×1 convolution layers respectively, x denotes the input data, fsigmoid denotes the sigmoid function and fReLU denotes the ReLU activation function. The spatial attention module used in this work is shown in Figure 2. 4.5 CHANNEL ATTENTION MODULE. The channel attention module is used for extracting high-level multi-scale semantic information. The channel attention (CA) module in our network is defined below: fCA(x) = fsigmoid(W2(fReLU(W1(fAvgPool(x))))) (3) where W1 and W2 denote the first and second 1 × 1 convolution layers and x denotes the input data. fAvgPool denotes the global average pooling function, fsigmoid denotes the sigmoid function and fReLU denotes the ReLU activation function. The channel attention module used in this work is shown in Figure 3. 4.6 AGGREGATION. We denote the concatenation operation as follows: xconcat = x1 ⊕ x2 ⊕ x3 (4) where ⊕ represents the concatenation operator and x1, x2 and x3 represent the features of the three branches. The AASeg module can be denoted as follows: xAASeg = ((fSA(xconcat) ⊗ xconcat) ⊕ (fCA(xconcat) ⊗ xconcat) ⊕ (fMSC(xconcat) ⊗ xconcat)) (5) where ⊕ represents the concatenation operator, fSA represents the spatial attention module of Equation 2, fCA represents the channel attention module of Equation 3, fMSC represents the multi-scale context module and xconcat represents the combined feature. We use ConvXi to denote the operations of the ith block.
Therefore, the output of the ith block is calculated as follows: xi = ConvXi(xi−1, ki) (6) where ConvX includes one convolutional layer, one batch normalization layer and one ReLU activation layer, ki is the kernel size of the convolutional layer, and xi−1 and xi are the input and output of the ith block. A fusion operation is used to combine high-level features with low-level features using Equation 7: xoutput = F(x1, x2, . . . , xn) (7) The overall structure of our proposed AASeg network is shown in Figure 4. 4.7 LOSS FUNCTIONS. We use the cross-entropy loss function to weigh the difference between the forward propagation result of the network and the ground truth of the samples. The cross-entropy loss is calculated as defined in Equation 8: Lce = −(1/N) Σ_{n=1}^{N} [yn log ŷn + (1 − yn) log(1 − ŷn)] (8) where N denotes the total number of samples, yn denotes the probability that the forward propagation result is true, and 1 − yn denotes the probability that the forward propagation result is false. We also use an auxiliary supervision loss Laux to improve the model performance and make it easier to optimize. The auxiliary loss can be defined as: Laux = −(1/BN) Σ_{i=1}^{B} Σ_{j=1}^{N} Σ_{k=1}^{K} I(gij = k) log [exp(pij,k) / Σ_{m=1}^{K} exp(pij,m)] (9) I(gij = k) = { 1, gij = k; 0, otherwise } (10) where B is the mini-batch size, N is the number of pixels in every batch, K is the number of categories, pij,k is the prediction of the jth pixel in the ith sample for the kth class, and I(gij = k) is the indicator function defined in Equation 10. The class attention loss Lcls from the channel attention module is also used. The class attention loss is defined as follows: Lcls = −(1/BN) Σ_{i=1}^{B} Σ_{j=1}^{N} Σ_{k=1}^{K} I(gij = k) log [exp(aij,k) / Σ_{m=1}^{K} exp(aij,m)] (11) where aij,k is the value of the class attention map for the jth pixel in the ith sample for the kth class.
We combine the three terms into the final loss as follows: L = λ1Lce + λ2Lcls + λ3Laux (12) where λ1, λ2 and λ3 are set to 1, 0.5 and 0.5 respectively to balance the loss. | This paper proposed an attention aware network for real-time semantic segmentation. Spatial attention and channel attention modules are used in the proposed method, as well as a multi-scale context module. The proposed method is evaluated on some public datasets. | SP:f3213d13f562d5b9464582d7a66c483fe6bf956d |
Interest-based Item Representation Framework for Recommendation with Multi-Interests Capsule Network | 1 INTRODUCTION. With the rapid development of deep learning, great achievements have been made in recommendation, such as news recommendation, video recommendation, e-commerce and advertisement. For recommendation systems, user interaction behaviors imply single or multiple interests of the user, not only the items themselves in the sequences. In general, users may have multiple interests, e.g., a user interacts with products from several different categories, including clothes, sports and food. The interests lie beneath the interactive behaviors, which increases the difficulty of capturing them directly. For recommendation systems, how to target different users' interests is a key objective. A series of deep learning models for click-through rate (CTR) prediction have been proposed. Wide & Deep (Cheng et al., 2016) jointly trains wide linear models and deep neural networks to combine the benefits of memorization and generalization for recommender systems. PNN, Deep Crossing (Shan et al., 2016) and DeepFM (Guo et al., 2017) try to extract low-order and high-order features by adopting a product layer. DIN (Zhou et al., 2018) uses an attention mechanism to increase the pooling weights of similar items. DIEN (Zhou et al., 2019) introduces a sequential model to build the sequential character instead of using item embeddings directly as in DIN. DIEN extracts the hidden states of a GRU as attention input and uses AUGRU in place of the traditional attention model. Generally, deep neural networks depict user interests from previous user-item interactions by utilizing item embedding vectors. To settle the diffusion matter of interests, (Sabour et al., 2017) proposes the dynamic routing capsule network and (Edraki et al., 2020) successfully achieves a better understanding of relationships between objects than CNNs.
Based on the dynamic routing of capsule networks, (Li et al., 2019) proposes MIND for dealing with a user's diverse interests in retrieval systems by representing one user with multiple vectors encoding the different aspects of the user's interests. However, existing representation learning methods mainly focus on optimizing the item-based mechanism between user interactive behavior sequences and the candidate item, and therefore ignore the impact of the label diffusing into each item in the user interactive behavior sequences, which weakens the intensity of user interests in the back propagation of the training model. Besides, with network layers getting deeper and the dimension of the embedding layer becoming larger, new methods always need to redesign the model architecture or bring in new datasets or other information. Hence, a method is necessary that enhances model performance and can be generally used in practice without redesigning the whole model architecture or bringing in extra information. In this paper, we propose a framework to learn interest-based item representations directly by introducing the user Multi Interests Capsule Network (MICN). To make the framework model-agnostic, the Multi Interests Capsule Network is designed as an auxiliary task ((Pi et al., 2019) introduces an auxiliary task to enhance model performance) to jointly learn item-based item representations and interest-based item representations. The interest-based item representation generated by MICN and shared with the original model injects users' diverse interest information into the whole model. The contributions of this paper can be summarized as follows: • A new framework to learn interest-based item representations by introducing the user Multi Interests Capsule Network (MICN). MICN is designed as an auxiliary task and easily integrated with the original recommendation model.
• The new item representation is generated by concatenating the interest-based item representation produced by MICN and the item-based item representation produced by the original model. • An approach for joint learning and hyper-parameter optimization with the MICN framework. Experimental results show great improvement of different recommendation models on benchmark datasets. We do experiments on different public datasets and compare results between original ranking models (such as Wide & Deep, DIN, DIEN) and models with the auxiliary MICN. The results demonstrate that the framework we propose has better performance than the original models. 2 MODEL ARCHITECTURE. In a recommendation system, i denotes the item id in practice, Iu denotes the set of interactive behavior sequences (clicked or viewed item sequences) of a user u, and pu is basic user profile information. it is the candidate item id and rt is the related information of the candidate item from the recommendation system. f(it, rt) is the feature function built from the candidate item it and its related information rt. Usually, user interests are represented by learning the function f(Iu, pu) of the user profile pu and the user interactive behavior sequences Iu. Hence the user interests can be formulated as Vu = f(Iu, pu) (1) where Vu = (v^1_u, v^2_u, . . . , v^K_u) ∈ R^{h×K} is the representation learned from user u's information Iu and pu, h is the user interest vector dimension and K is the number of interest vectors. In fact, v^1_u represents one of the user's multiple interest vectors and Vu is the collection of them. Additionally, the candidate item representation is learned by the function f(it, rt) of the item id and its information. We can obtain the candidate item embedding by et = f(it, rt) (2) where et ∈ R^h is the representation embedding vector learned from the target item id it and its related information rt.
In fact, et is always the Embeddings & Pooling layer vector taken from the user Multi Interests Capsule Network presented in the next section. A recommendation system pushes 'good' items to a visiting web or app user. Hence, a score measuring the relationship between the candidate item and the user's interests is necessary. The score is defined as fscore(Vu, et) = e^T_t v^k_u (3) We obtain a value fscore measuring the distance between user interests and candidate items. Finally, according to the collection of the user's multiple interests, the recommendation system selects the top 'good' items for the user. 2.1 USER MULTI INTERESTS CAPSULE NETWORKS. To circumvent some limitations of CNNs, capsules replace scalars with vectors to encode appearance feature representations, allowing better preservation of spatial relationships between whole objects and their parts. They also introduced the dynamic routing mechanism, which allows weighting the contributions of parts to a whole object differently at each inference step. The multiple interests of a user usually hide in the interactive behavior sequences and profile information. Capsules (Sabour et al., 2017) replace scalars with vectors to encode appearance feature representations by assembling a group of neurons. The dynamic routing of a capsule network learns the weight of each capsule, which is capable of encoding the relationship between the part and the whole. Capsules have a better understanding of relationships between objects than CNNs (Edraki et al., 2020). In recommendation systems, MIND (Li et al., 2019) automatically captures the high-level multiple interests of a user through the dynamic routing of capsules and achieves good performance in the retrieval system of e-commerce. Consequently, we propose item representations based on the user Multi Interests Capsule Network (MICN) to help CTR prediction models improve their performance.
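The interest-item scoring and top-item selection around Equation 3 can be sketched as follows. Taking the maximum over the K interest capsules per item is one common retrieval choice and an assumption on our part, as are all names and dimensions:

```python
import numpy as np

def top_items(V_u, item_embs, k=3):
    """Eq. (3): f_score(V_u, e_t) = e_t^T v_u^k.

    V_u: (h, K) matrix whose columns are the user's K interest capsules;
    item_embs: (n, h) candidate item embeddings.  Each item is scored by
    its best-matching interest and the indices of the top-k items are
    returned.
    """
    scores = item_embs @ V_u       # (n, K) interest-item dot products
    best = scores.max(axis=1)      # score of the closest interest per item
    return np.argsort(-best)[:k]   # indices of the k highest-scoring items

rng = np.random.default_rng(0)
V_u = rng.normal(size=(16, 4))     # h = 16, K = 4 interests
items = rng.normal(size=(100, 16)) # 100 candidate items
print(top_items(V_u, items))
```

In practice the argmax over n candidates would be replaced by approximate nearest-neighbour search over the item corpus, but the scoring rule is the same dot product.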
We briefly introduce the dynamic (Behavior-to-Interest) routing of capsules for learning the representation of multiple interests from user profile information and interactive behavior sequences. The output of each capsule is vj = (‖sj‖² / (1 + ‖sj‖²)) · (sj / ‖sj‖) (4) where vj is the output and sj is the total input of capsule j: sj = Σ_i cij x̂_{j|i} = Σ_i cij Wij xi (5) cij = exp(bij) / Σ_k exp(bik) (6) Dynamic routing then captures the high-level abstract interests from the raw features of the user, where cij is a softmax of the routing logits bij. Behavior-to-Interest (B2I) routing (Li et al., 2019) adaptively aggregates the user's viewed sequences into multiple interest-representing vectors. The routing logit bij is defined as bij = u^T_j S ei, i ∈ Iu, j ∈ {1, 2, . . . , K} (7) where ei ∈ R^h is the embedding vector of item i in the user's interactive behavior sequences, uj ∈ R^h, j ∈ {1, 2, . . . , K} is the capsule vector of a user interest, and K is a hyper-parameter giving the number of user interests. S ∈ R^{h×h} is the bilinear mapping matrix linking the user's interest capsules and viewed sequences. bij connects the user's interests and items and keeps them in the same vector space. After capturing the multi-interest capsule vectors from the user's interactive behavior sequences and profile information, MIND introduced label-aware attention based on a scaled dot product to measure the relationship between user interests and item information. In the label-aware attention layer, the candidate item is the query and the user interest capsules are the keys and values, so the candidate item embedding vector is represented in the interest capsule space. The scaled dot product is formulated as vu = Vu softmax(pow(V^T_u ei, p)) (8) Consequently, we obtain the probability P(ei|vu) and use a softmax activation function to select the 'good' one.
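A minimal NumPy sketch of the B2I routing loop (Equations 4-7). The random initialisation of the logits, the iteration count, and the use of the shared bilinear matrix S of Equation 7 in place of per-pair weights Wij are assumptions, so this is an illustration rather than the exact algorithm:

```python
import numpy as np

def squash(s):
    """Eq. (4): v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||), per capsule."""
    norm = np.linalg.norm(s, axis=-1, keepdims=True) + 1e-9
    return (norm ** 2 / (1.0 + norm ** 2)) * s / norm

def b2i_routing(E, S, K=4, iters=3, seed=0):
    """Behavior-to-Interest routing over a user's behaviour embeddings.

    E: (n, h) embeddings e_i of the n behaviour items; S: (h, h) shared
    bilinear mapping matrix.  Returns K interest capsules of dimension h.
    """
    n, h = E.shape
    rng = np.random.default_rng(seed)
    b = rng.normal(size=(n, K))       # routing logits b_ij
    mapped = E @ S                    # S e_i for every behaviour item i
    for _ in range(iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # Eq. (6)
        s = c.T @ mapped              # Eq. (5): weighted sum per capsule j
        u = squash(s)                 # Eq. (4): (K, h) interest capsules
        b = b + mapped @ u.T          # Eq. (7): b_ij += u_j^T S e_i
    return u

E = np.random.default_rng(1).normal(size=(10, 8))  # 10 behaviours, h = 8
caps = b2i_routing(E, np.eye(8))
print(caps.shape)
```

The squash nonlinearity keeps every capsule's norm below 1, so the norm can be read as the strength of the corresponding interest.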
The training loss is defined as Lmicn = −Σ_{u,i} log P(ei|vu) (9) where Lmicn is the loss of the user multi-interests capsule network. As the match between user interests and candidate items, the item embedding vectors and the user interest capsule vectors share the same vector space based on the user's interest representation, which is very important for interest-based item representation. 2.2 INTEREST BASED ITEM EMBEDDINGS REPRESENTATION. Embedding representation based on deep learning is of much concern in practice (Wang et al., 2020). In recommendation systems, each recommendation model has its own method of generating embeddings. Many works introduced in Section 1 extract multiple interests represented by item embeddings by designing network structures. Further, the impact of the label diffusing into each item in user interactive behavior sequences weakens the intensity of user interests in the back propagation of the training model. Though a dynamic routing capsule network can partially settle the diffusion matter of interests, integrating it with a specific recommendation model requires redesigning the model architecture, which is difficult and not generally applicable. Inspired by DIEN, auxiliary tasks play a significant role in improving model performance. In order to refrain from redesigning the complex main-task model architecture, an auxiliary task is introduced for better item representation learning. Therefore, we propose a model framework with the user Multi Interests Capsule Network (MICN) as an auxiliary task that shares interest-based item embeddings with the main recommendation model task, which is the Deep Interest Network (DIN). According to Equation 8, the scaled dot product, i.e., the distance between interest and item, makes the item embedding vector indicated by the user interest capsule vectors. Besides, the auxiliary task brings item embeddings expressed by user interest capsules into the main model by sharing the item embedding vector.
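The MICN objective of Equation 9 above can be sketched with sampled negatives. Standing in for a full softmax over the item vocabulary with a handful of sampled negatives, and keeping those negatives out of the label-aware attention (they enter only the softmax denominator), are our assumptions, as are all names:

```python
import numpy as np

def micn_loss(v_u, pos_emb, neg_embs):
    """Negative log-likelihood -log P(e_i | v_u) of Eq. (9), sampled form.

    v_u: (h,) label-aware user vector from Eq. (8); pos_emb: (h,) embedding
    of the positive (label) item; neg_embs: (m, h) sampled negative item
    embeddings used only in the softmax denominator.
    """
    logits = np.concatenate([[pos_emb @ v_u], neg_embs @ v_u])
    log_p = logits[0] - np.log(np.exp(logits).sum())  # log P(e_i | v_u)
    return -log_p                                     # per-example loss

rng = np.random.default_rng(0)
h = 8
loss = micn_loss(rng.normal(size=h), rng.normal(size=h),
                 rng.normal(size=(5, h)))
print(loss)
```

Summing this per-example quantity over users and positive items recovers the batched loss of Equation 9.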
In the main model, we define the item embedding as composed of two parts: e = eorig ⊕ eaux (10) where ⊕ is the concatenation operator, and eorig and eaux are the item embedding vectors in the main target recommendation model task and in the auxiliary task designed as MICN, respectively. This framework not only expands the original model's item embeddings with reference to the user interest capsules, but also keeps the original model architecture and its properties. Consequently, the framework can be applied to general recommendation models. Because the item embedding vector of the main recommendation model is a combination of the original model item embedding and the auxiliary model item embedding, the original item embedding is not influenced by the auxiliary task, while the auxiliary item embedding is influenced by both tasks, which can be controlled by a hyper-parameter. Hence, the total loss of the whole model is formulated as L = Lmain + λLmicn (11) where L is the total loss of the whole model, Lmain is the loss of the main model and Lmicn is the loss of the user multi-interests capsule network. λ is a hyper-parameter which adjusts the balance between the main loss and the auxiliary task loss. For Lmicn, the label-aware attention layer needs positive samples to construct the loss function. Hence negative samples of the label item are masked so that the loss works when training the whole model. During model training, the item embedding e receives two parts of back-propagated gradient: from the main task and from the auxiliary task. eorig only receives the main model gradient ∇_main eorig, while eaux receives the auxiliary model (MICN) gradient ∇_aux eaux as well as the main model gradient ∇_main eaux, in order to make eaux fit the main model. The gradient of the auxiliary task is updated following ∇eaux = (1 − ϕ)∇_aux eaux + ϕ∇_main eaux where ϕ ∈ [0, 1]. How to choose a suitable hyper-parameter ϕ will be discussed in the experiments. | This paper studies the item recommendation task.
It claims that existing studies ignore the correlation between user interests and candidate items and that they are not easily extendable. To deal with this, an auxiliary task fulfilled by a Multi Interests Capsule Network (MICN) is proposed to extend the existing frameworks. Experiments are conducted via click-through rate (CTR) prediction. The MICN improves only very slightly on the Amazon Books dataset. | SP:ef4df6d694edf2d0c18d04329204745aa2e8aa9b |
Interest-based Item Representation Framework for Recommendation with Multi-Interests Capsule Network | 1 INTRODUCTION. With the rapid development of deep learning, great achievements have been made in recommendation, such as news recommendation, video recommendation, e-commerce and advertisement. For recommendation systems, user interaction behaviors imply single or multiple interests of the user, not only the items themselves in the sequences. In general, users may have multiple interests, e.g., a user interacts with products from several different categories, including clothes, sports and food. The interests lie beneath the interactive behaviors, which increases the difficulty of capturing them directly. For recommendation systems, how to target different users' interests is a key objective. A series of deep learning models for click-through rate (CTR) prediction have been proposed. Wide & Deep (Cheng et al., 2016) jointly trains wide linear models and deep neural networks to combine the benefits of memorization and generalization for recommender systems. PNN, Deep Crossing (Shan et al., 2016) and DeepFM (Guo et al., 2017) try to extract low-order and high-order features by adopting a product layer. DIN (Zhou et al., 2018) uses an attention mechanism to increase the pooling weights of similar items. DIEN (Zhou et al., 2019) introduces a sequential model to build the sequential character instead of using item embeddings directly as in DIN. DIEN extracts the hidden states of a GRU as attention input and uses AUGRU in place of the traditional attention model. Generally, deep neural networks depict user interests from previous user-item interactions by utilizing item embedding vectors. To settle the diffusion matter of interests, (Sabour et al., 2017) proposes the dynamic routing capsule network and (Edraki et al., 2020) successfully achieves a better understanding of relationships between objects than CNNs.
Based on dynamic routing of capsule networks, (Li et al., 2019) proposes MIND to deal with users' diverse interests in retrieval systems by representing each user with multiple vectors encoding different aspects of the user's interests. However, existing representation learning methods mainly focus on optimizing an item-based mechanism between user interactive behavior sequences and the candidate item; they therefore ignore the impact of the label diffusing into each item of the behavior sequence, which weakens the intensity of user interests during back-propagation. Besides, as networks get deeper and embedding dimensions grow larger, new methods usually need to redesign the model architecture or bring in new datasets or other information. Hence, a method is needed that enhances model performance and can be generally used in practice without redesigning the whole model architecture or requiring extra information. In this paper, we propose a framework to learn interest-based item representations directly by introducing a user Multi-Interests Capsule Network (MICN). To make the framework model-agnostic, the Multi-Interests Capsule Network is designed as an auxiliary task ((Pi et al., 2019) similarly introduces an auxiliary task to enhance model performance) that jointly learns item-based and interest-based item representations. The interest-based item representation generated by MICN is shared with the original model, injecting users' diverse interest information into the whole model. The contributions of this paper can be summarized as follows: • A new framework to learn interest-based item representations by introducing a user Multi-Interests Capsule Network (MICN). MICN is designed as an auxiliary task and is easily integrated with the original recommendation model.
• The new item representation is generated by concatenating the interest-based item representation produced by MICN and the item-based item representation produced by the original model. • An approach for joint learning and hyper-parameter optimization within the MICN framework. Experimental results show substantial improvements for different recommendation models on benchmark datasets. We conduct experiments on different public datasets and compare results between original ranking models (such as Wide & Deep, DIN, DIEN) and the same models with the auxiliary MICN. The results demonstrate that the proposed framework outperforms the original models. 2 MODEL ARCHITECTURE. In a recommendation system, $i$ denotes an item id, $I_u$ denotes the user interactive behavior sequence (clicked or viewed items) of a user $u$, and $p_u$ is the basic user profile information. $i_t$ is the candidate item id and $r_t$ is the related information of the candidate item from the recommendation system; $f(i_t, r_t)$ is the feature function of the candidate item built from $i_t$ and $r_t$. Usually, user interests are represented by learning the function $f(I_u, p_u)$ of the user profile $p_u$ and the behavior sequence $I_u$. Hence the user interests can be formulated as $V_u = f(I_u, p_u)$ (1), where $V_u = (v_u^1, v_u^2, \ldots, v_u^K) \in \mathbb{R}^{h \times K}$ is the representation learned from the user information $I_u$ and $p_u$, $h$ is the dimension of each interest vector and $K$ is the number of interest vectors. Each $v_u^k$ represents one of the user's multiple interests, and $V_u$ is the collection of them. Additionally, the candidate item representation is obtained by learning the function $f(i_t, r_t)$ of the item id and its information: $e_t = f(i_t, r_t)$ (2), where $e_t \in \mathbb{R}^h$ is the representation embedding vector learned from the target item id $i_t$ and its related information $r_t$.
In fact, $e_t$ is taken from the Embeddings & Pooling layer of the User Multi Interests Capsule Network presented in the next section. A recommendation system pushes 'good' items to a visiting web or app user; hence a score measuring the relationship between the candidate item and the user interests is necessary. The score is defined as $f_{score}(V_u, e_t) = e_t^{\top} v_u^k$ (3), and $f_{score}$ measures the affinity between user interests and candidate items. Finally, according to the collection of a user's multiple interests, the recommendation system selects the top 'good' items for the user. 2.1 USER MULTI INTERESTS CAPSULE NETWORKS. To circumvent some limitations of CNNs, capsules (Sabour et al., 2017) replace scalars with vectors to encode appearance feature representations by assembling groups of neurons, allowing better preservation of spatial relationships between whole objects and their parts. They also introduce a dynamic routing mechanism, which weights the contributions of parts to a whole object differently at each inference step; this routing learns the weights of the different capsules, encodes the relationship between a part and the whole, and gives capsules a better understanding of object relationships than CNNs (Edraki et al., 2020). The multiple interests of a user usually hide in the interactive behavior sequence and profile information. In recommendation, MIND (Li et al., 2019) automatically captures a user's high-level multiple interests through dynamic capsule routing and achieves good performance in the retrieval system of an e-commerce platform. Consequently, we propose an item representation based on a user multi-interest capsule network (MICN) to help CTR prediction models improve their performance.
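As a concrete illustration of the scoring in Eq. (3), the following minimal NumPy sketch (the toy dimensions $h$ and $K$ are our assumptions, not values from the paper) computes one dot-product score per interest capsule and ranks the candidate by its best-matching interest:

```python
import numpy as np

rng = np.random.default_rng(0)
h, K = 8, 4                      # interest dim, number of interest capsules

V_u = rng.normal(size=(h, K))    # columns v_u^k: the user's K interest vectors
e_t = rng.normal(size=h)         # candidate item embedding

# Eq. (3): one score per interest capsule, f_score(V_u, e_t) = e_t^T v_u^k
scores = e_t @ V_u               # shape (K,)

# ranking typically keeps the best-matching interest for the candidate
best_k = int(np.argmax(scores))
f_score = float(scores[best_k])
```

Taking the maximum over capsules is one common choice; the paper leaves the choice of $k$ open.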
We briefly introduce the dynamic (behavior-to-interest) routing of capsules for learning the representation of multiple interests from user profile information and interactive behavior sequences. The output of each capsule is $v_j = \frac{\|s_j\|^2}{1 + \|s_j\|^2} \frac{s_j}{\|s_j\|}$ (4), where $v_j$ is the output and $s_j$ is the total input of capsule $j$, with $s_j = \sum_i c_{ij} \hat{x}_{j|i} = \sum_i c_{ij} W_{ij} x_i$ (5) and $c_{ij} = \frac{\exp(b_{ij})}{\sum_k \exp(b_{ik})}$ (6). Dynamic routing then captures high-level abstract interests from the raw user features; $c_{ij}$ is the softmax of the logits $b_{ij}$. Behavior-to-interest (B2I) routing (Li et al., 2019) adaptively aggregates a user's viewed sequence into multiple interest-representing vectors. The routing logits are defined as $b_{ij} = u_j^{\top} S e_i, \; i \in I, \; j \in \{1, 2, \ldots, K\}$ (7), where $e_i \in \mathbb{R}^h$ is the embedding vector of item $i$ in the user's interactive behavior sequence, $u_j \in \mathbb{R}^h, j \in \{1, 2, \ldots, K\}$ is the capsule vector of a user interest, and $K$ is a hyper-parameter specifying the number of user interests. $S \in \mathbb{R}^{h \times h}$ is a bilinear mapping matrix linking the user's interest capsules and the viewed sequence; $b_{ij}$ connects a user interest and an item and keeps them in the same vector space. After capturing the multi-interest capsule vectors from the behavior sequence and profile information, MIND introduces label-aware attention based on a scaled dot product to measure the relationship between user interests and item information. In the label-aware attention layer, the candidate item is the query and the user interest capsules are the keys and values, so the candidate item embedding is represented in the interest capsule space. The scaled dot product is formulated as $v_u = V_u \, \mathrm{softmax}(\mathrm{pow}(V_u^{\top} e_i, p))$ (8). Consequently, we obtain the probability $P(e_i \mid v_u)$ and use a softmax activation to select the 'good' one.
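To make Eqs. (4)-(7) concrete, here is a minimal NumPy sketch of B2I routing. The random bilinear map, zero-initialized logits and three routing iterations are simplifying assumptions on our part (MIND, for instance, initializes the logits randomly):

```python
import numpy as np

def squash(s):
    # Eq. (4): v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||)
    norm_sq = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + 1e-9)

def b2i_routing(E, K, n_iters=3, seed=0):
    """Behavior-to-interest routing sketch (Eqs. 4-7).
    E: (N, h) item embeddings of one user's behavior sequence.
    Returns (K, h) interest capsules."""
    rng = np.random.default_rng(seed)
    N, h = E.shape
    S = rng.normal(scale=0.1, size=(h, h))   # shared bilinear map (Eq. 7)
    b = np.zeros((N, K))                     # routing logits b_ij
    x = E @ S.T                              # S e_i for every behavior item
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # Eq. (6)
        s = c.T @ x                          # Eq. (5): weighted aggregation
        u = squash(s)                        # Eq. (4): interest capsules u_j
        b = x @ u.T                          # Eq. (7): b_ij = u_j^T S e_i
    return u
```

The squash nonlinearity keeps each capsule's norm strictly below 1, so the norm can be read as the probability that the corresponding interest is present.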
The training loss is defined as $L_{micn} = \sum_{u,i} \log P(e_i \mid v_u)$ (9), where $L_{micn}$ is the loss of the user multi-interests capsule network. Because matching between user interests and the candidate item places the item embedding vector and the user's interest capsule vectors in the same vector space, grounded in the user's interest representation, this loss is very important for interest-based item representation. 2.2 INTEREST BASED ITEM EMBEDDINGS REPRESENTATION. Embedding representations based on deep learning are of much concern in practice (Wang et al., 2020). In recommendation systems, each model has its own way of generating embeddings, and many works introduced in Section 1 extract multiple interests represented by item embeddings through specially designed network structures. Moreover, the label diffusing into each item of the behavior sequence weakens the intensity of user interests during back-propagation. Although dynamic-routing capsule networks can partially settle this diffusion of interests, integrating them into a recommendation model by redesigning its architecture is difficult and not generally applicable. Inspired by DIEN, where an auxiliary task plays a significant role in improving model performance, and to avoid redesigning the complex main-task architecture, we introduce an auxiliary task for better item representation learning. We therefore propose a framework in which the user Multi Interests Capsule Network (MICN) serves as the auxiliary task and shares interest-based item embeddings with the main recommendation task, here a Deep Interest Network (DIN). According to Equation 7, the scaled dot product, as a measure between interest and item, lets the item embedding vector be expressed by the user interest capsule vectors; by sharing the item embedding vector, the auxiliary task brings these interest-based embeddings into the main model.
In the main model, we define the item embedding as composed of two parts: $e = e_{orig} \oplus e_{aux}$ (10), where $\oplus$ is the concatenation operator, and $e_{orig}$ and $e_{aux}$ are the item embedding vectors of the main recommendation task and of the auxiliary task (MICN), respectively. This framework not only expands the original model's item embeddings with reference to user interest capsules, but also keeps the original model architecture and its properties, so it can be applied to general recommendation models. Because the item embedding of the main model is a combination of the original and auxiliary embeddings, the original embedding is not influenced by the auxiliary task, while the auxiliary embedding is influenced by both tasks, controlled by a hyper-parameter. Hence, the total loss of the whole model is formulated as $L = L_{main} + \lambda L_{micn}$ (11), where $L$ is the total loss, $L_{main}$ is the loss of the main model, $L_{micn}$ is the loss of the user multi-interests capsule network, and $\lambda$ is a hyper-parameter balancing the main loss and the auxiliary loss. For $L_{micn}$, the label-aware attention layer needs positive samples when constructing the loss function; negative samples of the label item are therefore masked so that the loss is well defined when training the whole model. During training, the item embedding $e$ receives back-propagated gradients from two parts: the main task and the auxiliary task. $e_{orig}$ only receives the main-model gradient $\nabla_{main} e_{orig}$, while $e_{aux}$ receives the auxiliary-model (MICN) gradient $\nabla_{aux} e_{aux}$ as well as the main-model gradient $\nabla_{main} e_{aux}$, so as to make $e_{aux}$ fit the main model. The gradient of the auxiliary embedding is updated following $\nabla e_{aux} = (1 - \varphi) \nabla_{aux} e_{aux} + \varphi \nabla_{main} e_{aux}$, where $\varphi \in [0, 1]$. How to choose a suitable hyper-parameter $\varphi$ is discussed in the experiments.
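The embedding concatenation of Eq. (10), the combined loss of Eq. (11) and the $\varphi$-blended gradient for $e_{aux}$ can be sketched as follows; the tiny dimensions and default hyper-parameter values are illustrative assumptions:

```python
import numpy as np

def total_loss(l_main, l_micn, lam=0.5):
    # Eq. (11): L = L_main + lambda * L_micn
    return l_main + lam * l_micn

def blend_aux_gradient(grad_aux, grad_main, phi=0.3):
    # grad(e_aux) = (1 - phi) * auxiliary-task gradient + phi * main-task gradient
    return (1.0 - phi) * grad_aux + phi * grad_main

# Eq. (10): the item embedding fed to the main model concatenates both parts
e_orig = np.ones(4)      # trained by the main task only
e_aux = np.zeros(4)      # shared with the MICN auxiliary task
e = np.concatenate([e_orig, e_aux])
```

With $\varphi = 0$ the auxiliary embedding is shaped only by MICN; with $\varphi = 1$ it behaves like an ordinary main-task embedding.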
| This paper introduces an interest-based item representation learning method with a multi-interest capsule network. The authors use an auxiliary task to learn interest-based item representations, which are further combined with item representations learned from their features for final recommendation. Experiments on two domains of the Amazon product dataset and an industrial dataset show that the proposed method outperforms several baseline methods. | SP:ef4df6d694edf2d0c18d04329204745aa2e8aa9b |
Interest-based Item Representation Framework for Recommendation with Multi-Interests Capsule Network | This paper proposes a user multi-interests capsule network (MICN) to improve better item representation.
Because the proposed model is model-agnostic, it can be combined with existing models. However, the intuition of the proposed model is unclear, and the experiments do not consider state-of-the-art CTR models. In this sense, I recommend that the authors clearly argue the key novelty of the proposed model and present a more extensive experimental setup. | SP:ef4df6d694edf2d0c18d04329204745aa2e8aa9b
Learning Neural Causal Models with Active Interventions | Discovering causal structures from data is a challenging inference problem of fundamental importance in all areas of science . The appealing scaling properties of neural networks have recently led to a surge of interest in differentiable neural network-based methods for learning causal structures from data . So far , differentiable causal discovery has focused on static datasets of observational or interventional origin . In this work , we introduce an active intervention-targeting mechanism which enables quick identification of the underlying causal structure of the data-generating process . Our method significantly reduces the required number of interactions compared with random intervention targeting and is applicable for both discrete and continuous optimization formulations of learning the underlying directed acyclic graph ( DAG ) from data . We examine the proposed method across multiple frameworks in a wide range of settings and demonstrate superior performance on multiple benchmarks from simulated to real-world data . 1 INTRODUCTION . Learning causal structure from data is a challenging but important task that lies at the heart of scientific reasoning and accompanying progress in many disciplines ( Sachs et al. , 2005 ; Hill et al. , 2016 ; Lauritzen & Spiegelhalter , 1988 ; Korb & Nicholson , 2010 ) . While there exists a plethora of methods for the task , computationally and statistically more efficient algorithms are highly desirable ( Heinze-Deml et al. , 2018 ) . As a result , there has been a surge in interest in differentiable structure learning and the combination of deep learning and causal inference ( Schölkopf et al. , 2021 ) . Such methods define a structural causal model with smoothly differentiable parameters that are adjusted to fit observational data ( Zheng et al. , 2018 ; Yu et al. , 2019 ; Zheng et al. , 2020 ; Bengio et al. , 2019 ; Lorch et al. , 2021 ; Annadani et al. 
, 2021 ) , although some methods can accept interventional data , thereby significantly improving the identification of the underlying data-generating process ( Ke et al. , 2019 ; Brouillard et al. , 2020 ; Lippe et al. , 2021 ) . However , the improvement critically depends on the experiments and interventions available to the learner . Despite advances in high-throughput methods for interventional data in specific fields ( Dixit et al. , 2016 ) , the acquisition of interventional samples in general settings tends to be costly , technically impossible or even unethical for specific interventions . There is , therefore , a need for efficient usage of the available interventional samples and efficient experimental design to keep the number of interventions to a minimum . A significant amount of prior work exists in causal structure learning that leverages active learning and experimental design to improve identifiability in a sequential manner . These approaches are either graph theoretical ( He & Geng , 2008 ; Eberhardt , 2012 ; Hyttinen et al. , 2013 ; Hauser & Bühlmann , 2014 ; Shanmugam et al. , 2015 ; Kocaoglu et al. , 2017b ; a ; Lindgren et al. , 2018 ; Ghassami et al. , 2018 ; 2019 ; Greenewald et al. , 2019 ; Squires et al. , 2020 ) , Bayesian ( Murphy , 2001 ; Tong & Koller , 2001 ; Masegosa & Moral , 2013 ; Cho et al. , 2016 ; Ness et al. , 2017 ; Agrawal et al. , 2019 ; Zemplenyi & Miller , 2021 ) or rely on Invariant Causal Prediction ( Gamella & Heinze-Deml , 2020 ) . These methods are typically computationally very expensive and do not scale well with respect to the number of variables or dataset size ( Heinze-Deml et al. , 2018 ) . A promising alternative is the use of active learning in a continuous optimization framework for causal structure learning from joint data . 
However, since the applicability of existing scores/heuristics for selecting intervention targets is limited for existing frameworks (see §A.1), current approaches rely on random and independent interventions and do not leverage the acquired evidence from processed experiments. We thus propose a novel method of active selection of intervention targets that can easily be incorporated into many differentiable causal discovery algorithms. Since most of these algorithms treat the adjacency matrix of the causal graph as a learned soft-adjacency, it is readily available for parametrized sampling of different hypothesis graphs. Our method looks for an intervention target that gives maximum disagreement between post-interventional sample distributions under these hypothesis graphs. We conjecture that interventions on such nodes will contain more information about the causal structure and hence enable more efficient learning. To the best of our knowledge, our paper is the first approach to combine both a continuous optimization framework and active causal structure learning from observational and interventional data. We summarize our contributions as follows:
• We propose a novel approach for selecting interventions (single and multi-target) which identifies the underlying graph efficiently and can be used for any differentiable causal discovery method.
• We introduce a novel, scalable two-phase DAG sampling procedure which efficiently generates hypothesis DAGs based on a soft-adjacency matrix.
• We examine the proposed intervention-targeting method across multiple differentiable causal discovery frameworks in a wide range of settings and demonstrate superior performance against established competitive baselines on multiple benchmarks from simulated to real-world data.
• We provide empirical insights on the distribution of selected intervention targets and its connection to the (causal) topological order of the variables in the underlying system.
2 PRELIMINARIES. Structural Causal Model. An SCM (Peters et al., 2017) is defined over a set of random variables $X_1, \ldots, X_M$ (or just $X$ for short) and a directed acyclic graph (DAG) $G = (V, E)$ over variable nodes $V = \{1, \ldots, M\}$. The random variables are connected by the edges in $E$ via functions $f_i$ and jointly independent noise variables $U_i$ through $X_i = f_i(X_{pa(i)}, U_i)$, where $X_{pa(i)}$ are $X_i$'s parents in $G$, and directed edges in the graph represent direct causation. The conditionals $P(X_i \mid X_{pa(i)})$ define the conditional distribution of $X_i$ given its parents. Interventions. Interventions on $X_i$ change the conditional distribution $P(X_i \mid X_{pa(i)})$ to a different distribution, hence affecting the outcome of $X_i$. Interventions can be perfect (hard) or imperfect (soft). A hard intervention entirely removes the dependencies of a variable $X_i$ on its parents $X_{pa(i)}$, defining the distribution of $X_i$ by some $\tilde{P}(X_i)$ rather than $P(X_i \mid X_{pa(i)})$. A more general form is the soft intervention, which changes the effect of the parents of $X_i$ on itself by modifying the conditional distribution from $P_i(X_i \mid X_{pa(i)})$ to an alternative $\tilde{P}_i(X_i \mid X_{pa(i)})$. Dependency Structure Discovery from Interventions (DSDI). We evaluate the proposed method under multiple continuous-optimization causal learning frameworks that learn from fused (observational and interventional) data (Bareinboim & Pearl, 2016), one of them being DSDI (Ke et al., 2019). DSDI reformulates the problem of causal discovery from discrete data as a continuous optimization problem using neural networks: the causal graph adjacency matrix is learned as a matrix parameter $\gamma$ of a neural network, trained with a three-stage iterative procedure.
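The hard-intervention semantics above can be illustrated with a toy three-variable linear-Gaussian SCM; the chain $X_1 \to X_2 \to X_3$ and its coefficients are illustrative assumptions, not from the paper. Ancestral sampling follows the topological order, and $do(X_i = v)$ simply replaces the structural equation of $X_i$:

```python
import numpy as np

def sample_scm(n, do=None, seed=0):
    """Sample n draws from a toy SCM X1 -> X2 -> X3.
    do={index: value} applies a hard intervention: the variable is set to
    `value` and thereby severed from its parents; descendants still react."""
    rng = np.random.default_rng(seed)
    do = do or {}
    X = np.zeros((n, 3))
    # ancestral sampling in topological order
    X[:, 0] = do.get(0, rng.normal(size=n))                    # X1 = U1
    X[:, 1] = do.get(1, 2.0 * X[:, 0] + rng.normal(size=n))    # X2 = f2(X1, U2)
    X[:, 2] = do.get(2, -1.0 * X[:, 1] + rng.normal(size=n))   # X3 = f3(X2, U3)
    return X
```

For example, under $do(X_2 = 5)$, $X_2$ no longer depends on $X_1$, while $X_3$ concentrates around $-5$.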
The first stage involves sampling graphs under the model's current belief in the graph structure and then training functional parameters P, which specify the conditionals of the sampled graphs, using observational data. Note that sampling a graph from γ specifies element-wise multiplications by 0 or 1 in the first layer of the neural networks for the conditionals, removing disallowed edges, with the remaining network parameters P applicable to any sampled graph. The next stage evaluates the sampled graphs under interventional data and scores them accordingly. The final stage updates the learned adjacency matrix with the scores from stage 2. This method performs competitively compared to many other methods. However, all intervention targets in stage 2 of DSDI are random and independent, a strategy that scales poorly to larger graphs. A better approach is active intervention targeting, elaborated in the next section. Differentiable Causal Discovery from Interventional Data (DCDI). We also consider DCDI (Brouillard et al., 2020), which addresses causal discovery from continuous data as a continuous-constrained optimization problem, using neural networks to model the parameters of Gaussian distributions or normalizing flows (Rezende & Mohamed, 2015) that represent the conditional distributions. Unlike DSDI's iterative training of the structural and functional parameters, DCDI optimizes the causal adjacency matrix and functional parameters jointly over the fused data space. But like DSDI, DCDI uses random and independent interventions. 3 ACTIVE INTERVENTION TARGETING. We present a score-based intervention design strategy, called Active Intervention Targeting (AIT), which is applicable to many discrete and continuous optimization formulations of causal structure learning algorithms.
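The element-wise masking in DSDI's first stage can be sketched as follows, with a toy linear "conditional" standing in for DSDI's neural networks (the function and variable names are ours): the sampled graph's 0/1 parent indicators multiply into the first layer, so non-parent inputs are zeroed out while the shared parameters P remain usable under any sampled graph.

```python
def masked_prediction(parent_mask, weights, x):
    """Predict one variable from all variables under a sampled graph.

    `parent_mask` is the sampled graph's 0/1 indicator of allowed parents
    for the target variable; multiplying it element-wise into the first
    layer removes disallowed edges.  `weights` (the functional parameters
    P) stay shared across all sampled graphs.
    """
    return sum(a * w * xi for a, w, xi in zip(parent_mask, weights, x))

weights = [0.5, -1.0, 2.0]        # shared functional parameters P
x = [1.0, 1.0, 1.0]
full = masked_prediction([1, 1, 1], weights, x)     # all edges allowed
pruned = masked_prediction([1, 0, 1], weights, x)   # edge from variable 2 removed
```

Because only the mask changes between sampled graphs, many hypothesis graphs can be scored against the same trained P without retraining.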
Furthermore, we show how our proposed method can be integrated into recent differentiable causal discovery frameworks for guided exploration using interventional data. Assumptions. The proposed method assumes access to a belief state γ over the graph structure (e.g., in the form of a distribution over graphs, a probabilistic adjacency matrix, or a set of hypothesis graphs) and functional parameters P characterizing the conditional relationships between variables (constrained by the graphs sampled from the belief specified by γ). The proposed method does not have to assume causal sufficiency per se. However, it inherits the assumptions of the selected base framework, which may include causal sufficiency depending on the base algorithm of choice. If the underlying framework can handle unobserved variables and offers a generative method for interventional samples, then our method remains applicable.

3.1 A SCORE FOR INTERVENTION TARGETING.

Algorithm 1 Active Intervention Targeting (AIT)
Input: Functional Parameters P, Graph Belief State γ, Interventional Target Space I
Output: Intervention Target Ik∗
1: G ← Sample a set of hypothesis graphs from γ
2: for each intervention target Ik in I do
3:   Pk ← Perform intervention Ik on P
4:   for each graph Gi in G do
5:     Sk,i ← Draw samples from Pk on Gi
6:     Sk,i ← Set variables in Ik to 0
7:   end for
8:   Compute: Dk ← [ Σi (µ^k_i − µ^k)² ] / [ Σi Σj (S^{k,i}_j − µ^k_i)² ]
9: end for
10: Target Intervention Ik∗ ← argmax_k (Dk)

Given a graph belief state γ with its corresponding functional parameters P, and a possible set of intervention targets I (single-node and multi-node intervention targets), we wish to select the most informative intervention target(s) Ik∗ ∈ I with respect to identifiability of the underlying structure.
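A self-contained sketch of Algorithm 1 follows; the hypothesis graphs, the sampler `draw_samples`, and the toy two-variable system are our own illustration of the loop, not the paper's implementation:

```python
import random

def ait_select(graphs, draw_samples, targets, n_samples, rng):
    """Algorithm 1 (AIT): for each candidate target, draw post-
    interventional samples under every hypothesis graph, mask the
    intervened variables to 0, and score the target by the ratio of
    between-graph to within-graph variance (equal sample counts per
    graph assumed)."""
    best_target, best_score = None, -1.0
    for target in targets:
        per_graph = []
        for g in graphs:
            samples = draw_samples(g, target, n_samples, rng)
            # mask off intervened variables (set to 0)
            samples = [[0.0 if d in target else v for d, v in enumerate(s)]
                       for s in samples]
            per_graph.append(samples)
        dim = len(per_graph[0][0])
        means = [[sum(s[d] for s in ss) / len(ss) for d in range(dim)]
                 for ss in per_graph]
        overall = [sum(m[d] for m in means) / len(means) for d in range(dim)]
        vbg = sum((m[d] - overall[d]) ** 2 for m in means for d in range(dim))
        vwg = sum((s[d] - means[i][d]) ** 2
                  for i, ss in enumerate(per_graph) for s in ss for d in range(dim))
        score = vbg / max(vwg, 1e-12)          # Dk = VBGk / VWGk
        if score > best_score:
            best_target, best_score = target, score
    return best_target

def draw_samples(graph_has_edge, target, n, rng):
    """Toy stand-in for sampling from the intervened parameters Pk on a
    graph: two variables, hypothesis graphs that disagree on edge 0 -> 1,
    and hard interventions do(X = 2)."""
    out = []
    for _ in range(n):
        x0 = 2.0 if 0 in target else rng.gauss(0, 1)
        if 1 in target:
            x1 = 2.0
        else:
            x1 = (x0 if graph_has_edge else 0.0) + 0.1 * rng.gauss(0, 1)
        out.append([x0, x1])
    return out

rng = random.Random(0)
hypothesis_graphs = [True, False]              # edge 0 -> 1 present / absent
picked = ait_select(hypothesis_graphs, draw_samples, [(0,), (1,)], 200, rng)
```

In this toy setting the two hypothesis graphs disagree only about the edge 0 → 1, so intervening on node 0 produces very different distributions of X1 under the two graphs (high VBG, low VWG) and is the target AIT selects.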
Such target(s) presumably yield relatively high discrepancies between samples drawn under different hypothesis graphs, indicating larger uncertainty about the target's relation to its parents and/or children. We thus construct an F-test-inspired score to determine the Ik∗ exhibiting the highest discrepancies between post-interventional sample distributions generated by likely graph structures under fixed functional parameters P. In order to compare sample distributions over different graphs, we distinguish between two sources of variation: variance between graphs (VBG) and variance within graphs (VWG). While VBG characterizes the variance of sample means over multiple graphs, VWG accounts for the sample variance when a specific graph is fixed. As in DSDI and DCDI, we mask the contribution of the intervened variables Ik to VBG and VWG, and construct our discrepancy score D as the ratio D = VBG/VWG. This discrepancy score attains high values for intervention targets of particular interest (see Figure 1b for a landscape visualization). While VBG by itself indicates which intervention targets the model is unsettled about, the extension to the proposed variance ratio enables more control over the region of interest. Given a fixed set of graphs G and a fixed interventional sample size across all graphs, assume a scenario where multiple intervention targets attain high VBG. Assessing VWG allows us to distinguish between two extreme cases: (a) targets whose sample populations exhibit large VWG, and (b) targets whose sample populations exhibit low VWG. While high VBG in (a) might be induced by an insufficient sample size due to high variance in the interventional distribution itself, (b) clearly indicates high discrepancy between graphs and should be studied preferentially. Computational Details. We begin by sampling a set of graphs G = {Gi}, i = 1, 2, 3, . . .
from our graph structure belief state γ, however parametrized. This G remains fixed for all interventions considered in the current experimental round. Then, we fix an intervention target Ik and apply the corresponding intervention to P, resulting in partially altered functional parameters Pk where some conditionals have been changed. Next, we draw interventional samples Sk,i from Pk on every graph Gi ∈ G and set the variables in Ik to zero to mask off their contribution to the variance. Having collected all samples over the considered graphs for the specific intervention target Ik, we compute VBGk and VWGk as: VBGk = Σi (µ^k_i − µ^k)² and VWGk = Σi Σj (S^{k,i}_j − µ^k_i)², where µ^k is a vector of the same dimension as any sample in S and denotes the overall sample mean of the interventional setting, µ^k_i the corresponding mean for a specific graph Gi, and S^{k,i}_j is the j-th sample of the i-th graph configuration. Finally, we construct the discrepancy score Dk of Ik as: Dk ← VBGk / VWGk. In contrast to the original definition of the F-score, we can ignore the normalization constants due to equal group sizes and degrees of freedom. Although dependence between the variables is apparent from the connected causal structure, we approximate the variance of the multidimensional samples as the trace of the covariance matrix, i.e., by treating the variables as independent. An outline of the method is provided in Algorithm 1. | Structure learning from observational data is a long-standing challenge that could benefit from creative uses of interventional data. The authors explore the use of active learning in a continuous optimization framework to better traverse and narrow down the space of potential graphs that describe the data.
The contribution is a heuristic for choosing the "best" intervention target based on comparing samples from an intervened SCM with the current best guess of the underlying causal discovery algorithm (in which the proposal is embedded). | SP:21fe181d3c9a76a0c2e8dcfc5136398e09e6c663
Learning Neural Causal Models with Active Interventions | Discovering causal structures from data is a challenging inference problem of fundamental importance in all areas of science. The appealing scaling properties of neural networks have recently led to a surge of interest in differentiable neural network-based methods for learning causal structures from data. So far, differentiable causal discovery has focused on static datasets of observational or interventional origin. In this work, we introduce an active intervention-targeting mechanism which enables quick identification of the underlying causal structure of the data-generating process. Our method significantly reduces the required number of interactions compared with random intervention targeting and is applicable to both discrete and continuous optimization formulations of learning the underlying directed acyclic graph (DAG) from data. We examine the proposed method across multiple frameworks in a wide range of settings and demonstrate superior performance on multiple benchmarks, from simulated to real-world data. 1 INTRODUCTION. Learning causal structure from data is a challenging but important task that lies at the heart of scientific reasoning and accompanying progress in many disciplines (Sachs et al., 2005; Hill et al., 2016; Lauritzen & Spiegelhalter, 1988; Korb & Nicholson, 2010). While there exists a plethora of methods for the task, computationally and statistically more efficient algorithms are highly desirable (Heinze-Deml et al., 2018). As a result, there has been a surge of interest in differentiable structure learning and the combination of deep learning and causal inference (Schölkopf et al., 2021). Such methods define a structural causal model with smoothly differentiable parameters that are adjusted to fit observational data (Zheng et al., 2018; Yu et al., 2019; Zheng et al., 2020; Bengio et al., 2019; Lorch et al., 2021; Annadani et al.
, 2021), although some methods can accept interventional data, thereby significantly improving the identification of the underlying data-generating process (Ke et al., 2019; Brouillard et al., 2020; Lippe et al., 2021). However, the improvement critically depends on the experiments and interventions available to the learner. Despite advances in high-throughput methods for interventional data in specific fields (Dixit et al., 2016), the acquisition of interventional samples in general settings tends to be costly, technically impossible, or even unethical for specific interventions. There is, therefore, a need for efficient usage of the available interventional samples and for efficient experimental design to keep the number of interventions to a minimum. A significant amount of prior work in causal structure learning leverages active learning and experimental design to improve identifiability in a sequential manner. These approaches are either graph-theoretical (He & Geng, 2008; Eberhardt, 2012; Hyttinen et al., 2013; Hauser & Bühlmann, 2014; Shanmugam et al., 2015; Kocaoglu et al., 2017b;a; Lindgren et al., 2018; Ghassami et al., 2018; 2019; Greenewald et al., 2019; Squires et al., 2020), Bayesian (Murphy, 2001; Tong & Koller, 2001; Masegosa & Moral, 2013; Cho et al., 2016; Ness et al., 2017; Agrawal et al., 2019; Zemplenyi & Miller, 2021), or rely on Invariant Causal Prediction (Gamella & Heinze-Deml, 2020). These methods are typically computationally very expensive and do not scale well with respect to the number of variables or dataset size (Heinze-Deml et al., 2018). A promising alternative is the use of active learning in a continuous optimization framework for causal structure learning from joint data.
| The paper proposes a method to select interventions that enable efficient identification of the underlying causal structure. In particular, the method picks the intervention that exhibits the highest discrepancies between the post-interventional sample distributions generated under different hypothesis graphs. The authors provide experimental results demonstrating that the proposed method outperforms the other baselines and also improves sample efficiency. | SP:21fe181d3c9a76a0c2e8dcfc5136398e09e6c663
Learning Neural Causal Models with Active Interventions | Discovering causal structures from data is a challenging inference problem of fundamental importance in all areas of science . The appealing scaling properties of neural networks have recently led to a surge of interest in differentiable neural network-based methods for learning causal structures from data . So far , differentiable causal discovery has focused on static datasets of observational or interventional origin . In this work , we introduce an active intervention-targeting mechanism which enables quick identification of the underlying causal structure of the data-generating process . Our method significantly reduces the required number of interactions compared with random intervention targeting and is applicable for both discrete and continuous optimization formulations of learning the underlying directed acyclic graph ( DAG ) from data . We examine the proposed method across multiple frameworks in a wide range of settings and demonstrate superior performance on multiple benchmarks from simulated to real-world data . 1 INTRODUCTION . Learning causal structure from data is a challenging but important task that lies at the heart of scientific reasoning and accompanying progress in many disciplines ( Sachs et al. , 2005 ; Hill et al. , 2016 ; Lauritzen & Spiegelhalter , 1988 ; Korb & Nicholson , 2010 ) . While there exists a plethora of methods for the task , computationally and statistically more efficient algorithms are highly desirable ( Heinze-Deml et al. , 2018 ) . As a result , there has been a surge in interest in differentiable structure learning and the combination of deep learning and causal inference ( Schölkopf et al. , 2021 ) . Such methods define a structural causal model with smoothly differentiable parameters that are adjusted to fit observational data ( Zheng et al. , 2018 ; Yu et al. , 2019 ; Zheng et al. , 2020 ; Bengio et al. , 2019 ; Lorch et al. , 2021 ; Annadani et al. 
, 2021 ) , although some methods can accept interventional data , thereby significantly improving the identification of the underlying data-generating process ( Ke et al. , 2019 ; Brouillard et al. , 2020 ; Lippe et al. , 2021 ) . However , the improvement critically depends on the experiments and interventions available to the learner . Despite advances in high-throughput methods for interventional data in specific fields ( Dixit et al. , 2016 ) , the acquisition of interventional samples in general settings tends to be costly , technically impossible or even unethical for specific interventions . There is , therefore , a need for efficient usage of the available interventional samples and efficient experimental design to keep the number of interventions to a minimum . A significant amount of prior work exists in causal structure learning that leverages active learning and experimental design to improve identifiability in a sequential manner . These approaches are either graph theoretical ( He & Geng , 2008 ; Eberhardt , 2012 ; Hyttinen et al. , 2013 ; Hauser & Bühlmann , 2014 ; Shanmugam et al. , 2015 ; Kocaoglu et al. , 2017b ; a ; Lindgren et al. , 2018 ; Ghassami et al. , 2018 ; 2019 ; Greenewald et al. , 2019 ; Squires et al. , 2020 ) , Bayesian ( Murphy , 2001 ; Tong & Koller , 2001 ; Masegosa & Moral , 2013 ; Cho et al. , 2016 ; Ness et al. , 2017 ; Agrawal et al. , 2019 ; Zemplenyi & Miller , 2021 ) or rely on Invariant Causal Prediction ( Gamella & Heinze-Deml , 2020 ) . These methods are typically computationally very expensive and do not scale well with respect to the number of variables or dataset size ( Heinze-Deml et al. , 2018 ) . A promising alternative is the use of active learning in a continuous optimization framework for causal structure learning from joint data . 
However , since the applicability of existing scores / heuristics for selecting inter- vention targets is limited for existing frameworks ( see §A.1 ) , current approaches rely on random and independent interventions and do not leverage the acquired evidence from processed experiments . We thus propose a novel method of active selection of intervention targets that can easily be incorporated into many differentiable causal discovery algorithms . Since most of these algorithms treat the adjacency matrix of the causal graph as a learned soft-adjacency , it is readily available for parametrized sampling of different hypothesis graphs . Our method looks for an intervention target that gives maximum disagreement between post-interventional sample distributions under these hypothesis graphs . We conjecture that interventions on such nodes will contain more information about the causal structure and hence enable more efficient learning . To the best of our knowledge , our paper is the first approach to combine both a continuous optimization framework and active causal structure learning from observational and interventional data . We summarize our contributions as follows : • We propose a novel approach for selecting interventions ( single and multi-target ) which identify the underlying graph efficiently and can be used for any differentiable causal discovery method . • We introduce a novel , scalable two-phase DAG sampling procedure which efficiently generates hypothesis DAGs based on a soft-adjacency matrix . • We examine the proposed intervention-targeting method across multiple differentiable causal discovery frameworks in a wide range of settings and demonstrate superior performance against established competitive baselines on multiple benchmarks from simulated to real-world data . • We provide empirical insights on the distribution of selected intervention targets and its connection to the ( causal ) topological order of the variables in the underlying system . 
2 PRELIMINARIES . Structural Causal Model . An SCM ( Peters et al. , 2017 ) is defined over a set of random variables X1 , . . . , XM or justX for short and a directed acyclic graph ( DAG ) G = ( V , E ) over variable nodes V = { 1 , . . .M } . The random variables are connected by edges in E via functions fi and jointly independent noise variables Ui through Xi = fi ( Xpa ( i ) , Ui ) where Xpa ( i ) are Xi ’ s parents in G , and directed edges in the graph represent direct causation . The conditionals P ( Xi|Xpa ( i ) ) define the conditional distribution of Xi given its parents . Interventions . Interventions on Xi change the conditional distribution of P ( Xi|Xpa ( i ) ) to a different distribution , hence affecting the outcome of Xi . Interventions can be perfect ( hard ) or imperfect ( soft ) . Hard interventions entirely remove the dependencies of a variable Xi on its parents Xpa ( i ) , hence defining the conditional probability distribution of Xi by some P̃ ( Xi ) rather than P ( Xi|Xpa ( i ) ) . A more general form of intervention is the soft intervention , where the intervention changes the effect of the parents of Xi on itself by modifying the conditional distribution from Pi ( Xi|Xpa ( i ) ) to an alternative P̃i ( Xi|Xpa ( i ) ) . Dependency Structure Discovery from Interventions ( DSDI ) . We evaluate the proposed method under multiple continuous-optimization causal learning frameworks from fused ( observational and interventional ) data ( Bareinboim & Pearl , 2016 ) , one of them being DSDI ( Ke et al. , 2019 ) . The work of DSDI reformulates the problem of causal discovery from discrete data as a continuous optimization problem using neural networks . The framework proposes to learn the causal graph adjacency matrix as a matrix parameter γ of a neural network , and is trained using a 3-stage iterative procedure . 
The first stage involves sampling graphs under the model ’ s current belief in the graph structure and then training functional parameters P specifying the conditionals of the sampled graphs using observational data . Note how sampling a graph from γ can specify element-wise multiplications by 0 or 1 in the first layer of neural networks for the conditionals , to remove unallowed edges , with other network parameters P applicable to any sampled graph . The next stage is to evaluate the sampled graphs under interventional data and score these graphs accordingly . The final step is to update the learned adjacency matrix with the scores from stage 2 . This method performs competitively compared to many other methods . However , all intervention targets in stage 2 of DSDI are random and independent , a strategy that scales poorly to larger graphs . A better approach would have been active intervention targeting , elaborated in the next section . Differentiable Causal Discovery from Interventional Data ( DCDI ) . We also consider the work of DCDI ( Brouillard et al. , 2020 ) , which addresses causal discovery from continuous data as a continuous-constrained optimization problem using neural networks to model parameters of Gaus- lo g 1 0 D k sian distributions or normalizing flows ( Rezende & Mohamed , 2015 ) which represent conditional distributions . Unlike DSDI ’ s iterative training of the structural and functional parameters , DCDI optimizes the causal adjacency matrix and functional parameters jointly over the fused data space . But like DSDI , DCDI uses random and independent interventions . 3 ACTIVE INTERVENTION TARGETING . We present a score-based intervention design strategy , called Active Intervention Targeting ( AIT ) , which is applicable to many discrete and continuous optimization formulations of causal structure learning algorithms . 
Furthermore , we show how our proposed method can be integrated into recent differentiable causal discovery frameworks for guided exploration using interventional data . Assumptions . The proposed method assumes access to a belief state γ in the graph structure ( e.g. , in the form of a distribution over graphs , a probabilistic adjacency matrix or a set of hypothetical graphs ) and functional parameters P characterizing the conditional relationships between variables ( constrained by the graphs sampled from the belief specified by γ ) . The proposed model does not have to assume causal sufficiency per se . However , it inherits the assumptions of the selected base framework , and this may include causal sufficiency depending on the base algorithm of choice . If the underlying framework can handle unobserved variables and offers a generative method for interventional samples , our method is also applicable . 3.1 A SCORE FOR INTERVENTION TARGETING . Algorithm 1 Active Intervention Targeting ( AIT ) Input : Functional Parameters P , Graph Belief State γ , Interventional Target Space I Output : Intervention Target Ik∗ 1 : G ← Sample a set of hypothesis graphs from γ 2 : for each intervention target Ik in I do 3 : Pk ← Perform intervention Ik on P 4 : for each graph Gi in G do 5 : Sk , i ← Draw samples from Pk on Gi 6 : Sk , i ← Set variables in Ik to 0 7 : end for 8 : Compute : Dk ← [ ∑i ( µki − µk ) 2 ] / [ ∑i ∑j ( Sk , ij − µki ) 2 ] 9 : end for 10 : Target Intervention Ik∗ ← argmaxk ( Dk ) Given a graph belief state γ with its corresponding functional parameters P , and a possible set of intervention targets I ( single-node and multi-node intervention targets ) , we wish to select the most informative intervention target ( s ) Ik∗ ∈ I with respect to identifiability of the underlying structure .
Such target ( s ) presumably yield relatively high discrepancies between samples drawn under different hypothesis graphs , indicating larger uncertainty about the target ’ s relation to its parents and/or children . We thus construct an F-test-inspired score to determine the Ik∗ exhibiting the highest discrepancies between post-interventional sample distributions generated by likely graph structures under fixed functional parameters P . In order to compare sample distributions over different graphs , we distinguish between two sources of variation : variance between graphs ( VBG ) and variance within graphs ( VWG ) . While VBG characterizes the variance of sample means over multiple graphs , VWG accounts for the sample variance when a specific graph is fixed . As in DSDI and DCDI , we mask the contribution of the intervened variables Ik to VBG and VWG , and construct our discrepancy score D as the ratio D = VBG / VWG . This discrepancy score attains high values for intervention targets of particular interest ( see Figure 1b for a landscape visualization ) . While VBG itself indicates which intervention targets the model is unsettled about , the extension to the proposed variance ratio enables more control over the region of interest . Given a fixed set of graphs G and a fixed interventional sample size across all graphs , let us assume a scenario where multiple intervention targets attain high VBG . Assessing VWG allows us to distinguish between two extreme cases : ( a ) targets whose sample populations exhibit large VWG and ( b ) targets whose sample populations exhibit low VWG . While high VBG in ( a ) might be induced by an insufficient sample size due to high variance in the interventional distribution itself , ( b ) clearly indicates high discrepancy between graphs and should be preferentially studied . Computational Details . We begin by sampling a set of graphs G = { Gi } , i = 1 , 2 , 3 , . . .
from our graph structure belief state γ , however parametrized . This set G remains fixed for all interventions considered in the current experimental round . Then , we fix an intervention target Ik and apply the corresponding intervention to P , resulting in partially altered functional parameters Pk where some conditionals have been changed . Next , we draw interventional samples Sk , i from Pk on every graph Gi ∈ G and set variables in Ik to zero to mask off their contribution to the variance . Having collected all samples over the considered graphs for the specific intervention target Ik , we compute VBGk and VWGk as : VBGk = ∑i ( µki − µk ) 2 and VWGk = ∑i ∑j ( Sk , ij − µki ) 2 , where µk is a vector of the same dimension as any sample in S and denotes the overall sample mean of the interventional setting , µki is the corresponding mean for a specific graph Gi , and Sk , ij is the j-th sample of the i-th graph configuration . Finally , we construct the discrepancy score Dk of Ik as : Dk ← VBGk / VWGk . In contrast to the original definition of the F-score , we can ignore the normalization constants due to equal group sizes and degrees of freedom . Although the dependence between the variables is apparent from the connected causal structure , we approximate the variance of the multidimensional samples as the trace of the covariance matrix by assuming that the variables are independent . An outline of the method is provided in Algorithm 1 . | In this paper, the authors take a further step by introducing active interventions in differentiable neural network-based methods. The intervention target is selected based on maximizing the disagreement between the post-interventional sample distributions under the hypothesis graphs. The experimental results verify the effectiveness of the active strategy compared to random selection. | SP:21fe181d3c9a76a0c2e8dcfc5136398e09e6c663
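The VBGk , VWGk and Dk computations above reduce to a few lines . The sketch below is our own scalar-sample simplification of Algorithm 1 ’ s scoring step ( not the authors ’ code ; the paper sums the same quantities over vector components and over intervention targets ) :

```python
def discrepancy_score(samples_per_graph):
    """D_k = VBG_k / VWG_k for one intervention target.

    samples_per_graph[i] holds the (already masked) interventional samples
    drawn on hypothesis graph G_i; scalar samples here for brevity."""
    graph_means = [sum(s) / len(s) for s in samples_per_graph]
    overall_mean = sum(graph_means) / len(graph_means)
    # Variance between graphs: spread of per-graph sample means.
    vbg = sum((m - overall_mean) ** 2 for m in graph_means)
    # Variance within graphs: spread of samples around their own graph mean.
    vwg = sum((x - m) ** 2
              for s, m in zip(samples_per_graph, graph_means) for x in s)
    return vbg / vwg

# High between-graph, low within-graph variance -> informative target.
d_high = discrepancy_score([[1.0, 1.1], [3.0, 2.9]])
# Hypothesis graphs agree on the distribution -> score near zero.
d_low = discrepancy_score([[1.0, 3.0], [1.1, 2.9]])
```

Selecting argmaxk over all candidate targets then mirrors line 10 of Algorithm 1 .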
Differentiable DAG Sampling | 1 INTRODUCTION . Directed Acyclic Graphs ( DAGs ) are important mathematical objects in many machine learning tasks . For example , a direct application of DAGs is to represent causal relationships in a system of variables . In this case , variables are represented as nodes and causal relationships are represented as directed edges . Hence , DAG learning has found many applications for causal discovery in biology , economics or planning ( Pearl , 1988 ; Ramsey et al. , 2017 ; Sachs et al. , 2005 ; Zhang et al. , 2013 ) . However , DAG learning is a challenging problem for two reasons . First , while DAG learning with data from randomized and controlled experiments is the gold standard for causal discovery , experimental data might be hard or unethical to obtain in practice . Hence , a more common but also challenging setting is DAG learning from observational data , which is possible under proper conditions ( Pearl , 2009 ; Spirtes et al. , 2000 ) . Second , the number of possible DAGs scales super-exponentially with the number of variables , which makes DAG learning an NP-hard problem ( Chickering et al. , 2012 ; Robinson , 1973 ) . A first traditional family of models for DAG learning are discrete score-based approaches . These approaches aim at solving the following discrete optimization problem : max G score ( X , G ) s.t . G ∈ discrete DAGs ( 1 ) where X denotes the observed data and the discrete DAG space is composed of DAGs with unweighted ( present or absent ) edges . Examples of score functions are the Bayesian Information Criterion ( BIC ) ( Chickering & Heckerman , 1997 ) or Minimum Description Length ( MDL ) ( Bouckaert , 1993 ) .
Discrete approaches have two main limitations : ( 1 ) the optimization search space of discrete DAGs is large and constrained which often makes the problem intractable without further assumptions , and ( 2 ) the learning procedure is not differentiable and thus not amenable to gradient-based optimization , as done by most deep learning approaches . To mitigate these issues , a second more recent family of models for DAG learning proposes to leverage continuous optimization by using an augmented Lagrangian formulation ( Lachapelle et al. , 2020 ; Ng et al. , 2019 ; Wehenkel & Louppe , 2021 ; Yu et al. , 2019 ; Zheng et al. , 2018 ) . These approaches aim at solving the following optimization problem : max G score ( X , G ) s.t . h ( G ) = 0 ( 2 ) where h ( G ) is a smooth function over weighted graphs G which is equal to zero when the graph G satisfies the DAG constraints . Standard examples of score functions are ( negative ) cross-entropy ( Yu et al. , 2019 ) or Mean Squared Error ( MSE ) ( Ng et al. , 2019 ) when reconstructing the data . During optimization , G is a weighted graph where the discreteness and acyclicity constraints are relaxed . Hence , these continuous optimization approaches have two main limitations : ( 1 ) the augmented Lagrangian optimization is computationally expensive as it requires multiple complex dual ascent iterations , and ( 2 ) the discrete and acyclicity constraints are relaxed during optimization which does not guarantee valid discrete DAGs without non-differentiable pre- and post-processing as proposed by Causal Additive Model ( CAM ) ( Bühlmann et al. , 2014 ) . For a more comprehensive description , we recall the augmented Lagrangian optimization method in detail in App . A . In this paper , we focus on differentiable DAG learning methods and make the following contributions : • We propose a new probabilistic model over DAGs ( DP-DAG ) which is capable of fast and differentiable sampling . 
DP-DAG can be implemented in a few lines of code using Gumbel-Sinkhorn , Gumbel-Top-k and Gumbel-Softmax distributions to parametrize differentiable sampling over permutations and edges ( see Fig . 1 ) . • We propose a new method for DAG learning from observational data ( VI-DP-DAG ) which instantiates a general probabilistic formulation for DAG learning with DP-DAG and variational inference . VI-DP-DAG guarantees valid DAG outputs at any time during training . • We show in our experiments on established synthetic and real datasets that DP-DAG outperforms other differentiable DAG learning baselines for DAG structure and causal mechanisms learning while training one order of magnitude faster . 2 RELATED WORK . We differentiate between three types of DAG learning approaches : the discrete optimization approaches , the continuous optimization approaches and the sampling-based approaches . We refer to the survey ( Vowels et al. , 2021 ) for a more detailed overview of DAG learning approaches . Discrete optimization . First , to make the search space more tractable , discrete optimization approaches modify the original problem with additional assumptions on DAG treewidth ( Nie et al. , 2014 ; Scanagatta et al. , 2016 ) , ancestral constraints ( Chen et al. , 2016 ) or on the number of parents of each variable ( Viinikka et al. , 2020 ) . Other methods are based on greedy search ( Chickering , 2002 ) or discrete optimization of the topological order ( Park & Klabjan , 2017 ; Scanagatta et al. , 2015 ; Teyssier & Koller , 2005 ) . Another type of discrete optimization approach consists of constraint-based methods . These methods explore the discrete DAG space by performing independence tests between observed variables ( Bühlmann et al. , 2014 ; Spirtes et al. , 2001 ) . Continuous optimization . Second , continuous approaches usually relax the discreteness and acyclicity constraints by using an augmented Lagrangian formulation of the optimization problem ( Lachapelle et al.
, 2020 ; Ng et al. , 2019 ; Wehenkel & Louppe , 2021 ; Yu et al. , 2019 ; Zheng et al. , 2018 ) . Some approaches define the DAG structure from neural network weights ( Lachapelle et al. , 2020 ; Zheng et al. , 2018 ) while other approaches directly learn the DAG adjacency matrix ( Ng et al. , 2019 ; Wehenkel & Louppe , 2021 ; Yu et al. , 2019 ) . In contrast to these methods , VI-DP-DAG learns a probabilistic model over the DAG structure . Further , these approaches penalize DAG constraint violations in the augmented Lagrangian formulation but do not guarantee that the constraints are fulfilled during training . Recently , Yu et al . ( 2021 ) propose to complement the augmented Lagrangian optimization with a second step projecting the learned graph on admissible solutions . Hence , contrary to VI-DP-DAG , most of these approaches use non-differentiable processing steps – e.g . removing cycles and spurious edges – to output valid and high-quality DAGs . Common examples of processing steps are Preliminary Neighbors Selection ( PNS ) and CAM pruning ( Bühlmann et al. , 2014 ) . Sampling-based optimization . Third , other works use DAG sampling to estimate the posterior distribution over DAGs with MCMC ( Kuipers et al. , 2020 ; Niinimäki et al. , 2011 ; 2016 ; Talvitie et al. , 2020 ; Viinikka et al. , 2020 ) . While previous works improve the quality and speed of MCMC computations by sampling ( partial ) orders or making assumptions on the number of parents per node , they are still computationally extremely expensive ( Kuipers et al. , 2020 ; Niinimäki et al. , 2011 ; 2016 ; Viinikka et al. , 2020 ) . E.g. , Viinikka et al . ( 2020 ) recommend running MCMC methods for 12 hours to sample from the posterior distribution over DAGs with 100 nodes . In contrast , VI-DP-DAG approximates the posterior distribution over DAG edges with variational inference and can sample very fast . E.g.
, our VI-DP-DAG trains in around 190 seconds and samples in less than 1 second for a DAG with 100 nodes . Further , while the construction of the MCMC chains is generally non-differentiable , our DP-DAG is capable of fully differentiable DAG learning and can leverage gradient-based optimization . Other works propose optimization of discrete problems using differentiable probabilistic distributions over various discrete objects like subsets or spanning trees , but not on DAG structures ( Grathwohl et al. , 2021 ; Karalias & Loukas , 2020 ; Paulus et al. , 2020 ) . Further , recent works combine differentiable edge sampling with the Gumbel trick and Lagrangian optimization but do not define valid distributions over the full DAG structure ( Brouillard et al. , 2020 ; Ng et al. , 2019 ) . In contrast , DP-DAG does not require complex Lagrangian optimization and guarantees valid DAG solutions at any time during training . Finally , Grosse et al . ( 2021 ) explore an orthogonal direction where the search space in sequential decision making problems is represented with a DAG . 3 PROBABILISTIC MODEL OVER DAGS . A Directed Acyclic Graph ( DAG ) is a graph G = ( V , E ) with n nodes x1 , ... , xn and m directed edges which does not exhibit directed cycles . A DAG always admits a valid permutation ( or linear ordering ) π : [ [ 1 , n ] ] → [ [ 1 , n ] ] of the nodes such that a node cannot have a direct edge toward a node with lower rank i.e. , π ( i ) < π ( j ) implies no directed edge from node xπ ( j ) to node xπ ( i ) . Valid permutations are often not unique . Interestingly , this property has an equivalent matrix formulation : Theorem 1 . Let A ∈ { 0 , 1 } n×n be the ( a priori non-triangular ) adjacency matrix associated with an arbitrary node labelling of the DAG G. The adjacency matrix A always admits a permutation matrix Π ∈ { 0 , 1 } n×n and an upper triangular matrix U ∈ { 0 , 1 } n×n such that A = ΠTUΠ .
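Th . 1 is easy to check numerically . The sketch below is our own illustration ( not from the paper ) : it builds A = ΠTUΠ for a small permutation matrix and a strictly upper triangular U , then certifies acyclicity via nilpotency ( a binary digraph is a DAG iff its adjacency matrix satisfies A^n = 0 ) .

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(row) for row in zip(*A)]

# Permutation matrix for pi = (0 -> 1, 1 -> 2, 2 -> 0), and a strictly
# upper triangular U encoding the complete 3-node "triangle" DAG.
P = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
U = [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
A = matmul(matmul(transpose(P), U), P)  # A = P^T U P

# Nilpotency check: A^3 = 0 means no directed cycles on 3 nodes.
A3 = matmul(matmul(A, A), A)
```

Here A comes out as a relabeled copy of the DAG encoded by U ( edges 1→0 , 1→2 and 2→0 ) , so the decomposition is just a change of node labels along a valid linear ordering .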
The permutation matrix Π directly corresponds to a valid component-wise permutation π . Hence , Th . 1 simply states that the matrix U is the new adjacency matrix where the new node labels are a valid permutation of the original node labels i.e . Aij = Uπ ( i ) π ( j ) such that Uπ ( i ) π ( j ) = 0 if π ( i ) < π ( j ) . The decomposition of the DAG adjacency matrix A in Th . 1 is generally not unique as a DAG G generally admits multiple valid linear permutations Π . Hence , we define the following probabilistic model over DAGs ( DP-DAG ) based on the adjacency matrix decomposition in Th . 1 : P ( A ) = ∑ Π∈P ( G ) , U∈Un P ( U ) P ( Π ) s.t . A = ΠTUΠ ( 3 ) where P ( G ) is the set of valid permutation matrices for the DAG G , Un is the space of binary upper-triangular matrices of size n× n , P ( Π ) is the distribution over permutations and P ( U ) is the distribution over edges consistent with the sampled permutation of the nodes . Note that the number of valid permutations |P ( G ) | can be exponentially large in the number of nodes which makes the exact computation of the probability of a given DAG adjacency matrix P ( A ) intractable for large graphs . However , DAG sampling does not require any enumeration of the valid linear permutations . Indeed , we propose a new differentiable DAG sampling method ( i.e . A ∼ Pφ , ψ ( A ) ) based on differentiable edge sampling ( i.e . U ∼ Pφ ( U ) ) and differentiable permutation sampling ( i.e . Π ∼ Pψ ( Π ) ) . The variables ψ and φ denote the parameters of the edge and permutation distributions . Differentiable edge sampling . The Bernoulli distribution is a well-suited distribution to model randomness over a discrete binary variable like an edge Uij ∈ { 0 , 1 } ∼ Ber ( p ) where p ∈ [ 0 , 1 ] is the probability for the edge to exist . Unfortunately , standard random sampling operations from the Bernoulli distribution are not differentiable . 
In contrast , the Gumbel-Softmax distribution allows for differentiable sampling and approximates the Bernoulli distribution ( Jang et al. , 2017 ) . The Gumbel-Softmax distribution is defined on continuous variables , i.e . Ûij ∈ [ 0 , 1 ] ∼ Gumbel-Softmaxτ ( φ ) with φ ∈ [ 0 , 1 ] , where the temperature parameter τ allows interpolating between a one-hot-encoded categorical distribution ( τ → 0 ) and continuous categorical densities ( τ → +∞ ) . For differentiable sampling , we can use the straight-through estimator ( Bengio et al. , 2013 ) : we use the discrete variable Uij = arg max [ 1− Ûij , Ûij ] in the forward pass , and the continuous approximation Ûij in the backward pass . Thus , sampling all the upper triangular indices of U ∈ { 0 , 1 } n×n has complexity O ( n2 ) . We recall the definition of the Gumbel-Softmax distribution in detail in App . B.1 . Differentiable permutation sampling . Similarly to an edge , a permutation Π is discrete , making differentiable sampling challenging . We describe two alternative methods which allow for differentiable permutation sampling . First , the Gumbel-Sinkhorn distribution ( Mena et al. , 2018 ) is defined on a continuous relaxation of the permutation matrix , i.e . Π̂ ∈ [ 0 , 1 ] n×n ∼ Gumbel-Sinkhornτ ( ψ ) with ψ ∈ [ 0 , 1 ] n×n , where the temperature parameter τ also allows interpolating between discrete and continuous distributions similarly to the Gumbel-Softmax distribution . For differentiable sampling , we can use the straight-through estimator ( Bengio et al. , 2013 ) : we use the discrete permutation Π = Hungarian ( Π̂ ) by applying the Hungarian algorithm ( Munkres , 1957 ) to compute a discrete permutation in the forward pass , and the continuous approximation Π̂ in the backward pass . Sampling a permutation matrix Π ∈ { 0 , 1 } n×n is dominated by the Hungarian algorithm and has a complexity of O ( n3 ) . We recall the definition of the Gumbel-Sinkhorn distribution in detail in App . B.2 .
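For the binary edge case , straight-through Gumbel-Softmax sampling reduces to a logistic reparametrization . The sketch below is our own forward-pass illustration in plain Python ( in practice this is written in an autodiff framework so that gradients flow through the soft value in the backward pass ) :

```python
import math, random

def gumbel(rng):
    """One standard Gumbel(0, 1) draw via inverse transform."""
    return -math.log(-math.log(rng.random()))

def sample_edge(phi, tau, rng):
    """Straight-through Gumbel-Softmax sample of one edge U_ij.

    phi: probability that the edge exists; tau: temperature.
    Returns (hard, soft): the discrete forward value and the continuous
    relaxation that the backward pass would differentiate through."""
    logit = math.log(phi) - math.log(1.0 - phi) + gumbel(rng) - gumbel(rng)
    soft = 1.0 / (1.0 + math.exp(-logit / tau))  # relaxed edge in [0, 1]
    hard = 1.0 if soft > 0.5 else 0.0            # discrete forward value
    return hard, soft

rng = random.Random(0)
draws = [sample_edge(phi=0.9, tau=0.5, rng=rng)[0] for _ in range(1000)]
frequency = sum(draws) / len(draws)  # concentrates near phi = 0.9
```

The hard value follows the intended Bernoulli ( φ ) law regardless of τ ; the temperature only controls how sharp the soft relaxation ( and hence the gradient signal ) is .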
A second method orthogonal to the Gumbel-Sinkhorn method is to use the combination of the Gumbel-Top-k trick ( Kool et al. , 2019 ) and the SoftSort operator ( Prillo & Eisenschlos , 2020 ) , which also defines a distribution on a continuous relaxation of the permutation matrix . For k = n , the Gumbel-Top-n distribution states that the sorted perturbed log-probabilities , i.e . π = Sort ( ψ + G ) , where the parameters ψ are log-probabilities and G ∈ Rn is i.i.d . Gumbel noise , define a distribution over component-wise permutations ( a.k.a . linear orderings without replacement ) . Instead of the Sort operator , we apply the SoftSort operator to the perturbed log-probabilities , which outputs a continuous relaxation of the permutation matrix , i.e . Π̂ = SoftSort ( ψ + G ) ∈ Rn×n . For differentiable sampling , we use the straight-through estimator ( Bengio et al. , 2013 ) : we use the discrete permutation Π = arg max Π̂ by applying the ( one-hot ) argmax operator row-wise ( Prillo & Eisenschlos , 2020 ) in the forward pass , and the continuous approximation Π̂ in the backward pass . Sampling a permutation matrix Π ∈ { 0 , 1 } n×n is dominated by the SoftSort operation and has a complexity of O ( n2 ) . The permutation sampling complexity with Gumbel-Top-k combined with SoftSort is thus lower than the permutation sampling complexity with Gumbel-Sinkhorn . We recall the definition of the Gumbel-Top-k distribution and SoftSort operator in detail in App . B.3 and App . C. Differentiable DAG sampling . Given the aforementioned methods for differentiable edge and permutation sampling , we propose a new simple and valid sampling procedure for DAG sampling which consists of three steps : ( 1 ) Sample a permutation Π from a probabilistic model over permutations Pψ ( Π ) i.e . Π ∼ Pψ ( Π ) . ( 2 ) Sample an upper triangular matrix U by sampling the upper triangular elements from a probabilistic model over edges Pφ ( Uij ) i.e . Uij ∼ Pφ ( Uij ) .
( 3 ) Compute the final adjacency matrix A from the permutation matrix Π and the upper triangular matrix U i.e . A = ΠTUΠ . This procedure is capable of sampling any possible DAG on n nodes due to Th . 1 . In practice , we propose to parametrize the distribution Pψ ( Π ) using the Gumbel-Sinkhorn or the Gumbel-Top-k trick which define valid distributions over permutations , and parametrize the distributions Pφ ( Uij ) using the Gumbel-Softmax trick which defines a valid distribution over edges . Given these parametrizations , the sampling procedure allows fast and differentiable sampling and can be implemented in a few lines of code ( see Fig . 1 ) . The total DAG sampling complexity is dominated by the permutation sampling step which has a complexity of O ( n3 ) using Gumbel-Sinkhorn and O ( n2 ) using Gumbel-Top-k combined with SoftSort . Finally , the DAG sampling procedure of DP-DAG guarantees a valid DAG output without additional pre- or post-processing at any time during training . | This paper presents a differentiable probabilistic model (DP-DAG) over DAGs that retains accuracy over comparable differentiable DAG models. In combination with variational inference for training, VI-DP-DAG demonstrates large gains in computational efficiency at the cost of an intractable exact scoring function (ie computing $P_{\phi,\psi}(A)$). The computational gains come from the factorization of the DAG distribution into the product of orderings and edges, which results in intractable scoring as all valid permutations must be marginalized over in order to score a graph. However, this is not an issue: the pathwise derivative is used during training, requiring only a differentiable sampling procedure, and evaluation does not require scoring. | SP:180a3655362d9a341066290d445f7c7fb48b2585
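Putting the three steps together , here is a minimal forward-pass sketch of the sampling procedure . This is our own simplification ( not the authors ’ code ) : the permutation comes from sorting Gumbel-perturbed log-probabilities as in the Gumbel-Top-n view , and the straight-through edge samples are collapsed to plain Bernoulli draws since only the forward pass is shown .

```python
import math, random

def sample_dag(n, psi, phi, rng):
    """One forward DAG sample A = Pi^T U Pi.

    psi: per-node log-probabilities parametrizing the permutation;
    phi: probability of each upper triangular edge being present."""
    # (1) Permutation: sort Gumbel-perturbed log-probabilities (Gumbel-Top-n).
    scores = [p - math.log(-math.log(rng.random())) for p in psi]
    perm = sorted(range(n), key=lambda i: -scores[i])  # rank -> node
    # (2) + (3) Sample the strictly upper triangular U and apply
    # A = Pi^T U Pi directly: A[perm[i]][perm[j]] = U[i][j] for i < j.
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < phi:
                A[perm[i]][perm[j]] = 1
    return A

def is_acyclic(A):
    """A binary digraph is a DAG iff its adjacency matrix is nilpotent."""
    n, M = len(A), [row[:] for row in A]
    for _ in range(n - 1):
        M = [[sum(M[i][k] * A[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return all(v == 0 for row in M for v in row)

rng = random.Random(0)
A = sample_dag(6, psi=[0.0] * 6, phi=0.5, rng=rng)
```

Because every edge respects the sampled ordering by construction , every sample passes the acyclicity check ; no pruning or projection step is needed afterwards .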
Differentiable DAG Sampling | 1 INTRODUCTION . Directed Acyclic Graphs ( DAGs ) are important mathematical objects in many machine learning tasks . For example , a direct application of DAGs is to represent causal relationships in a system of variables . In this case , variables are represented as nodes and causal relationships are represented as directed edges . Hence , DAG learning has found many applications for causal discovery in biology , economics or planning ( Pearl , 1988 ; Ramsey et al. , 2017 ; Sachs et al. , 2005 ; Zhang et al. , 2013 ) . However , DAG learning is a challenging problem for two reasons . First , while DAG learning with data from randomized and controlled experiments is the gold-standard for causal discovery , experimental data might be hard or unethical to obtain in practice . Hence , a more common but also challenging setting is DAG learning from observational data which is possible under proper conditions ( Pearl , 2009 ; Spirtes et al. , 2000 ) . Second , the number of possible DAGs scales super-exponentially with the number of variables which makes DAG learning an NP hard problem ( Chickering et al. , 2012 ; Robinson , 1973 ) . A first traditional family of models for DAG learning are discrete score-based approaches . These approaches aim at solving the following discrete optimization problem : max G score ( X , G ) s.t . G ∈ discrete DAGs ( 1 ) where X denotes the observed data and the discrete DAGs space is composed of DAGs with unweighted ( present or absent ) edges . Examples of score functions are Bayesian Information Criteria ( BIC ) ( Chickering & Heckerman , 1997 ) or Minimum Description Length ( MDL ) ( Bouckaert , 1993 ) . 
Discrete approaches have two main limitations : ( 1 ) the optimization search space of discrete DAGs is large and constrained which often makes the problem intractable without further assumptions , and ( 2 ) the learning procedure is not differentiable and thus not amenable to gradient-based optimization , as done by most deep learning approaches . To mitigate these issues , a second more recent family of models for DAG learning proposes to leverage continuous optimization by using an augmented Lagrangian formulation ( Lachapelle et al. , 2020 ; Ng et al. , 2019 ; Wehenkel & Louppe , 2021 ; Yu et al. , 2019 ; Zheng et al. , 2018 ) . These approaches aim at solving the following optimization problem : max G score ( X , G ) s.t . h ( G ) = 0 ( 2 ) where h ( G ) is a smooth function over weighted graphs G which is equal to zero when the graph G satisfies the DAG constraints . Standard examples of score functions are ( negative ) cross-entropy ( Yu et al. , 2019 ) or Mean Squared Error ( MSE ) ( Ng et al. , 2019 ) when reconstructing the data . During optimization , G is a weighted graph where the discreteness and acyclicity constraints are relaxed . Hence , these continuous optimization approaches have two main limitations : ( 1 ) the augmented Lagrangian optimization is computationally expensive as it requires multiple complex dual ascent iterations , and ( 2 ) the discrete and acyclicity constraints are relaxed during optimization which does not guarantee valid discrete DAGs without non-differentiable pre- and post-processing as proposed by Causal Additive Model ( CAM ) ( Bühlmann et al. , 2014 ) . For a more comprehensive description , we recall the augmented Lagrangian optimization method in detail in App . A . In this paper , we focus on differentiable DAG learning methods and make the following contributions : • We propose a new probabilistic model over DAGs ( DP-DAG ) which is capable of fast and differentiable sampling . 
DP-DAG can be implemented in few lines of code using GumbelSinkhorn , Gumbel-Top-k and Gumbel-Softmax distributions to parametrize differentiable sampling over permutations and edges ( see Fig . 1 ) . • We propose a new method for DAG learning from observational data ( VI-DP-DAG ) which instantiates a general probabilistic formulation for DAG learning with DP-DAG and variational inference . VI-DP-DAG guarantees valid DAG outputs at any time during training . • We show in our experiments on established synthetic and real datasets that DP-DAG outperforms other differentiable DAG learning baselines for DAG structure and causal mechanisms learning while training one order of magnitude faster . 2 RELATED WORK . We differentiate between three types of DAG learning approaches : the discrete optimization approaches , the continuous optimization approaches and the sampling-based approaches . We refer to the survey ( Vowels et al. , 2021 ) for a more detailed overview of DAG learning approaches . Discrete optimization . First , to make the search space more tractable , discrete optimization approaches modify the original problem with additional assumptions on DAG treewidth ( Nie et al. , 2014 ; Scanagatta et al. , 2016 ) , ancestral constraints ( Chen et al. , 2016 ) or on the number of parents of each variable ( Viinikka et al. , 2020 ) . Other methods are based on greedy search ( Chickering , 2002 ) or discrete optimization of the topological order ( Park & Klabjan , 2017 ; Scanagatta et al. , 2015 ; Teyssier & Koller , 2005 ) . Another type of discrete optimization approaches are constraint-based methods . These methods explore the discrete DAG space by performing independence tests between observed variables ( Bühlmann et al. , 2014 ; Spirtes et al. , 2001 ) . Continuous optimization . Second , continuous approaches usually relax the discreteness and acyclicity constraints by using an augmented Lagrangian formulation of the optimization problem ( Lachapelle et al. 
, 2020 ; Ng et al. , 2019 ; Wehenkel & Louppe , 2021 ; Yu et al. , 2019 ; Zheng et al. , 2018 ) . Some approaches define the DAG structure from neural network weights ( Lachapelle et al. , 2020 ; Zheng et al. , 2018 ) while other approaches directly learn the DAG adjacency matrix ( Ng et al. , 2019 ; Wehenkel & Louppe , 2021 ; Yu et al. , 2019 ) . In contrast to these methods , VI-DP-DAG learns a probabilistic model over the DAG structure . Further , these approaches penalize DAG constraints violation in the augmented Lagrangian formulation but do not guarantee that they are fulfilled during training . Recently , Yu et al . ( 2021 ) propose to complement the augmented Lagrangian optimization with a second step projecting the learned graph on admissible solutions . Hence , contrary to VI-DPDAG , most of these approaches use non-differentiable processing steps – e.g . removing cycles and spurious edges – to output valid and high-quality DAGs . Common examples of processing steps are Preliminary Neighbors Selection ( PNS ) and CAM pruning ( Bühlmann et al. , 2014 ) . Sampling-based optimization . Third , other works use DAG sampling to estimate the posterior distribution over DAGs with MCMC ( Kuipers et al. , 2020 ; Niinimäki et al. , 2011 ; 2016 ; Talvitie et al. , 2020 ; Viinikka et al. , 2020 ) . While previous works improve the quality and speed of MCMC computations by sampling ( partial ) orders or making assumptions on the number of parents per node , they are still computationally extremely expensive ( Kuipers et al. , 2020 ; Niinimäki et al. , 2011 ; 2016 ; Viinikka et al. , 2020 ) . E.g. , Viinikka et al . ( 2020 ) recommend to run MCMC methods during 12 hours to sample from the posterior distribution over DAGs with 100 nodes . In contrast , VI-DP-DAG approximates the posterior distribution over DAG edges with variational inference and can sample very fast . E.g. 
, our VI-DP-DAG trains in around 190 seconds and samples in less than 1 second for a DAG with 100 nodes . Further , while the construction of the MCMC chains are generally non-differentiable , our DP-DAG is capable of fully differentiable DAG learning and can leverage gradient-based optimization . Other works propose optimization of discrete problems using differentiable probabilistic distribution over various discrete objects like subsets or spanning trees but not on DAG structures ( Grathwohl et al. , 2021 ; Karalias & Loukas , 2020 ; Paulus et al. , 2020 ) . Further , recent works combine differentiable edge sampling with Gumbel trick and Lagrangian optimization but do not define valid distributions over the full DAG structure ( Brouillard et al. , 2020 ; Ng et al. , 2019 ) . In contrast , DP-DAG does not require complex Lagrangian optimization and guarantees valid DAGs solutions at any time during training . Finally , Grosse et al . ( 2021 ) explores an orthogonal direction where the search space in sequential decision making problems is represented with a DAG . 3 PROBABILISTIC MODEL OVER DAGS . A Directed Acyclic Graph ( DAG ) is a graph G = ( V , E ) with n nodes x1 , ... , xn and m directed edges which does not exhibit directed cycles . A DAG always admits a valid permutation ( or linear ordering ) π : [ [ 1 , n ] ] → [ [ 1 , n ] ] of the nodes such that a node can not have a direct edge toward a node with lower rank i.e. , π ( i ) < π ( j ) implies no directed edge from node xπ ( j ) to node xπ ( i ) . Valid permutations are often not unique . Interestingly , this property has an equivalent matrix formulation : Theorem 1 . Lets A ∈ { 0 , 1 } n be the ( a priori non-triangular ) adjacency matrix associated with an arbitrary node labelling of the DAG G. The adjacency matrix A always admits a permutation matrix Π ∈ { 0 , 1 } n×n and an upper triangular matrix U ∈ { 0 , 1 } n×n such that A = ΠTUΠ . 
The permutation matrix Π directly corresponds to a valid component-wise permutation π . Hence , Th . 1 simply states that the matrix U is the new adjacency matrix where the new node labels are a valid permutation of the original node labels i.e . A_{ij} = U_{π(i)π(j)} such that U_{π(i)π(j)} = 0 if π ( i ) ≥ π ( j ) . The decomposition of the DAG adjacency matrix A in Th . 1 is generally not unique as a DAG G generally admits multiple valid linear permutations Π . Hence , we define the following probabilistic model over DAGs ( DP-DAG ) based on the adjacency matrix decomposition in Th . 1 : P ( A ) = ∑_{Π ∈ P(G) , U ∈ U_n} P ( U ) P ( Π ) s.t . A = Π^T U Π ( 3 ) where P ( G ) is the set of valid permutation matrices for the DAG G , U_n is the space of binary upper-triangular matrices of size n×n , P ( Π ) is the distribution over permutations and P ( U ) is the distribution over edges consistent with the sampled permutation of the nodes . Note that the number of valid permutations |P ( G ) | can be exponentially large in the number of nodes , which makes the exact computation of the probability of a given DAG adjacency matrix P ( A ) intractable for large graphs . However , DAG sampling does not require any enumeration of the valid linear permutations . Indeed , we propose a new differentiable DAG sampling method ( i.e . A ∼ P_{φ,ψ} ( A ) ) based on differentiable edge sampling ( i.e . U ∼ P_φ ( U ) ) and differentiable permutation sampling ( i.e . Π ∼ P_ψ ( Π ) ) . The variables ψ and φ denote the parameters of the edge and permutation distributions . Differentiable edge sampling . The Bernoulli distribution is a well-suited distribution to model randomness over a discrete binary variable like an edge Uij ∈ { 0 , 1 } ∼ Ber ( p ) where p ∈ [ 0 , 1 ] is the probability for the edge to exist . Unfortunately , standard random sampling operations from the Bernoulli distribution are not differentiable .
In contrast , the Gumbel-Softmax distribution allows for differentiable sampling and approximates the Bernoulli distribution ( Jang et al. , 2017 ) . The Gumbel-Softmax distribution is defined on continuous variables , i.e . Ûij ∈ [ 0 , 1 ] ∼ Gumbel-Softmax_τ ( φ ) with φ ∈ [ 0 , 1 ] , where the temperature parameter τ allows interpolating between a one-hot-encoded categorical distribution ( τ → 0 ) and continuous categorical densities ( τ → +∞ ) . For differentiable sampling , we can use the straight-through estimator ( Bengio et al. , 2013 ) : we use the discrete variable Uij = arg max [ 1− Ûij , Ûij ] in the forward pass , and the continuous approximation Ûij in the backward pass . Thus , sampling all the upper triangular indices of U ∈ { 0 , 1 }^{n×n} has complexity O ( n^2 ) . We recall the definition of the Gumbel-Softmax distribution in detail in App . B.1 . Differentiable permutation sampling . Similarly to an edge , a permutation Π is discrete , making differentiable sampling challenging . We describe two alternative methods which allow for differentiable permutation sampling . First , the Gumbel-Sinkhorn distribution ( Mena et al. , 2018 ) is defined on a continuous relaxation of the permutation matrix , i.e . Π̂ ∈ [ 0 , 1 ]^{n×n} ∼ Gumbel-Sinkhorn_τ ( ψ ) with ψ ∈ [ 0 , 1 ]^{n×n} , where the temperature parameter τ also allows interpolating between discrete and continuous distributions similarly to the Gumbel-Softmax distribution . For differentiable sampling , we can use the straight-through estimator ( Bengio et al. , 2013 ) : we use the discrete permutation Π = Hungarian ( Π̂ ) by applying the Hungarian algorithm ( Munkres , 1957 ) to compute a discrete permutation in the forward pass , and the continuous approximation Π̂ in the backward pass . Sampling a permutation matrix Π ∈ { 0 , 1 }^{n×n} is dominated by the Hungarian algorithm and has a complexity of O ( n^3 ) . We recall the definition of the Gumbel-Sinkhorn distribution in detail in App . B.2 .
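As a concrete illustration of the straight-through edge-sampling step described above, the following self-contained numpy sketch mimics binary Gumbel-Softmax (Concrete) sampling of the strictly upper-triangular entries of U. This is a hand-written approximation for exposition only: in practice an autodiff framework such as PyTorch or JAX would be used so that gradients actually flow through the soft sample, and all names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def st_gumbel_softmax_edge(logit, tau=0.5):
    """Straight-through binary Gumbel-Softmax sample of one edge.
    Returns (hard, soft): the discrete 0/1 edge used in the forward pass and
    the continuous relaxation whose gradient would be used in the backward
    pass (numpy has no autodiff, so gradients are only conceptual here)."""
    u = rng.uniform(1e-9, 1 - 1e-9)
    logistic = np.log(u) - np.log1p(-u)      # difference of two Gumbel samples
    soft = 1.0 / (1.0 + np.exp(-(logit + logistic) / tau))
    hard = float(soft > 0.5)                 # arg max [1 - soft, soft]
    return hard, soft

# sample all strictly upper-triangular entries of U: O(n^2) independent edges
n = 4
edge_logits = rng.normal(size=(n, n))
U = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        U[i, j], _ = st_gumbel_softmax_edge(edge_logits[i, j])
assert np.all((U == 0) | (U == 1)) and np.all(np.tril(U) == 0)
```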
A second method orthogonal to the Gumbel-Sinkhorn method is to use the combination of the Gumbel-Top-k trick ( Kool et al. , 2019 ) and the SoftSort operator ( Prillo & Eisenschlos , 2020 ) which also defines a distribution on a continuous relaxation of the permutation matrix . For k = n , the Gumbel-Top-n distribution states that the sorted perturbed log-probabilities , i.e . π = Sort ( ψ + G ) , where the parameters ψ are log-probabilities and G ∈ R^n is i.i.d . Gumbel noise , define a distribution over component-wise permutations ( a.k.a . linear orderings without replacement ) . Instead of the Sort operator , we apply the SoftSort operator to the perturbed log-probabilities which outputs a continuous relaxation of the permutation matrix , i.e . Π̂ = SoftSort ( ψ + G ) ∈ R^{n×n} . For differentiable sampling , we use the straight-through estimator ( Bengio et al. , 2013 ) : we use the discrete permutation Π = arg max Π̂ by applying the ( one-hot ) argmax operator row-wise ( Prillo & Eisenschlos , 2020 ) in the forward pass , and the continuous approximation Π̂ in the backward pass . Sampling a permutation matrix Π ∈ { 0 , 1 }^{n×n} is dominated by the SoftSort operation and has a complexity of O ( n^2 ) . The permutation sampling complexity with Gumbel-Top-k combined with SoftSort is thus lower than the permutation sampling complexity with Gumbel-Sinkhorn . We recall the definition of the Gumbel-Top-k distribution and SoftSort operator in detail in App . B.3 and App . C. Differentiable DAG sampling . Given the aforementioned methods for differentiable edge and permutation sampling , we propose a new , simple and valid procedure for DAG sampling which consists of three steps : ( 1 ) Sample a permutation Π from a probabilistic model over permutations Pψ ( Π ) i.e . Π ∼ Pψ ( Π ) . ( 2 ) Sample an upper triangular matrix U by sampling the upper triangular elements from a probabilistic model over edges Pφ ( Uij ) i.e . Uij ∼ Pφ ( Uij ) .
( 3 ) Compute the final adjacency matrix A from the permutation matrix Π and the upper triangular matrix U i.e . A = Π^T U Π . This procedure is capable of sampling any possible DAG of n nodes due to Th . 1 . In practice , we propose to parametrize the distribution Pψ ( Π ) using the Gumbel-Sinkhorn or the Gumbel-Top-k trick which define valid distributions over permutations , and parametrize the distributions Pφ ( Uij ) using the Gumbel-Softmax trick which defines a valid distribution over edges . Given these parametrizations , the sampling procedure allows fast and differentiable sampling and can be implemented in a few lines of code ( see Fig . 1 ) . The total DAG sampling complexity is dominated by the permutation sampling step which has a complexity of O ( n^3 ) using Gumbel-Sinkhorn and O ( n^2 ) using Gumbel-Top-k combined with SoftSort . Finally , the DAG sampling procedure of DP-DAG guarantees a valid DAG output without additional pre- or post-processing at any time during training . | The authors proposed an algorithm for sampling DAGs that is suited for continuous optimization. The sampling algorithm has two main steps: in the first step, a causal order over the variables is selected; in the second step, edges are sampled based on the selected order. Moreover, based on this algorithm, they proposed a method to learn the causal structure from observational data. The causal structure learning algorithm is guaranteed to output a DAG at any time and requires no pre- or post-processing, unlike previous work. | SP:180a3655362d9a341066290d445f7c7fb48b2585
Differentiable DAG Sampling | 1 INTRODUCTION . Directed Acyclic Graphs ( DAGs ) are important mathematical objects in many machine learning tasks . For example , a direct application of DAGs is to represent causal relationships in a system of variables . In this case , variables are represented as nodes and causal relationships are represented as directed edges . Hence , DAG learning has found many applications for causal discovery in biology , economics or planning ( Pearl , 1988 ; Ramsey et al. , 2017 ; Sachs et al. , 2005 ; Zhang et al. , 2013 ) . However , DAG learning is a challenging problem for two reasons . First , while DAG learning with data from randomized and controlled experiments is the gold-standard for causal discovery , experimental data might be hard or unethical to obtain in practice . Hence , a more common but also challenging setting is DAG learning from observational data which is possible under proper conditions ( Pearl , 2009 ; Spirtes et al. , 2000 ) . Second , the number of possible DAGs scales super-exponentially with the number of variables , which makes DAG learning an NP-hard problem ( Chickering et al. , 2012 ; Robinson , 1973 ) . A first traditional family of models for DAG learning are discrete score-based approaches . These approaches aim at solving the following discrete optimization problem : max G score ( X , G ) s.t . G ∈ discrete DAGs ( 1 ) where X denotes the observed data and the discrete DAGs space is composed of DAGs with unweighted ( present or absent ) edges . Examples of score functions are the Bayesian Information Criterion ( BIC ) ( Chickering & Heckerman , 1997 ) or the Minimum Description Length ( MDL ) ( Bouckaert , 1993 ) .
Discrete approaches have two main limitations : ( 1 ) the optimization search space of discrete DAGs is large and constrained which often makes the problem intractable without further assumptions , and ( 2 ) the learning procedure is not differentiable and thus not amenable to gradient-based optimization , as done by most deep learning approaches . To mitigate these issues , a second more recent family of models for DAG learning proposes to leverage continuous optimization by using an augmented Lagrangian formulation ( Lachapelle et al. , 2020 ; Ng et al. , 2019 ; Wehenkel & Louppe , 2021 ; Yu et al. , 2019 ; Zheng et al. , 2018 ) . These approaches aim at solving the following optimization problem : max G score ( X , G ) s.t . h ( G ) = 0 ( 2 ) where h ( G ) is a smooth function over weighted graphs G which is equal to zero when the graph G satisfies the DAG constraints . Standard examples of score functions are ( negative ) cross-entropy ( Yu et al. , 2019 ) or Mean Squared Error ( MSE ) ( Ng et al. , 2019 ) when reconstructing the data . During optimization , G is a weighted graph where the discreteness and acyclicity constraints are relaxed . Hence , these continuous optimization approaches have two main limitations : ( 1 ) the augmented Lagrangian optimization is computationally expensive as it requires multiple complex dual ascent iterations , and ( 2 ) the discrete and acyclicity constraints are relaxed during optimization which does not guarantee valid discrete DAGs without non-differentiable pre- and post-processing as proposed by Causal Additive Model ( CAM ) ( Bühlmann et al. , 2014 ) . For a more comprehensive description , we recall the augmented Lagrangian optimization method in detail in App . A . In this paper , we focus on differentiable DAG learning methods and make the following contributions : • We propose a new probabilistic model over DAGs ( DP-DAG ) which is capable of fast and differentiable sampling . 
DP-DAG can be implemented in few lines of code using GumbelSinkhorn , Gumbel-Top-k and Gumbel-Softmax distributions to parametrize differentiable sampling over permutations and edges ( see Fig . 1 ) . • We propose a new method for DAG learning from observational data ( VI-DP-DAG ) which instantiates a general probabilistic formulation for DAG learning with DP-DAG and variational inference . VI-DP-DAG guarantees valid DAG outputs at any time during training . • We show in our experiments on established synthetic and real datasets that DP-DAG outperforms other differentiable DAG learning baselines for DAG structure and causal mechanisms learning while training one order of magnitude faster . 2 RELATED WORK . We differentiate between three types of DAG learning approaches : the discrete optimization approaches , the continuous optimization approaches and the sampling-based approaches . We refer to the survey ( Vowels et al. , 2021 ) for a more detailed overview of DAG learning approaches . Discrete optimization . First , to make the search space more tractable , discrete optimization approaches modify the original problem with additional assumptions on DAG treewidth ( Nie et al. , 2014 ; Scanagatta et al. , 2016 ) , ancestral constraints ( Chen et al. , 2016 ) or on the number of parents of each variable ( Viinikka et al. , 2020 ) . Other methods are based on greedy search ( Chickering , 2002 ) or discrete optimization of the topological order ( Park & Klabjan , 2017 ; Scanagatta et al. , 2015 ; Teyssier & Koller , 2005 ) . Another type of discrete optimization approaches are constraint-based methods . These methods explore the discrete DAG space by performing independence tests between observed variables ( Bühlmann et al. , 2014 ; Spirtes et al. , 2001 ) . Continuous optimization . Second , continuous approaches usually relax the discreteness and acyclicity constraints by using an augmented Lagrangian formulation of the optimization problem ( Lachapelle et al. 
, 2020 ; Ng et al. , 2019 ; Wehenkel & Louppe , 2021 ; Yu et al. , 2019 ; Zheng et al. , 2018 ) . Some approaches define the DAG structure from neural network weights ( Lachapelle et al. , 2020 ; Zheng et al. , 2018 ) while other approaches directly learn the DAG adjacency matrix ( Ng et al. , 2019 ; Wehenkel & Louppe , 2021 ; Yu et al. , 2019 ) . In contrast to these methods , VI-DP-DAG learns a probabilistic model over the DAG structure . Further , these approaches penalize violations of the DAG constraints in the augmented Lagrangian formulation but do not guarantee that the constraints are fulfilled during training . Recently , Yu et al . ( 2021 ) propose to complement the augmented Lagrangian optimization with a second step projecting the learned graph onto admissible solutions . Hence , contrary to VI-DP-DAG , most of these approaches use non-differentiable processing steps – e.g . removing cycles and spurious edges – to output valid and high-quality DAGs . Common examples of processing steps are Preliminary Neighbors Selection ( PNS ) and CAM pruning ( Bühlmann et al. , 2014 ) . Sampling-based optimization . Third , other works use DAG sampling to estimate the posterior distribution over DAGs with MCMC ( Kuipers et al. , 2020 ; Niinimäki et al. , 2011 ; 2016 ; Talvitie et al. , 2020 ; Viinikka et al. , 2020 ) . While previous works improve the quality and speed of MCMC computations by sampling ( partial ) orders or making assumptions on the number of parents per node , they are still computationally extremely expensive ( Kuipers et al. , 2020 ; Niinimäki et al. , 2011 ; 2016 ; Viinikka et al. , 2020 ) . E.g. , Viinikka et al . ( 2020 ) recommend running MCMC methods for 12 hours to sample from the posterior distribution over DAGs with 100 nodes . In contrast , VI-DP-DAG approximates the posterior distribution over DAG edges with variational inference and can sample very fast . E.g.
, our VI-DP-DAG trains in around 190 seconds and samples in less than 1 second for a DAG with 100 nodes . Further , while the construction of the MCMC chains is generally non-differentiable , our DP-DAG is capable of fully differentiable DAG learning and can leverage gradient-based optimization . Other works propose optimization of discrete problems using differentiable probability distributions over various discrete objects like subsets or spanning trees , but not over DAG structures ( Grathwohl et al. , 2021 ; Karalias & Loukas , 2020 ; Paulus et al. , 2020 ) . Further , recent works combine differentiable edge sampling with the Gumbel trick and Lagrangian optimization but do not define valid distributions over the full DAG structure ( Brouillard et al. , 2020 ; Ng et al. , 2019 ) . In contrast , DP-DAG does not require complex Lagrangian optimization and guarantees valid DAG solutions at any time during training . Finally , Grosse et al . ( 2021 ) explore an orthogonal direction where the search space in sequential decision making problems is represented with a DAG . 3 PROBABILISTIC MODEL OVER DAGS . A Directed Acyclic Graph ( DAG ) is a graph G = ( V , E ) with n nodes x1 , ... , xn and m directed edges which does not exhibit directed cycles . A DAG always admits a valid permutation ( or linear ordering ) π : [ [ 1 , n ] ] → [ [ 1 , n ] ] of the nodes such that a node cannot have a direct edge toward a node with lower rank i.e. , π ( i ) < π ( j ) implies no directed edge from node xπ ( j ) to node xπ ( i ) . Valid permutations are often not unique . Interestingly , this property has an equivalent matrix formulation : Theorem 1 . Let A ∈ { 0 , 1 }^{n×n} be the ( a priori non-triangular ) adjacency matrix associated with an arbitrary node labelling of the DAG G. The adjacency matrix A always admits a permutation matrix Π ∈ { 0 , 1 }^{n×n} and an upper triangular matrix U ∈ { 0 , 1 }^{n×n} such that A = Π^T U Π .
The permutation matrix Π directly corresponds to a valid component-wise permutation π . Hence , Th . 1 simply states that the matrix U is the new adjacency matrix where the new node labels are a valid permutation of the original node labels i.e . A_{ij} = U_{π(i)π(j)} such that U_{π(i)π(j)} = 0 if π ( i ) ≥ π ( j ) . The decomposition of the DAG adjacency matrix A in Th . 1 is generally not unique as a DAG G generally admits multiple valid linear permutations Π . Hence , we define the following probabilistic model over DAGs ( DP-DAG ) based on the adjacency matrix decomposition in Th . 1 : P ( A ) = ∑_{Π ∈ P(G) , U ∈ U_n} P ( U ) P ( Π ) s.t . A = Π^T U Π ( 3 ) where P ( G ) is the set of valid permutation matrices for the DAG G , U_n is the space of binary upper-triangular matrices of size n×n , P ( Π ) is the distribution over permutations and P ( U ) is the distribution over edges consistent with the sampled permutation of the nodes . Note that the number of valid permutations |P ( G ) | can be exponentially large in the number of nodes , which makes the exact computation of the probability of a given DAG adjacency matrix P ( A ) intractable for large graphs . However , DAG sampling does not require any enumeration of the valid linear permutations . Indeed , we propose a new differentiable DAG sampling method ( i.e . A ∼ P_{φ,ψ} ( A ) ) based on differentiable edge sampling ( i.e . U ∼ P_φ ( U ) ) and differentiable permutation sampling ( i.e . Π ∼ P_ψ ( Π ) ) . The variables ψ and φ denote the parameters of the edge and permutation distributions . Differentiable edge sampling . The Bernoulli distribution is a well-suited distribution to model randomness over a discrete binary variable like an edge Uij ∈ { 0 , 1 } ∼ Ber ( p ) where p ∈ [ 0 , 1 ] is the probability for the edge to exist . Unfortunately , standard random sampling operations from the Bernoulli distribution are not differentiable .
In contrast , the Gumbel-Softmax distribution allows for differentiable sampling and approximates the Bernoulli distribution ( Jang et al. , 2017 ) . The Gumbel-Softmax distribution is defined on continuous variables , i.e . Ûij ∈ [ 0 , 1 ] ∼ Gumbel-Softmax_τ ( φ ) with φ ∈ [ 0 , 1 ] , where the temperature parameter τ allows interpolating between a one-hot-encoded categorical distribution ( τ → 0 ) and continuous categorical densities ( τ → +∞ ) . For differentiable sampling , we can use the straight-through estimator ( Bengio et al. , 2013 ) : we use the discrete variable Uij = arg max [ 1− Ûij , Ûij ] in the forward pass , and the continuous approximation Ûij in the backward pass . Thus , sampling all the upper triangular indices of U ∈ { 0 , 1 }^{n×n} has complexity O ( n^2 ) . We recall the definition of the Gumbel-Softmax distribution in detail in App . B.1 . Differentiable permutation sampling . Similarly to an edge , a permutation Π is discrete , making differentiable sampling challenging . We describe two alternative methods which allow for differentiable permutation sampling . First , the Gumbel-Sinkhorn distribution ( Mena et al. , 2018 ) is defined on a continuous relaxation of the permutation matrix , i.e . Π̂ ∈ [ 0 , 1 ]^{n×n} ∼ Gumbel-Sinkhorn_τ ( ψ ) with ψ ∈ [ 0 , 1 ]^{n×n} , where the temperature parameter τ also allows interpolating between discrete and continuous distributions similarly to the Gumbel-Softmax distribution . For differentiable sampling , we can use the straight-through estimator ( Bengio et al. , 2013 ) : we use the discrete permutation Π = Hungarian ( Π̂ ) by applying the Hungarian algorithm ( Munkres , 1957 ) to compute a discrete permutation in the forward pass , and the continuous approximation Π̂ in the backward pass . Sampling a permutation matrix Π ∈ { 0 , 1 }^{n×n} is dominated by the Hungarian algorithm and has a complexity of O ( n^3 ) . We recall the definition of the Gumbel-Sinkhorn distribution in detail in App . B.2 .
A second method orthogonal to the Gumbel-Sinkhorn method is to use the combination of the Gumbel-Top-k trick ( Kool et al. , 2019 ) and the SoftSort operator ( Prillo & Eisenschlos , 2020 ) which also defines a distribution on a continuous relaxation of the permutation matrix . For k = n , the Gumbel-Top-n distribution states that the sorted perturbed log-probabilities , i.e . π = Sort ( ψ + G ) , where the parameters ψ are log-probabilities and G ∈ R^n is i.i.d . Gumbel noise , define a distribution over component-wise permutations ( a.k.a . linear orderings without replacement ) . Instead of the Sort operator , we apply the SoftSort operator to the perturbed log-probabilities which outputs a continuous relaxation of the permutation matrix , i.e . Π̂ = SoftSort ( ψ + G ) ∈ R^{n×n} . For differentiable sampling , we use the straight-through estimator ( Bengio et al. , 2013 ) : we use the discrete permutation Π = arg max Π̂ by applying the ( one-hot ) argmax operator row-wise ( Prillo & Eisenschlos , 2020 ) in the forward pass , and the continuous approximation Π̂ in the backward pass . Sampling a permutation matrix Π ∈ { 0 , 1 }^{n×n} is dominated by the SoftSort operation and has a complexity of O ( n^2 ) . The permutation sampling complexity with Gumbel-Top-k combined with SoftSort is thus lower than the permutation sampling complexity with Gumbel-Sinkhorn . We recall the definition of the Gumbel-Top-k distribution and SoftSort operator in detail in App . B.3 and App . C. Differentiable DAG sampling . Given the aforementioned methods for differentiable edge and permutation sampling , we propose a new , simple and valid procedure for DAG sampling which consists of three steps : ( 1 ) Sample a permutation Π from a probabilistic model over permutations Pψ ( Π ) i.e . Π ∼ Pψ ( Π ) . ( 2 ) Sample an upper triangular matrix U by sampling the upper triangular elements from a probabilistic model over edges Pφ ( Uij ) i.e . Uij ∼ Pφ ( Uij ) .
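The Gumbel-Top-n plus SoftSort construction can itself be sketched in a few lines of numpy. This is an illustrative re-implementation under our own naming, not the paper's code; in practice the soft matrix would live in an autodiff framework so that the straight-through gradient can actually be used.

```python
import numpy as np

rng = np.random.default_rng(0)

def softsort(scores, tau=0.1):
    """SoftSort (Prillo & Eisenschlos, 2020): row-stochastic relaxation of
    the permutation matrix that sorts `scores` in decreasing order."""
    s = np.sort(scores)[::-1]
    logits = -np.abs(s[:, None] - scores[None, :]) / tau
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)      # n x n, rows sum to 1

def sample_permutation(log_probs, tau=0.1):
    """Gumbel-Top-n: perturb log-probabilities with i.i.d. Gumbel noise and
    relax the resulting sort with SoftSort. The straight-through estimator
    uses the row-wise one-hot arg max in the forward pass."""
    soft = softsort(log_probs + rng.gumbel(size=log_probs.shape), tau)
    hard = np.zeros_like(soft)
    hard[np.arange(len(soft)), soft.argmax(axis=1)] = 1
    return hard, soft

hard, soft = sample_permutation(np.zeros(5))
assert np.array_equal(hard.sum(axis=1), np.ones(5))  # one choice per row
assert np.array_equal(hard.sum(axis=0), np.ones(5))  # each column used once
```

The final assertions hold because the perturbed scores are distinct with probability one, so each row's arg max lands on a different column and the hard matrix is a valid permutation; the whole construction is O(n^2), matching the complexity argument above.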
( 3 ) Compute the final adjacency matrix A from the permutation matrix Π and the upper triangular matrix U i.e . A = Π^T U Π . This procedure is capable of sampling any possible DAG of n nodes due to Th . 1 . In practice , we propose to parametrize the distribution Pψ ( Π ) using the Gumbel-Sinkhorn or the Gumbel-Top-k trick which define valid distributions over permutations , and parametrize the distributions Pφ ( Uij ) using the Gumbel-Softmax trick which defines a valid distribution over edges . Given these parametrizations , the sampling procedure allows fast and differentiable sampling and can be implemented in a few lines of code ( see Fig . 1 ) . The total DAG sampling complexity is dominated by the permutation sampling step which has a complexity of O ( n^3 ) using Gumbel-Sinkhorn and O ( n^2 ) using Gumbel-Top-k combined with SoftSort . Finally , the DAG sampling procedure of DP-DAG guarantees a valid DAG output without additional pre- or post-processing at any time during training . | Using the recently developed differentiable sampling techniques for discrete variables (e.g. Gumbel-Softmax) and variable orderings (e.g. Gumbel-Sinkhorn and Gumbel-Top-k), this paper proposes a differentiable DAG sampling strategy, and applies it to the problem of DAG learning. | SP:180a3655362d9a341066290d445f7c7fb48b2585
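Putting the three steps together, a minimal hard-sample-only version of the procedure might look as follows. The paper's actual implementation (Fig. 1) uses the differentiable relaxations discussed above; this numpy sketch with hypothetical names only illustrates why the output is guaranteed to be acyclic.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dag(perm_log_probs, edge_prob):
    """Hard-sample sketch of the three-step DP-DAG sampling procedure."""
    n = len(perm_log_probs)
    # (1) permutation: sort Gumbel-perturbed log-probabilities
    order = np.argsort(-(perm_log_probs + rng.gumbel(size=n)))
    P = np.zeros((n, n), dtype=int)
    P[np.arange(n), order] = 1
    # (2) strictly upper-triangular edge matrix
    U = np.triu((rng.random((n, n)) < edge_prob).astype(int), k=1)
    # (3) combine: A = P^T U P is acyclic by construction
    return P.T @ U @ P

A = sample_dag(np.zeros(6), edge_prob=0.5)
# a strictly upper-triangular U is nilpotent, and conjugation by a
# permutation preserves nilpotency, so A^n = 0, i.e. A has no directed cycles
assert np.array_equal(np.linalg.matrix_power(A, 6), np.zeros((6, 6), dtype=int))
```

Every sample is a valid DAG regardless of the parameter values, which is the property the text contrasts with augmented-Lagrangian methods that only penalize constraint violations.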
Conditional Object-Centric Learning from Video | 1 INTRODUCTION . Humans understand the world in terms of separate objects ( Kahneman et al. , 1992 ; Spelke & Kinzler , 2007 ) , which serve as compositional building blocks that can be processed independently and recombined . Such a compositional model of the world forms the foundation for high-level cognitive abilities such as language , causal reasoning , mathematics , planning , etc . and is crucial for generalizing in predictable and systematic ways . Object-centric representations have the potential to greatly improve sample efficiency , robustness , generalization to new tasks , and interpretability of machine learning algorithms ( Greff et al. , 2020 ) . In this work , we focus on the aspect of modeling motion of objects from video , because of its synergistic relationship with object-centric representations : On the one hand , objects support learning an efficient dynamics model by factorizing the scene into approximately independent parts with only sparse interactions . Conversely , motion provides a powerful cue for which inputs should be grouped together , and is thus an important tool for learning about objects . Unsupervised multi-object representation learning has recently made significant progress both on images ( e.g . Burgess et al. , 2019 ; Greff et al. , 2019 ; Lin et al. , 2020 ; Locatello et al. , 2020 ) and on video ( e.g . Veerapaneni et al. , 2020 ; Weis et al. , 2020 ; Jiang et al. , 2020 ) . By incorporating objectcentric inductive biases , these methods learn to segment and represent objects from the statistical structure of the data alone without the need for supervision . Despite promising results these methods are currently limited by two important problems : Firstly , they are restricted to toy data like moving 2D sprites or very simple 3D scenes and generally fail at more realistic data with complex textures ( Greff et al. , 2019 ; Harley et al. , 2021 ; Karazija et al. , 2021 ) . 
And secondly , it is not entirely clear how to interface with these models both during training and inference . The notion of an object is ambiguous and task-dependent , and the segmentation learned by these models does not necessarily align with the tasks of interest . The model might , for example , over-segment a desired object into distinct parts , or alternatively fail to segment into the desired parts . Ideally , we would like to be able to provide the model with hints as to the desired level of granularity during training , and flexibly query the model during inference to request that the model detect and track a particular object or a part thereof . In this paper we introduce a sequential extension of Slot Attention ( Locatello et al. , 2020 ) that we call Slot Attention for Video ( SAVi ) to tackle the problem of unsupervised / weakly-supervised multi-object segmentation and tracking in video data . We demonstrate that 1 ) using optical flow prediction as a self-supervised objective and 2 ) providing a small set of abstract hints such as the center of mass position for objects as conditional inputs in the first frame suffices to direct the decomposition process in complex video scenes without otherwise requiring any priors on the size of objects or on the information content of their representations . We show successful segmentation and tracking for synthetic video data with significantly higher realism and visual complexity than the datasets used in prior work on unsupervised object representation learning . These results are robust with respect to noise on both the optical flow signal and the conditioning , and generalize almost perfectly to longer sequences , novel objects , and novel backgrounds . 2 SLOT ATTENTION FOR VIDEO ( SAVI ) . We introduce a sequential extension of the Slot Attention ( Locatello et al. , 2020 ) architecture to video data , which we call SAVi .
Inspired by predictor-corrector methods for integration of ordinary differential equations , SAVi performs two steps for each observed video frame : a prediction and a correction step . The correction step uses Slot Attention to update ( or correct ) the set of slot representations based on slot-normalized cross-attention with the inputs . The prediction step uses self-attention among the slots to allow for modeling of temporal dynamics and object interactions . The output of the predictor is then used to initialize the corrector at the next time step , thus allowing the model to consistently track objects over time . Importantly , both of these steps are permutation equivariant and thus are able to preserve slot symmetry . See Figure 1 for a schematic overview of the SAVi architecture . Encoder For each time-step t ∈ { 1 . . . T } the corresponding video frame xt is first passed through a small convolutional neural network ( CNN ) encoder ( here , a stack of five convolutional layers with ReLUs ) , where we concatenate a linear positional encoding at the second-to-last layer . The resulting grid of visual features is flattened into a set of vectors ht = fenc ( xt ) ∈ RN×Denc , where N is the size of the flattened grid ( i.e. , width * height ) and Denc is the dimensionality of the CNN feature maps . Afterwards , each vector is independently passed through a multi-layer perceptron ( MLP ) . Slot Initialization SAVi maintains K slots , each of which can represent a part of the input such as objects or parts thereof . We denote the set of slot representations at time t as St = [ s1t , . . . , sKt ] ∈ RK×D , where we use the calligraphic font to indicate that any operation on these sets is equivariant ( or invariant ) w.r.t . permutation of their elements . In other words , the ordering of the slots carries no information and they can be freely permuted ( in a consistent manner across all time steps ) without changing the model output . 
We consider two types of initializers : conditional and unconditional . In the conditional case , we encode the conditional input either via a simple MLP ( in the case of bounding boxes or center of mass coordinates ) or via a CNN ( in the case of segmentation masks ) . For slots for which there is no conditioning information available ( e.g . if K is larger than the number of objects ) , we set the conditional input to a fixed value ( e.g. , ‘ −1 ’ for bounding box coordinates ) . The encoded conditional input forms the initial slot representation for SAVi . In the unconditional case , we initialize slots either by sampling from a Gaussian distribution independently for each video ( both at training and at test time ) , or by learning a set of initial slot vectors . Corrector The task of the corrector is to update the slot representations based on the visual features from the encoder . In SAVi this is done using the iterative attention mechanism introduced in Slot Attention ( Locatello et al. , 2020 ) . Different from a regular cross-attention mechanism ( e.g . Vaswani et al. , 2017 ) which is normalized over the inputs , Slot Attention encourages decomposition of the input into multiple slots via softmax-normalization over the output ( i.e . the slots ) , which makes it an appealing choice for our video decomposition architecture . When using a single iteration of Slot Attention , the corrector update takes the following form : U_t = (1/Z_t) ⊙ ∑_{n=1}^{N} A_{t,n} v ( h_{t,n} ) ∈ R^{K×D} , A_t = softmax_K ( (1/√D) k ( h_t ) · q ( S_t )^T ) ∈ R^{N×K} , ( 1 ) where Z_t = ∑_{n=1}^{N} A_{t,n} and ⊙ denotes the Hadamard product . k , q , v are learned linear projections that map to a common dimension D. We apply LayerNorm ( Ba et al. , 2016 ) before each projection . The slot representations are then individually updated using Gated Recurrent Units ( Cho et al. , 2014 ) as ŝ_t^k = GRU ( u_t^k , s_t^k ) .
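For illustration, one corrector step of Eq. 1 (omitting LayerNorm and the GRU update) can be sketched in numpy as follows; the names, shapes, and random weights are ours, not the authors', and the key point is that the softmax runs over the slot axis so slots compete for inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention_step(slots, inputs, Wq, Wk, Wv):
    """One corrector step of Eq. 1 (LayerNorm and the GRU update omitted)."""
    K, D = slots.shape
    q, k, v = slots @ Wq, inputs @ Wk, inputs @ Wv
    # softmax over the SLOT axis (size K), not over the N inputs
    attn = softmax(k @ q.T / np.sqrt(D), axis=1)   # N x K attention A_t
    Z = attn.sum(axis=0)                            # per-slot normalizer Z_t
    return (attn / Z).T @ v                         # K x D slot updates U_t

N, K, D = 16, 4, 8
inputs = rng.normal(size=(N, D))                    # flattened CNN features h_t
slots = rng.normal(size=(K, D))                     # slot queries S_t
Wq, Wk, Wv = (rng.normal(size=(D, D)) for _ in range(3))
updates = slot_attention_step(slots, inputs, Wq, Wk, Wv)
assert updates.shape == (K, D)
```

Each row of `updates` is a weighted mean of the projected inputs claimed by that slot, which is then fed to the per-slot GRU in the full model.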
Alternatively , the attention step ( followed by the GRU update ) can be iterated multiple times with shared parameters per frame of the input video . For added expressiveness , we apply an MLP with residual connection ŝ_t^k ← ŝ_t^k + MLP ( LN ( ŝ_t^k ) ) and LayerNorm ( LN ) after the GRU when using multiple Slot Attention iterations , following Locatello et al . ( 2020 ) . Predictor The predictor takes the role of a transition function to model temporal dynamics , including interactions between slots . To preserve permutation equivariance , we use a Transformer encoder ( Vaswani et al. , 2017 ) . It allows for modeling of independent object dynamics as well as information exchange between slots via self-attention , while being more memory efficient than GNN-based models such as the Interaction Network ( Battaglia et al. , 2016 ) . Slots are updated as follows : S_{t+1} = LN ( MLP ( S̃_t ) + S̃_t ) , S̃_t = LN ( MultiHeadSelfAttn ( Ŝ_t ) + Ŝ_t ) . ( 2 ) For MultiHeadSelfAttn we use the default multi-head dot-product attention mechanism from Vaswani et al . ( 2017 ) . We apply LayerNorm ( LN ) after each residual connection . Decoder The network output should be permutation equivariant ( for per-slot outputs ) or invariant ( for global outputs ) with respect to the slots . Slots can be read out either after application of the corrector or after the predictor ( transition model ) . We decode slot representations after application of the corrector using a slot-wise Spatial Broadcast Decoder ( Watters et al. , 2019 ) to produce per-slot RGB predictions of the optical flow ( or reconstructed frame ) and an alpha mask . The alpha mask is normalized across slots via a softmax and used to perform a weighted sum over the slot-wise RGB reconstruction to arrive at a combined reconstructed frame : y_t = ∑_{k=1}^{K} m_t^k ⊙ y_t^k , m_t = softmax_K ( m̂_t ) , ( m̂_t^k , y_t^k ) = f_dec ( ŝ_t^k ) .
( 3 ) Training Our sole prediction target is optical flow for each individual video frame , which we represent as RGB images using the default conversion in the literature ( Sun et al. , 2018 ) . Alternatively , our framework also supports prediction of other image-shaped targets , such as reconstruction of the original input frame . We minimize the pixel-wise squared reconstruction error ( averaged over the batch ) , summed over both the temporal and spatial dimensions : Lrec = T∑ t=1 ‖yt − ytruet ‖2 . ( 4 ) 3 RELATED WORK . Object-centric representation learning There is a rich literature on learning object representations from static scenes ( Greff et al. , 2016 ; Eslami et al. , 2016 ; Greff et al. , 2017 ; 2019 ; Burgess et al. , 2019 ; Engelcke et al. , 2020 ; Crawford & Pineau , 2019 ; Lin et al. , 2020 ; Locatello et al. , 2020 ) or videos ( van Steenkiste et al. , 2018 ; Kosiorek et al. , 2018 ; Stelzner et al. , 2019 ; Kipf et al. , 2020 ; Crawford & Pineau , 2020 ; Creswell et al. , 2021 ) without explicit supervision . PSGNet ( Bear et al. , 2020 ) learns to decompose static images or individual frames from a video into hierarchical scene graphs using motion information estimated from neighboring video frames . Most closely related to our work are sequential object-centric models for videos and dynamic environments , such as OP3 ( Veerapaneni et al. , 2020 ) , R-SQAIR ( Stanić & Schmidhuber , 2019 ) , ViMON ( Weis et al. , 2020 ) , and SCALOR ( Jiang et al. , 2020 ) , which learn an internal motion model for each object . SIMONe ( Kabra et al. , 2021 ) auto-encodes an entire video in parallel and learns temporally-abstracted representations of objects . OP3 ( Veerapaneni et al. , 2020 ) uses the same decoder as SAVi and a related dynamics model , but a less efficient inference process compared to Slot Attention . 
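Returning to the decoder and training sections above: the alpha compositing of Eq. (3) and the reconstruction objective of Eq. (4) can be sketched as follows. The tensor shapes, the single-example batch, and all variable names are illustrative assumptions rather than the actual implementation:

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def composite_frames(rgb, alpha_logits):
    """Eq. (3): combine per-slot decoder outputs into one frame per step.

    rgb:          (T, K, H, W, 3) per-slot RGB predictions y_t^k.
    alpha_logits: (T, K, H, W, 1) unnormalized alpha masks.
    """
    m = softmax(alpha_logits, axis=1)  # normalize masks across the K slots
    return (m * rgb).sum(axis=1)       # (T, H, W, 3) weighted sum over slots

def reconstruction_loss(pred, target):
    """Eq. (4): squared error summed over time and space (one example)."""
    return ((pred - target) ** 2).sum()

rng = np.random.default_rng(0)
rgb = rng.uniform(size=(2, 3, 4, 4, 3))    # T=2 steps, K=3 slots, 4x4 frames
alpha = rng.normal(size=(2, 3, 4, 4, 1))
y = composite_frames(rgb, alpha)
assert y.shape == (2, 4, 4, 3)
# The masks form a convex combination, so the composite stays inside the
# range of the per-slot predictions.
assert y.min() >= rgb.min() - 1e-9 and y.max() <= rgb.max() + 1e-9
```

Because the masks sum to one at every pixel, each output pixel is a convex combination of the corresponding per-slot predictions.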
In an attempt to bridge the gap to visually richer and more realistic environments, recent works in object-centric representation learning have explored the integration of inductive biases related to 3D scene geometry, both for static scenes (Chen et al., 2020; Stelzner et al., 2021) and for videos (Du et al., 2021; Henderson & Lampert, 2020; Harley et al., 2021). This is largely orthogonal to our approach of utilizing conditioning and optical flow. A recent related method, FlowCaps (Sabour et al., 2020), similarly proposed to use optical flow in a multi-object model. FlowCaps uses capsules (Sabour et al., 2017; Hinton et al., 2018) instead of a slot-based representation and assumes specialization of individual capsules to objects or parts of a certain appearance, making it unsuitable for environments that contain a large variety of object types. Using a slot-based, exchangeable representation of objects allows SAVi to represent a diverse range of objects and generalize to novel objects at test time. We discuss further recent related work on attention-based modular architectures (Santoro et al., 2018; Goyal et al., 2021a;b;c), object-centric models for dynamic visual reasoning (Yi et al., 2020; Ding et al., 2021a; Bar et al., 2021), and supervised attention-based object-centric models (Fuchs et al., 2019; Carion et al., 2020; Kamath et al., 2021; Meinhardt et al., 2021) in Appendix A.1.

Video object segmentation and tracking The conditional tasks we consider in our work are closely related to the computer vision task of semi-supervised video object segmentation (VOS), where segmentation masks are provided for the first video frame during evaluation. Different from the typical setting, which is addressed by supervised learning on fully annotated videos or related datasets (e.g., Caelles et al., 2017; Luiten et al.
, 2018), we consider the problem where models do not have access to any supervised information beyond the conditioning information on the first frame (e.g., a bounding box for each object). Several recent works have explored pre-training using self-supervision (Li et al., 2019; Jabri et al., 2020; Caron et al., 2021) or image classification (Zhang et al., 2020) for the semi-supervised VOS task. These models rely on having access to segmentation masks in the first frame at evaluation time. We demonstrate that multi-object segmentation and tracking can emerge even when segmentation labels are absent at both training and test time. There is a rich literature in the computer vision community on using motion cues to segment objects. These methods use motion information at test time to, for example, cluster trajectories in order to segment independently moving objects (Faktor & Irani, 2014) or estimate multiple fundamental matrices between two views (Isack & Boykov, 2012), to name a few. Closest to our work is a contemporary method (Yang et al., 2021) that trains a Slot Attention model on isolated optical flow data for foreground-background segmentation of a single object, independently for individual video frames and without using visual observations. Our method, on the other hand, supports multi-object environments and relies on motion information only as a training signal, but otherwise operates directly on textured visual information, which allows it to segment static scenes at test time where optical flow is unavailable and to consistently represent and track multiple objects throughout a video. | This paper learns object-centric representations for videos by extending the previous static Slot Attention framework with two new considerations: 1. optical flow for temporal modeling and 2. simple object location cues for better segmentation and tracking. Experiments are conducted on CATER, MOVi and MOVi++.
It shows that SAVi performs better than its baselines, especially when using the objects' cues during training. | SP:7b54b54cdc027a76311af86d0a9ea3e8524eb884 |
Conditional Object-Centric Learning from Video | 1 INTRODUCTION. Humans understand the world in terms of separate objects (Kahneman et al., 1992; Spelke & Kinzler, 2007), which serve as compositional building blocks that can be processed independently and recombined. Such a compositional model of the world forms the foundation for high-level cognitive abilities such as language, causal reasoning, mathematics, and planning, and is crucial for generalizing in predictable and systematic ways. Object-centric representations have the potential to greatly improve sample efficiency, robustness, generalization to new tasks, and interpretability of machine learning algorithms (Greff et al., 2020). In this work, we focus on modeling the motion of objects from video because of its synergistic relationship with object-centric representations: on the one hand, objects support learning an efficient dynamics model by factorizing the scene into approximately independent parts with only sparse interactions. Conversely, motion provides a powerful cue for which inputs should be grouped together, and is thus an important tool for learning about objects. Unsupervised multi-object representation learning has recently made significant progress both on images (e.g., Burgess et al., 2019; Greff et al., 2019; Lin et al., 2020; Locatello et al., 2020) and on video (e.g., Veerapaneni et al., 2020; Weis et al., 2020; Jiang et al., 2020). By incorporating object-centric inductive biases, these methods learn to segment and represent objects from the statistical structure of the data alone, without the need for supervision. Despite promising results, these methods are currently limited by two important problems: firstly, they are restricted to toy data like moving 2D sprites or very simple 3D scenes and generally fail on more realistic data with complex textures (Greff et al., 2019; Harley et al., 2021; Karazija et al., 2021).
And secondly, it is not entirely clear how to interface with these models, both during training and at inference. The notion of an object is ambiguous and task-dependent, and the segmentation learned by these models does not necessarily align with the tasks of interest. The model might, for example, over-segment a desired object into distinct parts, or alternatively fail to segment the scene into the desired parts. Ideally, we would like to be able to provide the model with hints as to the desired level of granularity during training, and flexibly query the model during inference to request that it detect and track a particular object or a part thereof. In this paper we introduce a sequential extension of Slot Attention (Locatello et al., 2020), which we call Slot Attention for Video (SAVi), to tackle the problem of unsupervised / weakly-supervised multi-object segmentation and tracking in video data. We demonstrate that 1) using optical flow prediction as a self-supervised objective and 2) providing a small set of abstract hints, such as the center-of-mass position of objects, as conditional inputs in the first frame suffices to direct the decomposition process in complex video scenes, without otherwise requiring any priors on the size of objects or on the information content of their representations. We show successful segmentation and tracking for synthetic video data with significantly higher realism and visual complexity than the datasets used in prior work on unsupervised object representation learning. These results are robust with respect to noise on both the optical flow signal and the conditioning, and they generalize almost perfectly to longer sequences, novel objects, and novel backgrounds.

2 SLOT ATTENTION FOR VIDEO (SAVI)

We introduce a sequential extension of the Slot Attention (Locatello et al., 2020) architecture to video data, which we call SAVi.
Inspired by predictor-corrector methods for the integration of ordinary differential equations, SAVi performs two steps for each observed video frame: a prediction and a correction step. The correction step uses Slot Attention to update (or correct) the set of slot representations based on slot-normalized cross-attention with the inputs. The prediction step uses self-attention among the slots to allow for modeling of temporal dynamics and object interactions. The output of the predictor is then used to initialize the corrector at the next time step, thus allowing the model to consistently track objects over time. Importantly, both of these steps are permutation equivariant and thus are able to preserve slot symmetry. See Figure 1 for a schematic overview of the SAVi architecture.

Encoder For each time step t \in \{1, \ldots, T\}, the corresponding video frame x_t is first passed through a small convolutional neural network (CNN) encoder (here, a stack of five convolutional layers with ReLUs), where we concatenate a linear positional encoding at the second-to-last layer. The resulting grid of visual features is flattened into a set of vectors h_t = f_{\mathrm{enc}}(x_t) \in \mathbb{R}^{N \times D_{\mathrm{enc}}}, where N is the size of the flattened grid (i.e., width × height) and D_{\mathrm{enc}} is the dimensionality of the CNN feature maps. Afterwards, each vector is independently passed through a multi-layer perceptron (MLP).

Slot Initialization SAVi maintains K slots, each of which can represent a part of the input such as an object or a part thereof. We denote the set of slot representations at time t as S_t = [s_t^1, \ldots, s_t^K] \in \mathbb{R}^{K \times D}, where we use the calligraphic font to indicate that any operation on these sets is equivariant (or invariant) w.r.t. permutation of their elements. In other words, the ordering of the slots carries no information and they can be freely permuted (in a consistent manner across all time steps) without changing the model output.
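The encoder step described above can be sketched as follows. All shapes and weight matrices are hypothetical; for simplicity the positional encoding is added rather than concatenated, and the per-vector MLP is reduced to a single ReLU layer:

```python
import numpy as np

def encode_frame(feature_map, W_pos, W_mlp):
    """Flatten a CNN feature grid into a set of N = H*W vectors.

    feature_map: (H, W, C) CNN output for one frame (hypothetical).
    W_pos: (4, C) projection of normalized (x, y, 1-x, 1-y) coordinates.
    W_mlp: (C, C) stand-in for the per-vector MLP.
    """
    H, W, C = feature_map.shape
    ys, xs = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W),
                         indexing="ij")
    coords = np.stack([xs, ys, 1 - xs, 1 - ys], axis=-1)  # (H, W, 4)
    h = feature_map + coords @ W_pos   # add projected positional encoding
    h = h.reshape(H * W, C)            # flatten the grid into a set
    return np.maximum(h @ W_mlp, 0)    # toy MLP: one linear layer + ReLU

rng = np.random.default_rng(0)
fm = rng.normal(size=(3, 5, 8))        # H=3, W=5, C=8
W_pos = rng.normal(size=(4, 8))
W_mlp = rng.normal(size=(8, 8))
h = encode_frame(fm, W_pos, W_mlp)
assert h.shape == (15, 8)              # N = H*W vectors of dimension C
```

After flattening, the spatial arrangement of the grid is carried only by the positional encoding, which is what allows the subsequent set-structured slot operations to remain permutation symmetric.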
| The paper proposes an object-centric method for video data, inspired by the Slot Attention architecture.
To enhance learning (and to allow users to specify the kind of entities they are interested in), the slots are initialized from the first frame either a) randomly; b) with a learned initialization; c) from the centers of the bounding boxes; d) from bounding boxes; or e) from segmentation masks. The method uses a slot-attention mechanism to correct the slots at each frame, with queries consisting of the current slot predictions and keys and values consisting of features extracted from the frame, plus a GRU on each slot for temporal aggregation. A self-attention layer then propagates information between slots. Finally, the model is optimized to predict either the optical flow or the RGB content. The paper presents several ablations: different types of conditioning, different levels of supervision (with or without optical flow, or using estimated unsupervised optical flow instead of the ground truth), and varying amounts of noise injected into the initial slot positions (in the conditioned case). The method is tested on CATER, MOVi and MOVi++ to quantify the quality of video decomposition, and the authors also experiment with different types of generalization: varying numbers of frames, new objects, new backgrounds, etc. | SP:7b54b54cdc027a76311af86d0a9ea3e8524eb884 |
Conditional Object-Centric Learning from Video | 1 INTRODUCTION . Humans understand the world in terms of separate objects ( Kahneman et al. , 1992 ; Spelke & Kinzler , 2007 ) , which serve as compositional building blocks that can be processed independently and recombined . Such a compositional model of the world forms the foundation for high-level cognitive abilities such as language , causal reasoning , mathematics , planning , etc . and is crucial for generalizing in predictable and systematic ways . Object-centric representations have the potential to greatly improve sample efficiency , robustness , generalization to new tasks , and interpretability of machine learning algorithms ( Greff et al. , 2020 ) . In this work , we focus on the aspect of modeling motion of objects from video , because of its synergistic relationship with object-centric representations : On the one hand , objects support learning an efficient dynamics model by factorizing the scene into approximately independent parts with only sparse interactions . Conversely , motion provides a powerful cue for which inputs should be grouped together , and is thus an important tool for learning about objects . Unsupervised multi-object representation learning has recently made significant progress both on images ( e.g . Burgess et al. , 2019 ; Greff et al. , 2019 ; Lin et al. , 2020 ; Locatello et al. , 2020 ) and on video ( e.g . Veerapaneni et al. , 2020 ; Weis et al. , 2020 ; Jiang et al. , 2020 ) . By incorporating objectcentric inductive biases , these methods learn to segment and represent objects from the statistical structure of the data alone without the need for supervision . Despite promising results these methods are currently limited by two important problems : Firstly , they are restricted to toy data like moving 2D sprites or very simple 3D scenes and generally fail at more realistic data with complex textures ( Greff et al. , 2019 ; Harley et al. , 2021 ; Karazija et al. , 2021 ) . 
And secondly , it is not entirely clear how to interface with these models both during training and inference . The notion of an object is ambiguous and task-dependent , and the segmentation learned by these models does not necessarily align with the tasks of interest . The model might , for example , over-segment a desired object into distinct parts , or alternatively fail to segment into the desired parts . Ideally , we would like to be able to provide the model with hints as to the desired level of granularity during training , and flexibly query the model during inference to request that the model detects and tracks a particular object or a part thereof . In this paper we introduce a sequential extension of Slot Attention ( Locatello et al. , 2020 ) that we call Slot Attention for Video ( SAVi ) to tackle the problem of unsupervised / weakly-supervised multiobject segmentation and tracking in video data . We demonstrate that 1 ) using optical flow prediction as a self-supervised objective and 2 ) providing a small set of abstract hints such as the center of mass position for objects as conditional inputs in the first frame suffices to direct the decomposition process in complex video scenes without otherwise requiring any priors on the size of objects or on the information content of their representations . We show successful segmentation and tracking for synthetic video data with significantly higher realism and visual complexity than the datasets used in prior work on unsupervised object representation learning . These results are robust with respect to noise on both the optical flow signal and the conditioning , and that they generalize almost perfectly to longer sequences , novel objects , and novel backgrounds . 2 SLOT ATTENTION FOR VIDEO ( SAVI ) . We introduce a sequential extension of the Slot Attention ( Locatello et al. , 2020 ) architecture to video data , which we call SAVi . 
Inspired by predictor-corrector methods for integration of ordinary differential equations , SAVi performs two steps for each observed video frame : a prediction and a correction step . The correction step uses Slot Attention to update ( or correct ) the set of slot representations based on slot-normalized cross-attention with the inputs . The prediction step uses self-attention among the slots to allow for modeling of temporal dynamics and object interactions . The output of the predictor is then used to initialize the corrector at the next time step , thus allowing the model to consistently track objects over time . Importantly , both of these steps are permutation equivariant and thus are able to preserve slot symmetry . See Figure 1 for a schematic overview of the SAVi architecture . Encoder For each time-step t ∈ { 1 . . . T } the corresponding video frame xt is first passed through a small convolutional neural network ( CNN ) encoder ( here , a stack of five convolutional layers with ReLUs ) , where we concatenate a linear positional encoding at the second-to-last layer . The resulting grid of visual features is flattened into a set of vectors ht = fenc ( xt ) ∈ RN×Denc , where N is the size of the flattened grid ( i.e. , width * height ) and Denc is the dimensionality of the CNN feature maps . Afterwards , each vector is independently passed through a multi-layer perceptron ( MLP ) . Slot Initialization SAVi maintains K slots , each of which can represent a part of the input such as objects or parts thereof . We denote the set of slot representations at time t as St = [ s1t , . . . , sKt ] ∈ RK×D , where we use the calligraphic font to indicate that any operation on these sets is equivariant ( or invariant ) w.r.t . permutation of their elements . In other words , the ordering of the slots carries no information and they can be freely permuted ( in a consistent manner across all time steps ) without changing the model output . 
We consider two types of initializers : conditional and unconditional . In the conditional case , we encode the conditional input either via a simple MLP ( in the case of bounding boxes or center of mass coordinates ) or via a CNN ( in the case of segmentation masks ) . For slots for which there is no conditioning information available ( e.g . if K is larger than the number of objects ) , we set the conditional input to a fixed value ( e.g. , ‘ −1 ’ for bounding box coordinates ) . The encoded conditional input forms the initial slot representation for SAVi . In the unconditional case , we either randomly initialize slots by sampling from a Gaussian distribution independently for each video ( both at training and at test time ) , or by learning a set of initial slot vectors . Corrector The task of the corrector is to update the slot representations based on the visual features from the encoder . In SAVi this is done using the iterative attention mechanism introduced in Slot Attention ( Locatello et al. , 2020 ) . Different from a regular cross-attention mechanism ( e.g . Vaswani et al. , 2017 ) which is normalized over the inputs , Slot Attention encourages decomposition of the input into multiple slots via softmax-normalization over the output ( i.e . the slots ) , which makes it an appealing choice for our video decomposition architecture . When using a single iteration of Slot Attention , the corrector update takes the following form : Ut = 1 Zt N∑ n=1 At , n v ( ht , n ) ∈ RK×D , At = softmax K ( 1√ D k ( ht ) · q ( St ) T ) ∈ RN×K , ( 1 ) where Zt = ∑N n=1At , n and denotes the Hadamard product . k , q , v are learned linear projections that map to a common dimension D. We apply LayerNorm ( Ba et al. , 2016 ) before each projection . The slot representations are then individually updated using Gated Recurrent Units ( Cho et al. , 2014 ) as ŝkt = GRU ( u k t , s k t ) . 
Alternatively , the attention step ( followed by the GRU update ) can be iterated multiple times with shared parameters per frame of the input video . For added expressiveness , we apply an MLP with residual connection ŝkt ← ŝkt +MLP ( LN ( ŝkt ) ) and LayerNorm ( LN ) after the GRU when using multiple Slot Attention iterations , following Locatello et al . ( 2020 ) . Predictor The predictor takes the role of a transition function to model temporal dynamics , including interactions between slots . To preserve permutation equivariance , we use a Transformer encoder ( Vaswani et al. , 2017 ) . It allows for modeling of independent object dynamics as well as information exchange between slots via self-attention , while being more memory efficient than GNN-based models such as the Interaction Network ( Battaglia et al. , 2016 ) . Slots are updated as follows : St+1 = LN ( MLP ( S̃t ) + S̃t ) , S̃t = LN ( MultiHeadSelfAttn ( Ŝt ) + Ŝt ) . ( 2 ) For MultiHeadSelfAttn we use the default multi-head dot-product attention mechanism from Vaswani et al . ( 2017 ) . We apply LayerNorm ( LN ) after each residual connection . Decoder The network output should be permutation equivariant ( for per-slot outputs ) or invariant ( for global outputs ) with respect to the slots . Slots can be read out either after application of the corrector or after the predictor ( transition model ) . We decode slot representations after application of the corrector using a slot-wise Spatial Broadcast Decoder ( Watters et al. , 2019 ) to produce per-slot RGB predictions of the optical flow ( or reconstructed frame ) and an alpha mask . The alpha mask is normalized across slots via a softmax and used to perform a weighted sum over the slot-wise RGB reconstruction to arrive at a combined reconstructed frame : yt = K∑ k=1 mkt ykt , mt = softmax K ( m̂kt ) , m̂kt , y k t = fdec ( ŝkt ) . 
Training Our sole prediction target is optical flow for each individual video frame, which we represent as RGB images using the default conversion in the literature (Sun et al., 2018). Alternatively, our framework also supports prediction of other image-shaped targets, such as reconstruction of the original input frame. We minimize the pixel-wise squared reconstruction error (averaged over the batch), summed over both the temporal and spatial dimensions:

$$\mathcal{L}_{\mathrm{rec}} = \sum_{t=1}^{T} \left\| y_t - y^{\mathrm{true}}_t \right\|^2. \quad (4)$$

3 RELATED WORK. Object-centric representation learning There is a rich literature on learning object representations from static scenes (Greff et al., 2016; Eslami et al., 2016; Greff et al., 2017; 2019; Burgess et al., 2019; Engelcke et al., 2020; Crawford & Pineau, 2019; Lin et al., 2020; Locatello et al., 2020) or videos (van Steenkiste et al., 2018; Kosiorek et al., 2018; Stelzner et al., 2019; Kipf et al., 2020; Crawford & Pineau, 2020; Creswell et al., 2021) without explicit supervision. PSGNet (Bear et al., 2020) learns to decompose static images or individual frames from a video into hierarchical scene graphs using motion information estimated from neighboring video frames. Most closely related to our work are sequential object-centric models for videos and dynamic environments, such as OP3 (Veerapaneni et al., 2020), R-SQAIR (Stanić & Schmidhuber, 2019), ViMON (Weis et al., 2020), and SCALOR (Jiang et al., 2020), which learn an internal motion model for each object. SIMONe (Kabra et al., 2021) auto-encodes an entire video in parallel and learns temporally-abstracted representations of objects. OP3 (Veerapaneni et al., 2020) uses the same decoder as SAVi and a related dynamics model, but a less efficient inference process compared to Slot Attention.
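Putting the decoder compositing (Eq. 3) and the per-frame reconstruction loss (Eq. 4) together, a minimal NumPy sketch for a single frame might look as follows. The function name and all shapes are illustrative assumptions, and random arrays stand in for the actual decoder outputs.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def composite_and_loss(rgb, alpha_logits, target):
    """Combine per-slot decoder outputs into one frame (Eq. 3) and
    compute one frame's term of the reconstruction loss (Eq. 4).

    rgb:          (K, H, W, 3) per-slot RGB flow predictions
    alpha_logits: (K, H, W, 1) unnormalized per-slot alpha masks
    target:       (H, W, 3)    ground-truth flow image"""
    m = softmax(alpha_logits, axis=0)     # normalize masks across the K slots
    y = (m * rgb).sum(axis=0)             # alpha-weighted sum -> (H, W, 3)
    return y, np.sum((y - target) ** 2)   # pixel-wise squared error, summed

rng = np.random.default_rng(0)
K, H, W = 4, 8, 8
y, loss = composite_and_loss(rng.normal(size=(K, H, W, 3)),
                             rng.normal(size=(K, H, W, 1)),
                             rng.normal(size=(H, W, 3)))
assert y.shape == (H, W, 3) and loss >= 0.0
```

Summing this per-frame term over all T frames of a video gives the full training objective of Eq. (4).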
In an attempt to bridge the gap to visually richer and more realistic environments , recent works in object-centric representation learning have explored integration of inductive biases related to 3D scene geometry , both for static scenes ( Chen et al. , 2020 ; Stelzner et al. , 2021 ) and for videos ( Du et al. , 2021 ; Henderson & Lampert , 2020 ; Harley et al. , 2021 ) . This is largely orthogonal to our approach of utilizing conditioning and optical flow . A recent related method , FlowCaps ( Sabour et al. , 2020 ) , similarly proposed to use optical flow in a multi-object model . FlowCaps uses capsules ( Sabour et al. , 2017 ; Hinton et al. , 2018 ) instead of a slot-based representation and assumes specialization of individual capsules to objects or parts of a certain appearance , making it unsuitable for environments that contain a large variety of object types . Using a slot-based , exchangeable representation of objects allows SAVi to represent a diverse range of objects and generalize to novel objects at test time . We discuss further recent related works on attention-based modular architectures ( Santoro et al. , 2018 ; Goyal et al. , 2021a ; b ; c ) , object-centric models for dynamic visual reasoning ( Yi et al. , 2020 ; Ding et al. , 2021a ; Bar et al. , 2021 ) , and supervised attention-based object-centric models ( Fuchs et al. , 2019 ; Carion et al. , 2020 ; Kamath et al. , 2021 ; Meinhardt et al. , 2021 ) in Appendix A.1 . Video object segmentation and tracking The conditional tasks we consider in our work are closely related to the computer vision task of semi-supervised video object segmentation ( VOS ) , where segmentation masks are provided for the first video frame during evaluation . Different from the typical setting , which is addressed by supervised learning on fully annotated videos or related datasets ( e.g . Caelles et al. , 2017 ; Luiten et al. 
, 2018 ) , we consider the problem where models do not have access to any supervised information beyond the conditioning information on the first frame ( e.g . a bounding box for each object ) . Several recent works have explored pre-training using self-supervision ( Li et al. , 2019 ; Jabri et al. , 2020 ; Caron et al. , 2021 ) or image classification ( Zhang et al. , 2020 ) for the semi-supervised VOS task . These models rely on having access to segmentation masks in the first frame at evaluation time . We demonstrate that multi-object segmentation and tracking can emerge even when segmentation labels are absent at both training and test time . There is a rich literature on using motion cues to segment objects in the computer vision community . These methods use motion information at test time to , for example , cluster trajectories in order to segment independently moving objects ( Faktor & Irani , 2014 ) or estimate multiple fundamental matrices between two views ( Isack & Boykov , 2012 ) , to name a few . Closest to our work is a contemporary method ( Yang et al. , 2021 ) that trains a Slot Attention model on isolated optical flow data for foreground-background segmentation of a single object , independently for individual video frames and without using visual observations . Our method , on the other hand , supports multi-object environments and only relies on motion information as a training signal but otherwise operates directly on textured visual information , which allows it to segment static scenes at test time where optical flow is unavailable and consistently represent and track multiple objects throughout a video . | This paper considers the problem of learning object-centric representations from videos. Different from existing methods that mostly model this problem under a pure unsupervised setting by reconstructing video frames, this work introduces two improvements. 
The first is offering weak "hints" (which can be seen as additional inputs) during training, which helps avoid the trivial solutions that previous unsupervised approaches often fall into. The "hints" can take the form of pixel-wise masks, bounding boxes, or even something as simple as centers of mass. The second improvement concerns the supervision signal. Instead of reconstructing raw pixels, they propose to predict optical flow. In simple scenes the optical flow is similar to a pixel-wise mask: values inside an object tend to be consistent. This eases learning and makes it possible to train on more realistic datasets. Extensive experiments are conducted to validate the effectiveness of these two improvements. | SP:7b54b54cdc027a76311af86d0a9ea3e8524eb884
Guided-TTS: Text-to-Speech with Untranscribed Speech | 1 INTRODUCTION. Neural text-to-speech (TTS) models can now generate high-quality, human-like speech given text (van den Oord et al. (2016); Shen et al. (2018)). In general, these TTS models are conditional generative models that encode text into a hidden representation and generate speech from the encoded representation. Early TTS models are autoregressive generative models which generate high-quality speech but suffer from slow synthesis speed due to the sequential sampling procedure (Shen et al. (2018); Li et al. (2019)). Owing to the development of non-autoregressive generative models, recent TTS models are capable of generating high-quality speech with faster inference speed (Ren et al. (2019); Ren et al. (2021); Kim et al. (2020); Popov et al. (2021)). Recently, high-quality end-to-end TTS models have been proposed that generate the raw waveform from text at once (Kim et al. (2021); Weiss et al. (2021); Chen et al. (2021b)). Despite the high quality and fast inference speed of speech synthesis, most TTS models can be trained only if transcribed data of the desired speaker is given. While long-form untranscribed data, such as audiobooks or podcasts, is available on various websites, it is challenging to use such unpaired speech data to train existing TTS models. To utilize such untranscribed data, long-form untranscribed speech has to be segmented into sentence-level utterances, and each segmented utterance must then be transcribed accurately. Since existing TTS models must directly model the conditional distribution of speech given text, the direct usage of untranscribed data remains challenging. In this work, we propose Guided-TTS, an unconditional diffusion-based generative model trained on untranscribed data that leverages a phoneme classifier for text-to-speech synthesis.
Trained on untranscribed speech data of the desired speaker, our unconditional diffusion probabilistic model learns to generate mel-spectrograms of the speaker without any context. As the training data does not have to be aligned with a text sequence for unconditional speech modeling, we simply use random chunks of untranscribed speech to train our unconditional generative model. This allows us to build training data without extra effort when modeling the speech of speakers for which only long-form untranscribed speech data is available. To guide the unconditional DDPM for TTS, we train a frame-wise phoneme classifier on transcribed data and use the gradients of the classifier during sampling. Although our unconditional generative model is trained without any transcript, Guided-TTS effectively generates mel-spectrograms given the transcript by guiding the generative process of the unconditional DDPM with the phoneme classifier. We demonstrate that the proposed method, TTS by guiding the unconditional DDPM, matches the performance of existing conditional TTS models on LJSpeech. We further show that, by training the phoneme classifier on a multi-speaker paired dataset, Guided-TTS also shows comparable performance without seeing any transcript of LJSpeech, which demonstrates the possibility of building a high-quality text-to-speech model without a transcript for the desired speaker. We encourage readers to listen to samples of Guided-TTS trained on various untranscribed datasets on our demo page.¹ 2 BACKGROUND. 2.1 DENOISING DIFFUSION PROBABILISTIC MODELS (DDPM) AND ITS VARIANT. DDPM (Ho et al. (2020)), recently proposed as a kind of probabilistic generative model, has been applied to various domains such as images (Dhariwal & Nichol (2021)) and audio (Chen et al. (2021a); Popov et al. (2021)). DDPM first defines a forward process that gradually corrupts data X0 into random noise XT across T timesteps.
The model learns the reverse process that follows the reverse trajectory of the predefined forward process to generate data from random noise. Recently, there have been approaches that formulate the trajectory between data and noise as a continuous stochastic differential equation (SDE) instead of a discrete-time Markov process (Song et al. (2021b)). Grad-TTS (Popov et al. (2021)) introduces the SDE formulation to TTS, which we follow and use. According to the formulation of Grad-TTS, the forward process that corrupts data $X_0$ into standard Gaussian noise $X_T$ is as follows:

$$dX_t = -\frac{1}{2} X_t \beta_t\, dt + \sqrt{\beta_t}\, dW_t, \quad (1)$$

where $\beta_t$ is a predefined noise schedule, $\beta_t = \beta_0 + (\beta_T - \beta_0)t$, and $W_t$ is a Wiener process. Anderson (1982) showed that the reverse process, which represents the trajectory from noise $X_T$ to $X_0$, can also be formulated as an SDE:

$$dX_t = \left(-\frac{1}{2} X_t - \nabla_{X_t} \log p_t(X_t)\right) \beta_t\, dt + \sqrt{\beta_t}\, d\widetilde{W}_t, \quad (2)$$

where $\widetilde{W}_t$ is a reverse-time Wiener process. Given the score, the gradient of the log density with respect to the data (i.e., $\nabla_{X_t} \log p_t(X_t)$), for $t \in [0, T]$, we can sample data $X_0$ from random noise $X_T$ by solving Eq. (2). To generate data, the DDPM learns to estimate the score with a neural network $s_\theta$ parameterized by $\theta$. To estimate the score, $X_t$ is sampled from the distribution derived from Eq. (1) given data $X_0$:

$$X_t \mid X_0 \sim \mathcal{N}(\rho(X_0, t), \lambda(t)), \quad (3)$$

where $\rho(X_0, t) = e^{-\frac{1}{2}\int_0^t \beta_s\, ds} X_0$ and $\lambda(t) = I - e^{-\int_0^t \beta_s\, ds}$. Then, the score can be derived from Eq. (3): $\nabla_{X_t} \log p_t(X_t \mid X_0) = -\lambda(t)^{-1} \epsilon_t$, given $X_0$ (Popov et al. (2021)), where $\epsilon_t \sim \mathcal{N}(0, \lambda(t))$ is the noise used to sample $X_t = \rho(X_0, t) + \epsilon_t$. To train the model $s_\theta(X_t, t)$ for all $t \in [0, T]$, the following loss is used:

$$L(\theta) = \mathbb{E}_t \mathbb{E}_{X_0} \mathbb{E}_{\epsilon_t} \left[ \left\| s_\theta(X_t, t) + \lambda(t)^{-1} \epsilon_t \right\|_2^2 \right], \quad (4)$$

which is an L2 loss as in previous works (Ho et al. (2020), Song et al. (2021b)). Using the model $s_\theta(X_t, t)$, we can generate a sample $X_0$ from noise by solving Eq. (2).
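The forward sampling of Eq. (3) and a one-sample estimate of the training loss in Eq. (4) can be sketched in NumPy under the linear schedule of Eq. (1). The schedule endpoints, the function names, and the zero "score network" below are illustrative assumptions, not values from the paper.

```python
import numpy as np

beta0, beta1 = 0.05, 20.0    # illustrative endpoints of the linear noise schedule

def beta_int(t):
    # \int_0^t beta_s ds for beta_t = beta0 + (beta1 - beta0) * t
    return beta0 * t + 0.5 * (beta1 - beta0) * t ** 2

def sample_xt(x0, t, rng):
    """Sample X_t | X_0 ~ N(rho(X_0, t), lambda(t)) as in Eq. (3),
    and return the score target -lambda(t)^{-1} * eps_t."""
    rho = np.exp(-0.5 * beta_int(t)) * x0        # mean: data shrunk toward zero
    lam = 1.0 - np.exp(-beta_int(t))             # variance: grows toward identity
    eps = np.sqrt(lam) * rng.normal(size=x0.shape)   # eps_t ~ N(0, lam * I)
    return rho + eps, -eps / lam

def dsm_loss(score_net, x0, rng):
    """One-sample Monte-Carlo estimate of the L2 loss in Eq. (4)."""
    t = rng.uniform(1e-3, 1.0)                   # avoid lam -> 0 at t = 0
    xt, target = sample_xt(x0, t, rng)
    return np.sum((score_net(xt, t) - target) ** 2)

rng = np.random.default_rng(0)
x0 = rng.normal(size=(80, 64))                   # a toy "mel-spectrogram"
loss = dsm_loss(lambda x, t: np.zeros_like(x), x0, rng)  # untrained stand-in network
assert np.isfinite(loss) and loss > 0.0
```

In the paper the stand-in lambda would be the U-Net score network operating on mel-spectrograms; everything else is the same denoising score-matching recipe.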
Grad-TTS generates data $X_0$ from $X_T$ by setting $T = 1$ and using a fixed discretization strategy (Song et al. (2021b)):

$$X_{t-\frac{1}{N}} = X_t + \frac{\beta_t}{N} \left( \frac{1}{2} X_t + \nabla_{X_t} \log p_t(X_t) \right) + \sqrt{\frac{\beta_t}{N}}\, z_t, \quad (5)$$

where $N$ is the number of steps used to solve the SDE, $t \in \{\frac{1}{N}, \frac{2}{N}, \ldots, 1\}$, and $z_t$ is standard Gaussian noise. ¹Demo: https://bit.ly/3oWhVJg 2.2 CLASSIFIER GUIDANCE. A DDPM can be guided to generate samples with a desired condition, without fine-tuning, by introducing a classifier. Song et al. (2021b) used an unconditional DDPM to generate class-conditional images using a separately trained image classifier. Not only unconditional but also conditional DDPMs can be guided using a classifier, which contributes to achieving state-of-the-art performance for class-conditional image generation (Dhariwal & Nichol (2021)). For conditional generation, the classifier $p_t(y \mid X_t)$ is trained to classify the noisy data $X_t$ as the condition $y$. Modifying Eq. (5) yields the discretized SDE for conditional generation:

$$X_{t-\frac{1}{N}} = X_t + \frac{\beta_t}{N} \left( \frac{1}{2} X_t + \nabla_{X_t} \log p_t(X_t \mid y) \right) + \sqrt{\frac{\beta_t}{N}}\, z_t, \quad (6)$$

$$\nabla_{X_t} \log p_t(X_t \mid y) = \nabla_{X_t} \log p_t(X_t) + \nabla_{X_t} \log p_t(y \mid X_t). \quad (7)$$

If the unconditional score and the classifier for the condition are given, a sample $X_0$ with condition $y$ can be generated using Eq. (6). 3 GUIDED-TTS. In this section, we present Guided-TTS, which aims to build a high-quality text-to-speech model without a transcript of the target speaker. While other TTS models directly learn to generate speech from text, Guided-TTS models the unconditional distribution of speech to utilize speech-only data and guides the unconditional model to generate speech given a text. To the best of our knowledge, Guided-TTS is the first TTS model that generates speech with an unconditional generative model. Both unconditional speech modeling and controllable generation with an unconditional generative model are well known to be challenging.
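Classifier-guided sampling, Eqs. (5)-(7), amounts to adding the classifier gradient to the unconditional score inside each discretized reverse step. A minimal NumPy sketch follows; the toy score and the disabled classifier stand-in, as well as the schedule constants, are assumptions made for illustration only.

```python
import numpy as np

def guided_reverse_step(xt, t, N, beta_t, uncond_score, classifier_grad, rng):
    """One discretized reverse-diffusion step with classifier guidance.

    Eq. (7): conditional score = unconditional score + classifier gradient;
    Eq. (6): plug that score into the fixed-discretization update."""
    score = uncond_score(xt, t) + classifier_grad(xt, t)
    z = rng.normal(size=xt.shape)                    # fresh standard Gaussian noise
    return xt + (beta_t / N) * (0.5 * xt + score) + np.sqrt(beta_t / N) * z

rng = np.random.default_rng(0)
x = rng.normal(size=(80, 64))          # start from Gaussian noise X_T with T = 1
N, b0, b1 = 50, 0.05, 20.0             # steps and illustrative schedule endpoints
for i in range(N, 0, -1):              # t = 1, (N-1)/N, ..., 1/N
    t = i / N
    x = guided_reverse_step(
        x, t, N, beta_t=b0 + (b1 - b0) * t,
        uncond_score=lambda x_, t_: -x_,          # toy score of a unit Gaussian
        classifier_grad=lambda x_, t_: 0.0 * x_,  # guidance disabled in this demo
        rng=rng)
assert x.shape == (80, 64) and np.isfinite(x).all()
```

In Guided-TTS, `uncond_score` would be the DDPM trained on untranscribed speech and `classifier_grad` the gradient of the frame-wise phoneme classifier for the desired phoneme sequence; the sampling loop itself is unchanged.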
To tackle these challenges , we adopt a diffusion-based generative model for unconditional speech generation , which has the advantages of modeling complex distributions and easy controllability . Additionally , we introduce a phoneme classifier to guide the unconditional DDPM for TTS , which requires transcribed data for training . In Guided-TTS , the generative model and the phoneme classifier are trained separately so that we can utilize different datasets to train each module . For instance , assume only the untranscribed speech data of the target speaker for TTS is available . With Guided-TTS , the phoneme classifier can still be trained on other transcribed datasets that contain rich , large-scale multi-speaker data . The phoneme classifier trained in such a manner effectively guides the generative model , trained only using the untranscribed speech data of the target speaker . By doing so , with the two modules independent of each other , we can achieve the text-to-speech model without any transcript of the target speaker . Guided-TTS consists of 4 modules : the unconditional DDPM , the phoneme classifier , the duration predictor , and the speaker encoder , as shown in Fig . 1 . Unconditional DDPM is a module that learns to generate mel-spectrogram unconditionally , and the remaining three modules are for TTS synthesis by guidance . We will explain the unconditional DDPM in Section 3.1 , followed by the method of guiding the unconditional model for TTS in Section 3.2 . 3.1 UNCONDITIONAL DDPM . Our unconditional DDPM models the unconditional distribution of speech PX without any transcript . We assume that the training data for the diffusion-based model has tens of hours of untranscribed speech data from the target speaker S for TTS . As our generative model learns only with speech data , training samples do not need to be aligned with the text . 
Thus, we use random chunks of untranscribed speech as training data, reducing the burden of not only speech transcription but also segmentation when only long-form speech data is available for the target speaker. Given a mel-spectrogram $X = X_0$, we define a forward process as in Eq. (1), which gradually corrupts data into noise, and approximate the reverse process in Eq. (2) by estimating the unconditional score $\nabla_{X_t} \log p(X_t)$ for each timestep $t$. At each iteration, $X_t$, $t \in [0, 1]$, is sampled from the mel-spectrogram $X_0$ as in Eq. (3), and the score is estimated with the neural network $s_\theta(X_t, t)$ parameterized by $\theta$. The training objective of our unconditional model is given in Eq. (4). Similar to Grad-TTS (Popov et al. (2021)), we regard the mel-spectrogram as a 2D image with a single channel and use the U-Net architecture (Ronneberger et al. (2015)) as $s_\theta$. We use the same architecture size used to model 32×32 images in Ho et al. (2020) to capture long-term dependencies without any text information, while Grad-TTS uses a smaller architecture for conditional distribution modeling. | The authors propose a denoising diffusion probabilistic model (DDPM) that learns to produce natural spectrograms from noise without a condition. This enables them to train a generative model on unlabelled speech data. They show the effectiveness of their approach by inpainting masked-out parts of a spectrogram and by showing off audio samples of unconditioned spectrogram babble that has been vocoded to a waveform. They further propose a phoneme classification module that serves as a conditioning signal for the DDPM during sampling in order to generate spectrograms that match a given phoneme sequence, turning the unconditional DDPM into a text-to-speech model, which the authors call Guided-TTS. This allows all components of the model to be trained individually on different datasets, alleviating the need for large labelled datasets for TTS.
| SP:9d7f87de4b36b5207ee016169a157694e05094e9 |
Guided-TTS: Text-to-Speech with Untranscribed Speech | 1 INTRODUCTION. Neural text-to-speech (TTS) models can now generate high-quality, human-like speech given text (van den Oord et al. (2016); Shen et al. (2018)). In general, these TTS models are conditional generative models that encode text into a hidden representation and generate speech from the encoded representation. Early TTS models are autoregressive generative models which generate high-quality speech but suffer from slow synthesis speed due to the sequential sampling procedure (Shen et al. (2018); Li et al. (2019)). Owing to the development of non-autoregressive generative models, recent TTS models are capable of generating high-quality speech with faster inference speed (Ren et al. (2019); Ren et al. (2021); Kim et al. (2020); Popov et al. (2021)). Recently, high-quality end-to-end TTS models have been proposed that generate the raw waveform from text at once (Kim et al. (2021); Weiss et al. (2021); Chen et al. (2021b)). Despite the high quality and fast inference speed of speech synthesis, most TTS models can be trained only if transcribed data of the desired speaker is given. While long-form untranscribed data, such as audiobooks or podcasts, is available on various websites, it is challenging to use such unpaired speech data to train existing TTS models. To utilize such untranscribed data, long-form untranscribed speech has to be segmented into sentence-level utterances, and each segmented utterance must then be transcribed accurately. Since existing TTS models must directly model the conditional distribution of speech given text, the direct usage of untranscribed data remains challenging. In this work, we propose Guided-TTS, an unconditional diffusion-based generative model trained on untranscribed data that leverages a phoneme classifier for text-to-speech synthesis.
Trained on untranscribed speech data of the desired speaker, our unconditional diffusion probabilistic model learns to generate mel-spectrograms of the speaker without any context. As the training data does not have to be aligned with a text sequence for unconditional speech modeling, we simply use random chunks of untranscribed speech to train our unconditional generative model. This allows us to build training data without extra effort when modeling the speech of speakers for which only long-form untranscribed speech data is available. To guide the unconditional DDPM for TTS, we train a frame-wise phoneme classifier on transcribed data and use the gradients of the classifier during sampling. Although our unconditional generative model is trained without any transcript, Guided-TTS effectively generates mel-spectrograms given the transcript by guiding the generative process of the unconditional DDPM with the phoneme classifier. We demonstrate that the proposed method, TTS by guiding the unconditional DDPM, matches the performance of existing conditional TTS models on LJSpeech. We further show that, by training the phoneme classifier on a multi-speaker paired dataset, Guided-TTS also shows comparable performance without seeing any transcript of LJSpeech, which demonstrates the possibility of building a high-quality text-to-speech model without a transcript for the desired speaker. We encourage readers to listen to samples of Guided-TTS trained on various untranscribed datasets on our demo page.¹ 2 BACKGROUND. 2.1 DENOISING DIFFUSION PROBABILISTIC MODELS (DDPM) AND ITS VARIANT. DDPM (Ho et al. (2020)), recently proposed as a kind of probabilistic generative model, has been applied to various domains such as images (Dhariwal & Nichol (2021)) and audio (Chen et al. (2021a); Popov et al. (2021)). DDPM first defines a forward process that gradually corrupts data X0 into random noise XT across T timesteps.
The model learns the reverse process that follows the reverse trajectory of the predefined forward process to generate data from random noise. Recently, there have been approaches that formulate the trajectory between data and noise as a continuous stochastic differential equation (SDE) instead of a discrete-time Markov process (Song et al. (2021b)). Grad-TTS (Popov et al. (2021)) introduces the SDE formulation to TTS, which we follow and use. According to the formulation of Grad-TTS, the forward process that corrupts data $X_0$ into standard Gaussian noise $X_T$ is as follows:

$$dX_t = -\frac{1}{2} X_t \beta_t\, dt + \sqrt{\beta_t}\, dW_t, \quad (1)$$

where $\beta_t$ is a predefined noise schedule, $\beta_t = \beta_0 + (\beta_T - \beta_0)t$, and $W_t$ is a Wiener process. Anderson (1982) showed that the reverse process, which represents the trajectory from noise $X_T$ to $X_0$, can also be formulated as an SDE:

$$dX_t = \left(-\frac{1}{2} X_t - \nabla_{X_t} \log p_t(X_t)\right) \beta_t\, dt + \sqrt{\beta_t}\, d\widetilde{W}_t, \quad (2)$$

where $\widetilde{W}_t$ is a reverse-time Wiener process. Given the score, the gradient of the log density with respect to the data (i.e., $\nabla_{X_t} \log p_t(X_t)$), for $t \in [0, T]$, we can sample data $X_0$ from random noise $X_T$ by solving Eq. (2). To generate data, the DDPM learns to estimate the score with a neural network $s_\theta$ parameterized by $\theta$. To estimate the score, $X_t$ is sampled from the distribution derived from Eq. (1) given data $X_0$:

$$X_t \mid X_0 \sim \mathcal{N}(\rho(X_0, t), \lambda(t)), \quad (3)$$

where $\rho(X_0, t) = e^{-\frac{1}{2}\int_0^t \beta_s\, ds} X_0$ and $\lambda(t) = I - e^{-\int_0^t \beta_s\, ds}$. Then, the score can be derived from Eq. (3): $\nabla_{X_t} \log p_t(X_t \mid X_0) = -\lambda(t)^{-1} \epsilon_t$, given $X_0$ (Popov et al. (2021)), where $\epsilon_t \sim \mathcal{N}(0, \lambda(t))$ is the noise used to sample $X_t = \rho(X_0, t) + \epsilon_t$. To train the model $s_\theta(X_t, t)$ for all $t \in [0, T]$, the following loss is used:

$$L(\theta) = \mathbb{E}_t \mathbb{E}_{X_0} \mathbb{E}_{\epsilon_t} \left[ \left\| s_\theta(X_t, t) + \lambda(t)^{-1} \epsilon_t \right\|_2^2 \right], \quad (4)$$

which is an L2 loss as in previous works (Ho et al. (2020), Song et al. (2021b)). Using the model $s_\theta(X_t, t)$, we can generate a sample $X_0$ from noise by solving Eq. (2).
Grad-TTS generates data $X_0$ from $X_T$ by setting $T = 1$ and using a fixed discretization strategy (Song et al. (2021b)):

$$X_{t-\frac{1}{N}} = X_t + \frac{\beta_t}{N} \left( \frac{1}{2} X_t + \nabla_{X_t} \log p_t(X_t) \right) + \sqrt{\frac{\beta_t}{N}}\, z_t, \quad (5)$$

where $N$ is the number of steps used to solve the SDE, $t \in \{\frac{1}{N}, \frac{2}{N}, \ldots, 1\}$, and $z_t$ is standard Gaussian noise. ¹Demo: https://bit.ly/3oWhVJg 2.2 CLASSIFIER GUIDANCE. A DDPM can be guided to generate samples with a desired condition, without fine-tuning, by introducing a classifier. Song et al. (2021b) used an unconditional DDPM to generate class-conditional images using a separately trained image classifier. Not only unconditional but also conditional DDPMs can be guided using a classifier, which contributes to achieving state-of-the-art performance for class-conditional image generation (Dhariwal & Nichol (2021)). For conditional generation, the classifier $p_t(y \mid X_t)$ is trained to classify the noisy data $X_t$ as the condition $y$. Modifying Eq. (5) yields the discretized SDE for conditional generation:

$$X_{t-\frac{1}{N}} = X_t + \frac{\beta_t}{N} \left( \frac{1}{2} X_t + \nabla_{X_t} \log p_t(X_t \mid y) \right) + \sqrt{\frac{\beta_t}{N}}\, z_t, \quad (6)$$

$$\nabla_{X_t} \log p_t(X_t \mid y) = \nabla_{X_t} \log p_t(X_t) + \nabla_{X_t} \log p_t(y \mid X_t). \quad (7)$$

If the unconditional score and the classifier for the condition are given, a sample $X_0$ with condition $y$ can be generated using Eq. (6). 3 GUIDED-TTS. In this section, we present Guided-TTS, which aims to build a high-quality text-to-speech model without a transcript of the target speaker. While other TTS models directly learn to generate speech from text, Guided-TTS models the unconditional distribution of speech to utilize speech-only data and guides the unconditional model to generate speech given a text. To the best of our knowledge, Guided-TTS is the first TTS model that generates speech with an unconditional generative model. Both unconditional speech modeling and controllable generation with an unconditional generative model are well known to be challenging.
To tackle these challenges , we adopt a diffusion-based generative model for unconditional speech generation , which has the advantages of modeling complex distributions and easy controllability . Additionally , we introduce a phoneme classifier to guide the unconditional DDPM for TTS , which requires transcribed data for training . In Guided-TTS , the generative model and the phoneme classifier are trained separately so that we can utilize different datasets to train each module . For instance , assume only the untranscribed speech data of the target speaker for TTS is available . With Guided-TTS , the phoneme classifier can still be trained on other transcribed datasets that contain rich , large-scale multi-speaker data . The phoneme classifier trained in such a manner effectively guides the generative model , trained only using the untranscribed speech data of the target speaker . By doing so , with the two modules independent of each other , we can achieve the text-to-speech model without any transcript of the target speaker . Guided-TTS consists of 4 modules : the unconditional DDPM , the phoneme classifier , the duration predictor , and the speaker encoder , as shown in Fig . 1 . Unconditional DDPM is a module that learns to generate mel-spectrogram unconditionally , and the remaining three modules are for TTS synthesis by guidance . We will explain the unconditional DDPM in Section 3.1 , followed by the method of guiding the unconditional model for TTS in Section 3.2 . 3.1 UNCONDITIONAL DDPM . Our unconditional DDPM models the unconditional distribution of speech PX without any transcript . We assume that the training data for the diffusion-based model has tens of hours of untranscribed speech data from the target speaker S for TTS . As our generative model learns only with speech data , training samples do not need to be aligned with the text . 
Thus, we use random chunks of untranscribed speech as training data, reducing the burden of not only speech transcription but also segmentation when only long-form speech data is available for the target speaker. Given a mel-spectrogram $X = X_0$, we define a forward process as in Eq. (1), which gradually corrupts data into noise, and approximate the reverse process in Eq. (2) by estimating the unconditional score $\nabla_{X_t} \log p(X_t)$ for each timestep $t$. At each iteration, $X_t$, $t \in [0, 1]$, is sampled from the mel-spectrogram $X_0$ as in Eq. (3), and the score is estimated with the neural network $s_\theta(X_t, t)$ parameterized by $\theta$. The training objective of our unconditional model is given in Eq. (4). Similar to Grad-TTS (Popov et al. (2021)), we regard the mel-spectrogram as a 2D image with a single channel and use the U-Net architecture (Ronneberger et al. (2015)) as $s_\theta$. We use the same architecture size used to model 32×32 images in Ho et al. (2020) to capture long-term dependencies without any text information, while Grad-TTS uses a smaller architecture for conditional distribution modeling. | This paper borrows the backbone from Grad-TTS [1], building an unconditional diffusion-based model on speech input. With a phone classifier and duration predictor trained on transcribed data, and a speaker encoder trained on a labeled speaker dataset, the model is able to synthesize speech from given text. Experimental results show it is comparable with Grad-TTS [1] trained on transcribed data (by conditioning on the text during training). Ablations show it can generalize to a diversified dataset. [1] Vadim Popov, Ivan Vovk, Vladimir Gogoryan, Tasnima Sadekova, and Mikhail Kudinov. Grad-TTS: A Diffusion Probabilistic Model for Text-to-Speech. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pp. 8599–8608. PMLR, 2021a.
| SP:9d7f87de4b36b5207ee016169a157694e05094e9 |
Guided-TTS: Text-to-Speech with Untranscribed Speech | 1 INTRODUCTION. Neural text-to-speech (TTS) models can now generate high-quality, human-like speech given text (van den Oord et al. (2016); Shen et al. (2018)). In general, these TTS models are conditional generative models that encode text into a hidden representation and generate speech from the encoded representation. Early TTS models are autoregressive generative models which generate high-quality speech but suffer from slow synthesis speed due to the sequential sampling procedure (Shen et al. (2018); Li et al. (2019)). Owing to the development of non-autoregressive generative models, recent TTS models are capable of generating high-quality speech with faster inference speed (Ren et al. (2019); Ren et al. (2021); Kim et al. (2020); Popov et al. (2021)). Recently, high-quality end-to-end TTS models have been proposed that generate the raw waveform from text at once (Kim et al. (2021); Weiss et al. (2021); Chen et al. (2021b)). Despite the high quality and fast inference speed of speech synthesis, most TTS models can be trained only if transcribed data of the desired speaker is given. While long-form untranscribed data, such as audiobooks or podcasts, is available on various websites, it is challenging to use such unpaired speech data to train existing TTS models. To utilize such untranscribed data, long-form untranscribed speech has to be segmented into sentence-level utterances, and each segmented utterance must then be transcribed accurately. Since existing TTS models must directly model the conditional distribution of speech given text, the direct usage of untranscribed data remains challenging. In this work, we propose Guided-TTS, an unconditional diffusion-based generative model trained on untranscribed data that leverages a phoneme classifier for text-to-speech synthesis.
Trained on untranscribed speech data for the desired speaker, our unconditional diffusion probabilistic model learns to generate mel-spectrograms of that speaker without any context. As the training data does not have to be aligned with a text sequence for unconditional speech modeling, we simply use random chunks of untranscribed speech to train our unconditional generative model. This allows us to build training data without extra effort when modeling the speech of speakers for whom only long-form untranscribed speech data is available. To guide the unconditional DDPM for TTS, we train a framewise phoneme classifier on transcribed data and use the gradients of the classifier during sampling. Although our unconditional generative model is trained without any transcript, Guided-TTS effectively generates mel-spectrograms given a transcript by guiding the generative process of the unconditional DDPM with the phoneme classifier. We demonstrate that the proposed method, TTS by guiding an unconditional DDPM, matches the performance of existing conditional TTS models on LJSpeech. We further show that by training the phoneme classifier on a multi-speaker paired dataset, Guided-TTS achieves comparable performance without seeing any transcript of LJSpeech, which demonstrates the possibility of building a high-quality text-to-speech model without a transcript for the desired speaker. We encourage readers to listen to samples of Guided-TTS trained on various untranscribed datasets on our demo page.¹ 2 BACKGROUND . 2.1 DENOISING DIFFUSION PROBABILISTIC MODELS (DDPM) AND ITS VARIANT . DDPM (Ho et al. (2020)), a recently proposed class of probabilistic generative models, has been applied to various domains such as images (Dhariwal & Nichol (2021)) and audio (Chen et al. (2021a); Popov et al. (2021)). DDPM first defines a forward process that gradually corrupts data $X_0$ into random noise $X_T$ across $T$ timesteps.
The model learns the reverse process that follows the reverse trajectory of the predefined forward process to generate data from random noise. Recently, there have been approaches that formulate the trajectory between data and noise as a continuous stochastic differential equation (SDE) instead of a discrete-time Markov process (Song et al. (2021b)). Grad-TTS (Popov et al. (2021)) introduced this SDE formulation to TTS, and we follow it here. According to the formulation of Grad-TTS, the forward process that corrupts data $X_0$ into standard Gaussian noise $X_T$ is

$$dX_t = -\tfrac{1}{2} X_t \beta_t \, dt + \sqrt{\beta_t} \, dW_t, \quad (1)$$

where $\beta_t = \beta_0 + (\beta_T - \beta_0)t$ is a predefined noise schedule and $W_t$ is a Wiener process. Anderson (1982) showed that the reverse process, which represents the trajectory from noise $X_T$ back to $X_0$, can also be formulated as an SDE:

$$dX_t = \left( -\tfrac{1}{2} X_t - \nabla_{X_t} \log p_t(X_t) \right) \beta_t \, dt + \sqrt{\beta_t} \, d\widetilde{W}_t, \quad (2)$$

where $\widetilde{W}_t$ is a reverse-time Wiener process. Given the score, i.e., the gradient of the log density with respect to the data, $\nabla_{X_t} \log p_t(X_t)$ for $t \in [0, T]$, we can sample data $X_0$ from random noise $X_T$ by solving Eq. (2). To generate data, the DDPM learns to estimate the score with a neural network $s_\theta$ parameterized by $\theta$. To estimate the score, $X_t$ is sampled from the distribution derived from Eq. (1) given data $X_0$:

$$X_t \mid X_0 \sim \mathcal{N}(\rho(X_0, t), \lambda(t)), \quad (3)$$

where $\rho(X_0, t) = e^{-\frac{1}{2}\int_0^t \beta_s \, ds} X_0$ and $\lambda(t) = I - e^{-\int_0^t \beta_s \, ds}$. The score can then be derived from Eq. (3): $\nabla_{X_t} \log p_t(X_t \mid X_0) = -\lambda(t)^{-1} \epsilon_t$, where $\epsilon_t \sim \mathcal{N}(0, \lambda(t))$ is the noise used to sample $X_t$ given $X_0$ (Popov et al. (2021)). To train the model $s_\theta(X_t, t)$ for all $t \in [0, T]$, the following loss is used:

$$L(\theta) = \mathbb{E}_t \, \mathbb{E}_{X_0} \, \mathbb{E}_{\epsilon_t} \left[ \left\| s_\theta(X_t, t) + \lambda(t)^{-1} \epsilon_t \right\|_2^2 \right], \quad (4)$$

which is an $L_2$ loss as in previous works (Ho et al. (2020); Song et al. (2021b)). Using the trained model $s_\theta(X_t, t)$, we can generate a sample $X_0$ from noise by solving Eq. (2).
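The perturbation kernel (3) and the score-matching loss (4) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation: the schedule constants `BETA0`, `BETA1` and the `score_net` interface are assumptions, and `eps` plays the role of $\epsilon_t \sim \mathcal{N}(0, \lambda(t)I)$.

```python
import numpy as np

BETA0, BETA1 = 0.05, 20.0  # assumed linear schedule beta_t = beta_0 + (beta_1 - beta_0) t

def int_beta(t):
    """Closed-form integral of beta_s over [0, t] for the linear schedule."""
    return BETA0 * t + 0.5 * (BETA1 - BETA0) * t ** 2

def perturb(x0, t, rng):
    """Sample X_t | X_0 ~ N(rho(X_0, t), lambda(t) I) as in Eq. (3)."""
    rho = np.exp(-0.5 * int_beta(t)) * x0
    lam = 1.0 - np.exp(-int_beta(t))                     # variance lambda(t)
    eps = np.sqrt(lam) * rng.standard_normal(x0.shape)   # eps_t ~ N(0, lambda(t) I)
    return rho + eps, eps, lam

def score_loss(score_net, x0, rng, t_min=1e-3):
    """One-sample Monte-Carlo estimate of Eq. (4):
    E || s_theta(X_t, t) + lambda(t)^{-1} eps_t ||_2^2."""
    t = rng.uniform(t_min, 1.0)
    xt, eps, lam = perturb(x0, t, rng)
    return np.mean((score_net(xt, t) + eps / lam) ** 2)
```

Under Eq. (3) the exact conditional score is $-\lambda(t)^{-1}\epsilon_t$, so a network that returns it drives this loss to zero.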
Grad-TTS generates data $X_0$ from $X_T$ by setting $T = 1$ and using a fixed discretization strategy (Song et al. (2021b)):

$$X_{t - \frac{1}{N}} = X_t + \frac{\beta_t}{N} \left( \frac{1}{2} X_t + \nabla_{X_t} \log p_t(X_t) \right) + \sqrt{\frac{\beta_t}{N}} \, z_t, \quad (5)$$

where $N$ is the number of steps used to solve the SDE, $t \in \{\frac{1}{N}, \frac{2}{N}, \ldots, 1\}$, and $z_t$ is standard Gaussian noise. (¹Demo: https://bit.ly/3oWhVJg) 2.2 CLASSIFIER GUIDANCE . A DDPM can be guided to generate samples with a desired condition, without fine-tuning, by introducing a classifier. Song et al. (2021b) used an unconditional DDPM to generate class-conditional images using a separately trained image classifier. Not only unconditional but also conditional DDPMs can be guided with a classifier, which contributed to achieving state-of-the-art performance in class-conditional image generation (Dhariwal & Nichol (2021)). For conditional generation, the classifier $p_t(y \mid X_t)$ is trained to classify the noisy data $X_t$ into the condition $y$. Modifying Eq. (5) yields the discretized SDE for conditional generation:

$$X_{t - \frac{1}{N}} = X_t + \frac{\beta_t}{N} \left( \frac{1}{2} X_t + \nabla_{X_t} \log p_t(X_t \mid y) \right) + \sqrt{\frac{\beta_t}{N}} \, z_t, \quad (6)$$

$$\nabla_{X_t} \log p_t(X_t \mid y) = \nabla_{X_t} \log p_t(X_t) + \nabla_{X_t} \log p_t(y \mid X_t). \quad (7)$$

Given the unconditional score and a classifier for the condition, a sample $X_0$ with condition $y$ can be generated using Eq. (6). 3 GUIDED-TTS . In this section, we present Guided-TTS, which aims to build a high-quality text-to-speech model without a transcript of the target speaker. While other TTS models directly learn to generate speech from text, Guided-TTS models the unconditional distribution of speech so as to utilize speech-only data, and guides the unconditional model to generate speech for a given text. To the best of our knowledge, Guided-TTS is the first TTS model that generates speech with an unconditional generative model. Both unconditional speech modeling and controllable generation with an unconditional generative model are well known to be challenging.
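Eqs. (5)-(7) amount to an Euler-type solver in which the classifier gradient is simply added to the unconditional score at every step. A hedged sketch, assuming the same linear schedule as above; the function interfaces are illustrative, not the paper's code:

```python
import numpy as np

def guided_step(xt, t, n_steps, score_fn, cls_grad_fn, rng, beta0=0.05, beta1=20.0):
    """One reverse step of Eq. (6): the conditional score of Eq. (7) is the
    unconditional score plus the classifier gradient grad_x log p_t(y | x)."""
    bt = beta0 + (beta1 - beta0) * t
    cond_score = score_fn(xt, t) + cls_grad_fn(xt, t)        # Eq. (7)
    z = rng.standard_normal(xt.shape)
    return xt + (bt / n_steps) * (0.5 * xt + cond_score) + np.sqrt(bt / n_steps) * z

def guided_sample(shape, n_steps, score_fn, cls_grad_fn, rng):
    """Generate X_0 from X_1 ~ N(0, I) with n_steps guided reverse steps (T = 1)."""
    x = rng.standard_normal(shape)
    for i in range(n_steps, 0, -1):                          # t = 1, (N-1)/N, ..., 1/N
        x = guided_step(x, i / n_steps, n_steps, score_fn, cls_grad_fn, rng)
    return x
```

With `cls_grad_fn` returning zeros this reduces to the unconditional sampler of Eq. (5).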
To tackle these challenges, we adopt a diffusion-based generative model for unconditional speech generation, which has the advantages of modeling complex distributions well and of easy controllability. Additionally, we introduce a phoneme classifier, which requires transcribed data for training, to guide the unconditional DDPM for TTS. In Guided-TTS, the generative model and the phoneme classifier are trained separately, so we can use different datasets to train each module. For instance, assume that only untranscribed speech data of the target speaker is available for TTS. With Guided-TTS, the phoneme classifier can still be trained on other transcribed datasets that contain rich, large-scale multi-speaker data. A phoneme classifier trained in such a manner effectively guides the generative model trained only on the untranscribed speech data of the target speaker. By doing so, with the two modules independent of each other, we can achieve a text-to-speech model without any transcript of the target speaker. Guided-TTS consists of four modules: the unconditional DDPM, the phoneme classifier, the duration predictor, and the speaker encoder, as shown in Fig. 1. The unconditional DDPM is the module that learns to generate mel-spectrograms unconditionally, and the remaining three modules are for TTS synthesis by guidance. We explain the unconditional DDPM in Section 3.1, followed by the method for guiding the unconditional model for TTS in Section 3.2. 3.1 UNCONDITIONAL DDPM . Our unconditional DDPM models the unconditional distribution of speech $P_X$ without any transcript. We assume that the training data for the diffusion-based model consists of tens of hours of untranscribed speech data from the target speaker $S$ for TTS. As our generative model learns only from speech data, training samples do not need to be aligned with text.
Thus, we use random chunks of untranscribed speech as training data to reduce the burden not only of speech transcription but also of segmentation when only long-form speech data is available for the target speaker. Given a mel-spectrogram $X = X_0$, we define a forward process as in Eq. (1), which gradually corrupts the data into noise, and approximate the reverse process in Eq. (2) by estimating the unconditional score $\nabla_{X_t} \log p(X_t)$ for each timestep $t$. At each iteration, $X_t$, $t \in [0, 1]$, is sampled from the mel-spectrogram $X_0$ as in Eq. (3), and the score is estimated with the neural network $s_\theta(X_t, t)$ parameterized by $\theta$. The training objective of our unconditional model is given in Eq. (4). Similar to Grad-TTS (Popov et al. (2021)), we regard the mel-spectrogram as a 2D image with a single channel and use the U-Net architecture (Ronneberger et al. (2015)) as $s_\theta$. We use an architecture of the same size as the one used to model $32 \times 32$ images in Ho et al. (2020) in order to capture long-term dependencies without any text information, whereas Grad-TTS uses a smaller architecture for conditional distribution modeling. | Combines an unconditional denoising diffusion probabilistic model (DDPM) with a separately trained phoneme classifier to guide the generation of mel-spectrograms towards a given aligned phoneme sequence, thus effectively obtaining a conditional generative model. This is based on the same principles that were applied before to images (Song et al. (2021b)). | SP:9d7f87de4b36b5207ee016169a157694e05094e9 |
Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach | $(N^2/M)$ bits (resp. $O(N\sqrt{N}/M)$ real values) to achieve minimax optimal generalization performance. Further, we show that the proposed algorithms can significantly reduce the communication complexity compared with state-of-the-art algorithms when distributedly training models to fit UCI benchmarking datasets. Moreover, each agent needs to share only about $200N/M$ bits to closely match the performance of the centralized algorithms, and these numbers are independent of the parameter and feature dimensions. 1 INTRODUCTION . Recently, decentralized optimization has become a mainstay of optimization research. In decentralized optimization, multiple local agents hold small to moderately sized private datasets and collaborate by iteratively solving their local problems while sharing some information with other agents. Most existing decentralized learning algorithms are deeply rooted in classical consensus-based approaches (Tsitsiklis, 1984), where the agents repeatedly share their local parameters with each other to reach an optimal consensual solution. However, the recent trend of using learning models in the overparameterized regime, with very high-dimensional parameters (He et al., 2016; Vaswani et al., 2017; Fedus et al., 2021), poses a significant challenge to such parameter-sharing approaches, mainly because sharing model parameters iteratively becomes excessively expensive as the parameter dimension grows. If the size of the local data is much smaller than that of the parameters, a more efficient approach might be to directly share the local data. However, this approach raises privacy concerns and is rarely used in practice.
Therefore, a fundamental question of decentralized learning in the overparameterized regime is: (Q) For overparameterized learning problems, how can we design decentralized algorithms that achieve the best optimization/generalization performance while exchanging the minimum amount of information? We partially answer (Q) in the context of distributed kernel learning (Vert et al., 2004). We depart from the popular consensus-based algorithms and propose an optimization framework that does not require the local agents to share model parameters or raw data. We focus on kernel learning because: (i) kernel methods provide an elegant way to model non-linear learning problems with complex data dependencies as simple linear problems (Vert et al., 2004; Hofmann et al., 2008), and (ii) kernel-based methods can be used to capture the behavior of a fully-trained deep network with large width (Jacot et al., 2018; Arora et al., 2019; 2020).

Table 1: Comparison of the total communication required per node by different algorithms in the non-overparameterized (NOP) and overparameterized (OP) regimes. Please see Appendix B for a detailed discussion of the algorithms. Here $N$ is the total sample size, "UB on $M$" denotes the upper bound on the number of nodes $M$, $d$ is the data dimension, $\beta \ge 2$ is a constant, and $T$ denotes the total number of communication (iteration) rounds used by the distributed algorithms.

Algorithm | Kernel | UB on $M$ | Communication (Real Values), NOP | OP
DKRR-CM (Lin et al., 2020) | Any | $O(N^{\frac{T+1}{2(T+2)}})$ | $dTN$ | $dTN$
DKRR-RF-CM (Liu et al., 2021) | RF | $O(N^{\frac{T+1}{2(T+2)}})$ | $O(T\sqrt{N})$ | $O(TN^{\beta})$
Decentralized-RF (Richards et al., 2020) | RF | $O(N^{1/3})$ | $O(T\sqrt{N})$ | $O(TN^{\beta})$
DKLA/COKE (Xu et al., 2020) | RF | Any $M$ | $T\sqrt{N}$ | $O(TN^{\beta})$
Algorithm 2 (this work) | RF | Any $M$ | $N\sqrt{N}/M$ | $O(N^{1+\beta}/M)$
Algorithm 2 (this work) | GIP | Any $M$ | $O(N^{2}/M)$ | $O(N^{2}/M)$

Distributed implementation of kernel learning problems is challenging.
Current state-of-the-art algorithms for kernel learning either rely on sharing raw data among agents and/or impose restrictions on the number of agents (Zhang et al., 2015; Lin et al., 2017; Koppel et al., 2018; Lin et al., 2020; Hu et al., 2020; Pradhan et al., 2021; Predd et al., 2006). Some recent approaches rely on specific random feature (RF) kernels to alleviate some of the above problems. These algorithms reformulate the (approximate) problem in the parameter domain and solve it by iteratively sharing the (potentially high-dimensional) parameters (Bouboulis et al., 2017; Richards et al., 2020; Xu et al., 2020; Liu et al., 2021).¹ These algorithms suffer from excessive communication overhead, especially in the overparameterized regime, where the number of parameters is larger than the data size $N$. For example, implementing the neural tangent kernel (NTK) with an RF kernel requires at least $O(N^{\beta})$, $\beta \ge 2$, random features (the parameter dimension) using the ReLU activation (Arora et al., 2019; Han et al., 2021).² For such problems, in this work, we propose a novel algorithmic framework for decentralized kernel learning. Below, we list the major contributions of our work. [GIP Kernel for Distributed Approximation] We define a new class of kernels suitable for distributed implementation, the generalized inner-product (GIP) kernel, which is fully characterized by the angle between a pair of feature vectors and their respective norms. Many kernels of practical importance, including the NTK, can be represented as GIP kernels. Further, we propose a multi-agent kernel approximation method for estimating the GIP and the popular RF kernels at individual agents.
[One-shot and Iterative Schemes] Based on the proposed kernel approximation, we develop two optimization algorithms: the first needs only a one-shot information exchange but requires sharing data labels among the agents; the second needs iterative information exchange but does not need to share the data labels. A key feature of these algorithms is that neither the raw data features nor the (high-dimensional) parameters are exchanged among agents. [Performance of the Approximation Framework] We analyze the optimization and generalization performance of the proposed approximation algorithms for the $\ell_2$ loss. We show that the GIP kernel requires communicating $O(N^2/M)$ bits and the RF kernel requires communicating $O(N\sqrt{N}/M)$ real values per agent to achieve minimax optimal generalization performance. Importantly, the required communication is independent of the function class and the optimization algorithm. We validate the performance of our approximation algorithms on UCI benchmarking datasets. In Table 1, we compare the communication requirements of the proposed approach to those of popular distributed kernel learning algorithms. Specifically, DKRR-CM (Lin et al., 2020) relies on sharing data and is therefore not preferred in practical settings. For the RF kernel, the proposed algorithm outperforms the other algorithms in both the non-overparameterized and the overparameterized regimes when $T > N/M$. In the overparameterized regime, the GIP kernel is more communication-efficient than the other algorithms. Finally, note that since our analysis is developed using the multi-agent kernel approximation, it does not impose any upper bound on the number of machines in the network. Notations: We use $\mathbb{R}$, $\mathbb{R}^d$, and $\mathbb{R}^{n \times m}$ to denote the set of real numbers, the $d$-dimensional Euclidean space, and the set of real matrices of size $n \times m$, respectively. We use $\mathbb{N}$ to denote the set of natural numbers. (¹For a detailed literature review please see Appendix B.)
(²To achieve an approximation error of $O(1/\sqrt{N})$.) $\mathcal{N}(0, \Sigma)$ is the multivariate normal distribution with zero mean and covariance $\Sigma$. The uniform distribution with support $[a, b]$ is denoted by $U[a, b]$. $\langle a, b \rangle$ (resp. $\langle a, b \rangle_{\mathcal{H}}$) denotes the inner product in Euclidean space (resp. in the Hilbert space $\mathcal{H}$); the inner product defines the usual norms in the corresponding spaces. The norm $\|A\|$ of a matrix $A$ denotes the operator norm induced by the $\ell_2$ vector norm. We denote by $[a]_i$ or $[a]^{(i)}$ the $i$th element of a vector $a$, and $[A \cdot a]^{(i)}_j$ denotes the $(i \cdot j)$th element of the vector $A \cdot a$. Moreover, $A_{(:,i)}$ is the $i$th column of $A$ and $[A]_{mk}$ is the element in the $m$th row and $k$th column. The notation $m \in [M]$ denotes $m \in \{1, \ldots, M\}$. Finally, $\mathbb{1}[E]$ is the indicator function of the event $E$. 2 PROBLEM STATEMENT . Given a probability distribution $\pi(x, y)$ over $\mathcal{X} \times \mathbb{R}$, we want to minimize the population loss

$$L(f) = \mathbb{E}_{x, y \sim \pi(x, y)}\left[\ell(f(x), y)\right], \quad (1)$$

where $x \in \mathcal{X} \subset \mathbb{R}^d$ and $y \in \mathbb{R}$ denote the features and the labels, respectively. Here, $f : \mathcal{X} \to \mathbb{R}$ is an estimate of the true label $y$. We consider a distributed system of $M$ agents, with each agent $m \in [M]$ having access to a locally available independently and identically distributed (i.i.d.) dataset $\mathcal{N}_m = \{x^{(i)}_m, y^{(i)}_m\}_{i=1}^n$ with $(x^{(i)}_m, y^{(i)}_m) \sim \pi(x, y)$.³ The total number of samples is $N = nM$. The goal of kernel learning with a kernel function $k(\cdot, \cdot) : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is to find a function $f \in \mathcal{H}$ (where $\mathcal{H}$ is the reproducing kernel Hilbert space (RKHS) associated with $k$ (Vert et al., 2004)) that minimizes (1). We aim to solve the following (decentralized) empirical risk minimization problem

$$\min_{f \in \mathcal{H}} \Big\{ \hat{R}(f) = \hat{L}(f) + \frac{\lambda}{2} \|f\|^2_{\mathcal{H}} = \frac{1}{M} \sum_{m=1}^M \hat{L}_m(f) + \frac{\lambda}{2} \|f\|^2_{\mathcal{H}} \Big\}, \quad (2)$$

where $\lambda > 0$ is the regularization parameter and $\hat{L}_m(f) = \frac{1}{n} \sum_{i \in \mathcal{N}_m} \ell(f(x^{(i)}_m), y^{(i)}_m)$ is the local loss at each agent $m \in [M]$.
Problem (2) can be reformulated using the Representer theorem (Schölkopf et al., 2002), with $\hat{L}_m(\alpha) = \frac{1}{n} \sum_{i \in \mathcal{N}_m} \ell([K\alpha]^{(i)}_m, y^{(i)}_m)$ for all $m \in [M]$, as

$$\min_{\alpha \in \mathbb{R}^N} \Big\{ \hat{R}(\alpha) = \hat{L}(\alpha) + \frac{\lambda}{2} \|\alpha\|^2_K = \frac{1}{M} \sum_{m=1}^M \hat{L}_m(\alpha) + \frac{\lambda}{2} \|\alpha\|^2_K \Big\}, \quad (3)$$

where $K \in \mathbb{R}^{N \times N}$ is the kernel matrix with elements $k(x^{(i)}_m, x^{(j)}_{\bar{m}})$ for all $m, \bar{m} \in [M]$, $i \in \mathcal{N}_m$, and $j \in \mathcal{N}_{\bar{m}}$. The supervised (centralized) learning problem (3) is a classical problem in statistical learning (Caponnetto & De Vito, 2007) and has been popularized recently due to its connections with overparameterized neural network training (Jacot et al., 2018; Arora et al., 2019). An alternative way to solve problem (2) (and (3)) is to parameterize $f$ in (2) by $\theta \in \mathbb{R}^D$ as $f_D(x; \theta) = \langle \theta, \phi_D(x) \rangle$, where $\phi_D : \mathcal{X} \to \mathbb{R}^D$ is a finite-dimensional feature map. Here, $\phi_D(\cdot)$ is designed to approximate $k(\cdot, \cdot)$ with $k_D(x, x') = \langle \phi_D(x), \phi_D(x') \rangle$ (Rahimi & Recht, 2008). Using this approximation, problem (2) (and (3)) can be written in the parameter domain, with $\hat{L}_{m,D}(\theta) = \frac{1}{n} \sum_{i \in \mathcal{N}_m} \ell(\langle \theta, \phi_D(x^{(i)}_m) \rangle, y^{(i)}_m)$ for all $m \in [M]$, as

$$\min_{\theta \in \mathbb{R}^D} \Big\{ \hat{R}_D(\theta) = \hat{L}_D(\theta) + \frac{\lambda}{2} \|\theta\|^2 = \frac{1}{M} \sum_{m=1}^M \hat{L}_{m,D}(\theta) + \frac{\lambda}{2} \|\theta\|^2 \Big\}. \quad (4)$$

Note that (4) is a $D$-dimensional problem, whereas (3) is an $N$-dimensional problem. Since (4) is in the standard finite-sum form, it can be solved using standard parameter-sharing decentralized optimization algorithms (e.g., DGD (Richards et al., 2020) or ADMM (Xu et al., 2020)), which share $D$-dimensional vectors iteratively. However, when (4) is overparameterized with very large $D$ (e.g., $D = O(N^\beta)$ with $\beta \ge 2$ for the NTK), such parameter-sharing approaches are no longer feasible because of the increased communication complexity. An intuitive way to avoid sharing these high-dimensional parameters is to directly solve (3).
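For intuition, when $\ell$ is the squared loss $\ell(u, y) = \frac{1}{2}(u - y)^2$, the centralized problem (3) has a closed-form minimizer: setting the gradient $K\big[\frac{1}{N}(K\alpha - y) + \lambda\alpha\big]$ to zero gives $\alpha = (K + N\lambda I)^{-1} y$. A minimal centralized sketch; the Gaussian kernel below is an illustrative stand-in for the GIP/RF kernels studied in the paper:

```python
import numpy as np

def gaussian_kernel(A, B, gamma=0.5):
    """k(x, x') = exp(-gamma ||x - x'||^2); stand-in for the paper's GIP/RF kernels."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def krr_fit(K, y, lam):
    """Minimizer of (3) for l(u, y) = 0.5 (u - y)^2: solve (K + N lam I) alpha = y."""
    N = K.shape[0]
    return np.linalg.solve(K + N * lam * np.eye(N), y)

def krr_predict(Kxz, alpha):
    """f(x) = sum_i alpha_i k(x, x_i), with Kxz[t, i] = k(x_test_t, x_train_i)."""
    return Kxz @ alpha
```

The decentralized question is then how to form (or avoid forming) the cross-agent entries of $K$ without exchanging raw data.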
However, it is by no means clear if and how one can efficiently solve (3) in a decentralized manner. The key challenge is that, unlike in conventional decentralized learning problems, here each loss term $\ell([K\alpha]^{(i)}_m, y^{(i)}_m)$ is not separable over the agents.³ Instead, each agent $m$'s local problem depends on $k(x^{(i)}_m, x^{(j)}_{\bar{m}})$ with $m \neq \bar{m}$. Importantly, without directly transmitting the data itself (as has been done in Predd et al. (2006); Koppel et al. (2018); Lin et al. (2020)), it is not clear how one can obtain the required $(m \cdot i)$th element of $K\alpha$. Therefore, to develop algorithms that avoid sharing high-dimensional parameters by directly (approximately) solving (3), it is important to identify kernels that are suitable for decentralized implementation and to propose efficient algorithms for learning with such kernels. (³The techniques presented in this work can be easily extended to unbalanced datasets, i.e., when each agent has a dataset of a different size.) | This paper discusses a random feature-based multi-agent kernel learning approach. For both generalized inner-product (GIP) and random feature (RF) kernels, the authors propose, at each agent, to exchange the random feature matrix (instead of the model parameters). By considering the problem of kernel ridge regression, some theoretical results including the kernel matrix approximation error (Lemma 4.1), training (Theorem 4.2), and generalization performance (Theorem 4.3) are obtained in Section 4. Some numerical experiments on UCI datasets are provided in Section 5. The authors argue (e.g., in Corollary 1) that the proposed approach is more efficient as it requires less communication to achieve minimax optimal generalization performance. | SP:82d596bb2d0f828a948b930bb54ab18312fdf17e |
Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach | $(N^2/M)$ bits (resp. $O(N\sqrt{N}/M)$ real values) to achieve minimax optimal generalization performance. Further, we show that the proposed algorithms can significantly reduce the communication complexity compared with state-of-the-art algorithms when distributedly training models to fit UCI benchmarking datasets. Moreover, each agent needs to share only about $200N/M$ bits to closely match the performance of the centralized algorithms, and these numbers are independent of the parameter and feature dimensions. 1 INTRODUCTION . Recently, decentralized optimization has become a mainstay of optimization research. In decentralized optimization, multiple local agents hold small to moderately sized private datasets and collaborate by iteratively solving their local problems while sharing some information with other agents. Most existing decentralized learning algorithms are deeply rooted in classical consensus-based approaches (Tsitsiklis, 1984), where the agents repeatedly share their local parameters with each other to reach an optimal consensual solution. However, the recent trend of using learning models in the overparameterized regime, with very high-dimensional parameters (He et al., 2016; Vaswani et al., 2017; Fedus et al., 2021), poses a significant challenge to such parameter-sharing approaches, mainly because sharing model parameters iteratively becomes excessively expensive as the parameter dimension grows. If the size of the local data is much smaller than that of the parameters, a more efficient approach might be to directly share the local data. However, this approach raises privacy concerns and is rarely used in practice.
Therefore, a fundamental question of decentralized learning in the overparameterized regime is: (Q) For overparameterized learning problems, how can we design decentralized algorithms that achieve the best optimization/generalization performance while exchanging the minimum amount of information? We partially answer (Q) in the context of distributed kernel learning (Vert et al., 2004). We depart from the popular consensus-based algorithms and propose an optimization framework that does not require the local agents to share model parameters or raw data. We focus on kernel learning because: (i) kernel methods provide an elegant way to model non-linear learning problems with complex data dependencies as simple linear problems (Vert et al., 2004; Hofmann et al., 2008), and (ii) kernel-based methods can be used to capture the behavior of a fully-trained deep network with large width (Jacot et al., 2018; Arora et al., 2019; 2020).

Table 1: Comparison of the total communication required per node by different algorithms in the non-overparameterized (NOP) and overparameterized (OP) regimes. Please see Appendix B for a detailed discussion of the algorithms. Here $N$ is the total sample size, "UB on $M$" denotes the upper bound on the number of nodes $M$, $d$ is the data dimension, $\beta \ge 2$ is a constant, and $T$ denotes the total number of communication (iteration) rounds used by the distributed algorithms.

Algorithm | Kernel | UB on $M$ | Communication (Real Values), NOP | OP
DKRR-CM (Lin et al., 2020) | Any | $O(N^{\frac{T+1}{2(T+2)}})$ | $dTN$ | $dTN$
DKRR-RF-CM (Liu et al., 2021) | RF | $O(N^{\frac{T+1}{2(T+2)}})$ | $O(T\sqrt{N})$ | $O(TN^{\beta})$
Decentralized-RF (Richards et al., 2020) | RF | $O(N^{1/3})$ | $O(T\sqrt{N})$ | $O(TN^{\beta})$
DKLA/COKE (Xu et al., 2020) | RF | Any $M$ | $T\sqrt{N}$ | $O(TN^{\beta})$
Algorithm 2 (this work) | RF | Any $M$ | $N\sqrt{N}/M$ | $O(N^{1+\beta}/M)$
Algorithm 2 (this work) | GIP | Any $M$ | $O(N^{2}/M)$ | $O(N^{2}/M)$

Distributed implementation of kernel learning problems is challenging.
Current state-of-the-art algorithms for kernel learning either rely on sharing raw data among agents and/or impose restrictions on the number of agents (Zhang et al., 2015; Lin et al., 2017; Koppel et al., 2018; Lin et al., 2020; Hu et al., 2020; Pradhan et al., 2021; Predd et al., 2006). Some recent approaches rely on specific random feature (RF) kernels to alleviate some of the above problems. These algorithms reformulate the (approximate) problem in the parameter domain and solve it by iteratively sharing the (potentially high-dimensional) parameters (Bouboulis et al., 2017; Richards et al., 2020; Xu et al., 2020; Liu et al., 2021).¹ These algorithms suffer from excessive communication overhead, especially in the overparameterized regime, where the number of parameters is larger than the data size $N$. For example, implementing the neural tangent kernel (NTK) with an RF kernel requires at least $O(N^{\beta})$, $\beta \ge 2$, random features (the parameter dimension) using the ReLU activation (Arora et al., 2019; Han et al., 2021).² For such problems, in this work, we propose a novel algorithmic framework for decentralized kernel learning. Below, we list the major contributions of our work. [GIP Kernel for Distributed Approximation] We define a new class of kernels suitable for distributed implementation, the generalized inner-product (GIP) kernel, which is fully characterized by the angle between a pair of feature vectors and their respective norms. Many kernels of practical importance, including the NTK, can be represented as GIP kernels. Further, we propose a multi-agent kernel approximation method for estimating the GIP and the popular RF kernels at individual agents.
[One-shot and Iterative Schemes] Based on the proposed kernel approximation, we develop two optimization algorithms: the first needs only a one-shot information exchange but requires sharing data labels among the agents; the second needs iterative information exchange but does not need to share the data labels. A key feature of these algorithms is that neither the raw data features nor the (high-dimensional) parameters are exchanged among agents. [Performance of the Approximation Framework] We analyze the optimization and generalization performance of the proposed approximation algorithms for the $\ell_2$ loss. We show that the GIP kernel requires communicating $O(N^2/M)$ bits and the RF kernel requires communicating $O(N\sqrt{N}/M)$ real values per agent to achieve minimax optimal generalization performance. Importantly, the required communication is independent of the function class and the optimization algorithm. We validate the performance of our approximation algorithms on UCI benchmarking datasets. In Table 1, we compare the communication requirements of the proposed approach to those of popular distributed kernel learning algorithms. Specifically, DKRR-CM (Lin et al., 2020) relies on sharing data and is therefore not preferred in practical settings. For the RF kernel, the proposed algorithm outperforms the other algorithms in both the non-overparameterized and the overparameterized regimes when $T > N/M$. In the overparameterized regime, the GIP kernel is more communication-efficient than the other algorithms. Finally, note that since our analysis is developed using the multi-agent kernel approximation, it does not impose any upper bound on the number of machines in the network. Notations: We use $\mathbb{R}$, $\mathbb{R}^d$, and $\mathbb{R}^{n \times m}$ to denote the set of real numbers, the $d$-dimensional Euclidean space, and the set of real matrices of size $n \times m$, respectively. We use $\mathbb{N}$ to denote the set of natural numbers. (¹For a detailed literature review please see Appendix B.)
(²To achieve an approximation error of $O(1/\sqrt{N})$.) $\mathcal{N}(0, \Sigma)$ is the multivariate normal distribution with zero mean and covariance $\Sigma$. The uniform distribution with support $[a, b]$ is denoted by $U[a, b]$. $\langle a, b \rangle$ (resp. $\langle a, b \rangle_{\mathcal{H}}$) denotes the inner product in Euclidean space (resp. in the Hilbert space $\mathcal{H}$); the inner product defines the usual norms in the corresponding spaces. The norm $\|A\|$ of a matrix $A$ denotes the operator norm induced by the $\ell_2$ vector norm. We denote by $[a]_i$ or $[a]^{(i)}$ the $i$th element of a vector $a$, and $[A \cdot a]^{(i)}_j$ denotes the $(i \cdot j)$th element of the vector $A \cdot a$. Moreover, $A_{(:,i)}$ is the $i$th column of $A$ and $[A]_{mk}$ is the element in the $m$th row and $k$th column. The notation $m \in [M]$ denotes $m \in \{1, \ldots, M\}$. Finally, $\mathbb{1}[E]$ is the indicator function of the event $E$. 2 PROBLEM STATEMENT . Given a probability distribution $\pi(x, y)$ over $\mathcal{X} \times \mathbb{R}$, we want to minimize the population loss

$$L(f) = \mathbb{E}_{x, y \sim \pi(x, y)}\left[\ell(f(x), y)\right], \quad (1)$$

where $x \in \mathcal{X} \subset \mathbb{R}^d$ and $y \in \mathbb{R}$ denote the features and the labels, respectively. Here, $f : \mathcal{X} \to \mathbb{R}$ is an estimate of the true label $y$. We consider a distributed system of $M$ agents, with each agent $m \in [M]$ having access to a locally available independently and identically distributed (i.i.d.) dataset $\mathcal{N}_m = \{x^{(i)}_m, y^{(i)}_m\}_{i=1}^n$ with $(x^{(i)}_m, y^{(i)}_m) \sim \pi(x, y)$.³ The total number of samples is $N = nM$. The goal of kernel learning with a kernel function $k(\cdot, \cdot) : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is to find a function $f \in \mathcal{H}$ (where $\mathcal{H}$ is the reproducing kernel Hilbert space (RKHS) associated with $k$ (Vert et al., 2004)) that minimizes (1). We aim to solve the following (decentralized) empirical risk minimization problem

$$\min_{f \in \mathcal{H}} \Big\{ \hat{R}(f) = \hat{L}(f) + \frac{\lambda}{2} \|f\|^2_{\mathcal{H}} = \frac{1}{M} \sum_{m=1}^M \hat{L}_m(f) + \frac{\lambda}{2} \|f\|^2_{\mathcal{H}} \Big\}, \quad (2)$$

where $\lambda > 0$ is the regularization parameter and $\hat{L}_m(f) = \frac{1}{n} \sum_{i \in \mathcal{N}_m} \ell(f(x^{(i)}_m), y^{(i)}_m)$ is the local loss at each agent $m \in [M]$.
Problem (2) can be reformulated using the Representer theorem (Schölkopf et al., 2002), with $\hat{L}_m(\alpha) = \frac{1}{n} \sum_{i \in \mathcal{N}_m} \ell([K\alpha]^{(i)}_m, y^{(i)}_m)$ for all $m \in [M]$, as

$$\min_{\alpha \in \mathbb{R}^N} \Big\{ \hat{R}(\alpha) = \hat{L}(\alpha) + \frac{\lambda}{2} \|\alpha\|^2_K = \frac{1}{M} \sum_{m=1}^M \hat{L}_m(\alpha) + \frac{\lambda}{2} \|\alpha\|^2_K \Big\}, \quad (3)$$

where $K \in \mathbb{R}^{N \times N}$ is the kernel matrix with elements $k(x^{(i)}_m, x^{(j)}_{\bar{m}})$ for all $m, \bar{m} \in [M]$, $i \in \mathcal{N}_m$, and $j \in \mathcal{N}_{\bar{m}}$. The supervised (centralized) learning problem (3) is a classical problem in statistical learning (Caponnetto & De Vito, 2007) and has been popularized recently due to its connections with overparameterized neural network training (Jacot et al., 2018; Arora et al., 2019). An alternative way to solve problem (2) (and (3)) is to parameterize $f$ in (2) by $\theta \in \mathbb{R}^D$ as $f_D(x; \theta) = \langle \theta, \phi_D(x) \rangle$, where $\phi_D : \mathcal{X} \to \mathbb{R}^D$ is a finite-dimensional feature map. Here, $\phi_D(\cdot)$ is designed to approximate $k(\cdot, \cdot)$ with $k_D(x, x') = \langle \phi_D(x), \phi_D(x') \rangle$ (Rahimi & Recht, 2008). Using this approximation, problem (2) (and (3)) can be written in the parameter domain, with $\hat{L}_{m,D}(\theta) = \frac{1}{n} \sum_{i \in \mathcal{N}_m} \ell(\langle \theta, \phi_D(x^{(i)}_m) \rangle, y^{(i)}_m)$ for all $m \in [M]$, as

$$\min_{\theta \in \mathbb{R}^D} \Big\{ \hat{R}_D(\theta) = \hat{L}_D(\theta) + \frac{\lambda}{2} \|\theta\|^2 = \frac{1}{M} \sum_{m=1}^M \hat{L}_{m,D}(\theta) + \frac{\lambda}{2} \|\theta\|^2 \Big\}. \quad (4)$$

Note that (4) is a $D$-dimensional problem, whereas (3) is an $N$-dimensional problem. Since (4) is in the standard finite-sum form, it can be solved using standard parameter-sharing decentralized optimization algorithms (e.g., DGD (Richards et al., 2020) or ADMM (Xu et al., 2020)), which share $D$-dimensional vectors iteratively. However, when (4) is overparameterized with very large $D$ (e.g., $D = O(N^\beta)$ with $\beta \ge 2$ for the NTK), such parameter-sharing approaches are no longer feasible because of the increased communication complexity. An intuitive way to avoid sharing these high-dimensional parameters is to directly solve (3).
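The finite-dimensional map $\phi_D$ above can be instantiated, for shift-invariant kernels, with random Fourier features (Rahimi & Recht, 2008). A sketch for the Gaussian kernel $k(x, x') = e^{-\gamma\|x - x'\|^2}$; this is one concrete choice of RF kernel, not the paper's GIP construction:

```python
import numpy as np

def make_rff(d, D, gamma, rng):
    """Draw the random parameters of phi_D: rows of W ~ N(0, 2*gamma*I), b ~ U[0, 2*pi]."""
    W = np.sqrt(2.0 * gamma) * rng.standard_normal((D, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return W, b

def phi(X, W, b):
    """phi_D(x) = sqrt(2/D) cos(W x + b), so <phi(x), phi(x')> ~= exp(-gamma ||x - x'||^2)."""
    D = W.shape[0]
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)
```

The $D$-dimensional problem (4) then stands in for the $N$-dimensional problem (3), with an approximation error that decays as $O(1/\sqrt{D})$.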
However, it is by no means clear if and how one can efficiently solve (3) in a decentralized manner. The key challenge is that, unlike in conventional decentralized learning problems, here each loss term ℓ([Kα]_m^(i), y_m^(i)) is not separable over the agents.³ Instead, each agent m's local problem depends on k(x_m^(i), x_m̄^(j)) with m ≠ m̄. Importantly, without directly transmitting the data itself (as has been done in Predd et al. (2006); Koppel et al. (2018); Lin et al. (2020)), it is not clear how one can obtain the required (m·i)th element of Kα. Therefore, to develop algorithms that avoid sharing high-dimensional parameters by directly (approximately) solving (3), it is important to identify kernels that are suitable for decentralized implementation and to propose efficient algorithms for learning with such kernels. ³The techniques presented in this work can be easily extended to unbalanced datasets, i.e., when each agent has a dataset of a different size. | In the decentralization problem, data are distributed among many agents, and each agent wants to maintain some privacy. In this paper, the authors study the decentralized empirical risk minimization problem in a reproducing kernel Hilbert space. Two large classes of kernels are considered: (1) the generalized inner-product (GIP) kernel based on the arc-cosine kernel (proposed in this work), and (2) the random feature (RF) kernel. To attain decentralization, the authors approximate kernels by the inner product of two finite-dimensional vectors, and propose algorithms (one-shot and iterative) to optimize private models. In addition, the authors theoretically study the approximation error of the kernels, the performance of the optimization algorithms, and the generalization error. Finally, experiments are presented to validate the algorithms and theoretical results. | SP:82d596bb2d0f828a948b930bb54ab18312fdf17e |
Decentralized Learning for Overparameterized Problems: A Multi-Agent Kernel Approximation Approach | (N²/M) bits (resp. O(N√N/M) real values) to achieve minimax-optimal generalization performance. Further, we show that the proposed algorithms can significantly reduce the communication complexity compared with state-of-the-art algorithms for distributedly training models to fit UCI benchmarking datasets. Moreover, each agent needs to share about 200N/M bits to closely match the performance of the centralized algorithms, and these numbers are independent of the parameter and feature dimensions. 1 INTRODUCTION. Recently, decentralized optimization has become a mainstay of optimization research. In decentralized optimization, multiple local agents hold small to moderately sized private datasets and collaborate by iteratively solving their local problems while sharing some information with other agents. Most existing decentralized learning algorithms are deeply rooted in classical consensus-based approaches (Tsitsiklis, 1984), where the agents repeatedly share their local parameters with each other to reach an optimal consensual solution. However, the recent trend of using learning models in the overparameterized regime with very high-dimensional parameters (He et al., 2016; Vaswani et al., 2017; Fedus et al., 2021) poses a significant challenge to such parameter-sharing approaches, mainly because sharing model parameters iteratively becomes excessively expensive as the parameter dimension grows. If the size of the local data is much smaller than that of the parameters, a perhaps more efficient alternative is to directly share the local data. However, this approach raises privacy concerns, and it is rarely used in practice.
Therefore, a fundamental question of decentralized learning in the overparameterized regime is: (Q) For overparameterized learning problems, how can we design decentralized algorithms that achieve the best optimization/generalization performance while exchanging the minimum amount of information? We partially answer (Q) in the context of distributed kernel learning (Vert et al., 2004). We depart from the popular consensus-based algorithms and propose an optimization framework that does not require the local agents to share model parameters or raw data. We focus on kernel learning because: (i) kernel methods provide an elegant way to model non-linear learning problems with complex data dependencies as simple linear problems (Vert et al., 2004; Hofmann et al., 2008), and (ii) kernel-based methods can be used to capture the behavior of a fully trained deep network with large width (Jacot et al., 2018; Arora et al., 2019; 2020).
Table 1: Comparison of the total communication required per node by different algorithms in the non-overparameterized (NOP) and overparameterized (OP) regimes. Please see Appendix B for a detailed discussion of the algorithms. Here N is the entire sample size, "UB on M" denotes the upper bound on the number of nodes M, d is the data dimension, β ≥ 2 is a constant, and T denotes the total number of communication (iteration) rounds used by the distributed algorithms.
DKRR-CM (Lin et al., 2020): kernel Any; UB on M: O(N^((T+1)/(2(T+2)))); NOP: dTN; OP: dTN.
DKRR-RF-CM (Liu et al., 2021): kernel RF; UB on M: O(N^((T+1)/(2(T+2)))); NOP: O(T√N); OP: O(TN^β).
Decentralized-RF (Richards et al., 2020): kernel RF; UB on M: O(N^(1/3)); NOP: O(T√N); OP: O(TN^β).
DKLA/COKE (Xu et al., 2020): kernel RF; UB on M: any M; NOP: T√N; OP: O(TN^β).
Algorithm 1 (this work): kernel RF; UB on M: any M; NOP: N√N/M; OP: O(N^(1+β)/M).
Algorithm 2 (this work): kernel GIP; UB on M: any M; NOP: O(N²/M); OP: O(N²/M).
Distributed implementation of kernel learning problems is challenging.
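Table 1's GIP rows refer to kernels that are fully determined by the two feature norms and the angle between the features. A classical member of this family (our illustration; the paper's own GIP construction may differ) is the degree-1 arc-cosine kernel of Cho & Saul (2009), k₁(x, x′) = (1/π)‖x‖‖x′‖(sin θ + (π − θ) cos θ), which also satisfies the random-feature identity k₁(x, x′) = 2·E_w[ReLU(w⊤x) ReLU(w⊤x′)] for w ∼ N(0, I). A sketch checking this by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

def arccos1(x, xp):
    """Degree-1 arc-cosine kernel: depends only on ||x||, ||x'||, and the angle."""
    nx, nxp = np.linalg.norm(x), np.linalg.norm(xp)
    cos_t = np.clip(x @ xp / (nx * nxp), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return nx * nxp * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / np.pi

d, D = 5, 200_000                          # feature dimension, Monte Carlo samples
x, xp = rng.normal(size=d), rng.normal(size=d)

# Random ReLU features: k1(x, x') = 2 * E_w[relu(w.x) relu(w.x')], w ~ N(0, I).
W = rng.normal(size=(D, d))
feat = lambda z: np.maximum(W @ z, 0.0) * np.sqrt(2.0 / D)
approx = feat(x) @ feat(xp)

print(arccos1(x, xp), approx)              # the two values should be close
```

Note that k₁(x, x) = ‖x‖², so the kernel is not shift-invariant, unlike the Gaussian RF kernel.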
Current state-of-the-art algorithms for kernel learning either rely on sharing raw data among agents and/or impose restrictions on the number of agents (Zhang et al., 2015; Lin et al., 2017; Koppel et al., 2018; Lin et al., 2020; Hu et al., 2020; Pradhan et al., 2021; Predd et al., 2006). Some recent approaches rely on specific random feature (RF) kernels to alleviate some of the above problems. These algorithms reformulate the (approximate) problem in the parameter domain and solve it by iteratively sharing the (potentially high-dimensional) parameters (Bouboulis et al., 2017; Richards et al., 2020; Xu et al., 2020; Liu et al., 2021).¹ Such algorithms suffer from excessive communication overhead, especially in the overparameterized regime where the number of parameters is larger than the data size N. For example, implementing the neural tangent kernel (NTK) with an RF kernel requires at least O(N^β), β ≥ 2, random features (the parameter dimension) using the ReLU activation (Arora et al., 2019; Han et al., 2021).² For such problems, in this work, we propose a novel algorithmic framework for decentralized kernel learning. Below, we list the major contributions of our work. [GIP Kernel for Distributed Approximation] We define a new class of kernels suitable for distributed implementation, the generalized inner-product (GIP) kernel, that is fully characterized by the angle between a pair of feature vectors and their respective norms. Many kernels of practical importance, including the NTK, can be represented as GIP kernels. Further, we propose a multi-agent kernel approximation method for estimating the GIP and the popular RF kernels at individual agents.
[One-Shot and Iterative Schemes] Based on the proposed kernel approximation, we develop two optimization algorithms: the first one needs only a one-shot information exchange but requires sharing data labels among the agents; the second one needs iterative information exchange but does not need to share the data labels. A key feature of these algorithms is that neither the raw data features nor the (high-dimensional) parameters are exchanged among agents. [Performance of the Approximation Framework] We analyze the optimization and generalization performance of the proposed approximation algorithms for the ℓ2 loss. We show that the GIP kernel requires communicating O(N²/M) bits and the RF kernel requires communicating O(N√N/M) real values per agent to achieve minimax-optimal generalization performance. Importantly, the required communication is independent of the function class and the optimization algorithm. We validate the performance of our approximation algorithms on UCI benchmarking datasets. In Table 1, we compare the communication requirements of the proposed approach to those of popular distributed kernel learning algorithms. Specifically, DKRR-CM (Lin et al., 2020) relies on sharing data and is therefore not preferred in practical settings. For the RF kernel, the proposed algorithm outperforms the other algorithms in both the non-overparameterized and the overparameterized regimes when T > N/M. In the overparameterized regime, the GIP kernel is more communication-efficient than the other algorithms. Finally, note that since our analysis is developed using the multi-agent kernel approximation, it does not impose any upper bound on the number of machines in the network. Notations: We use R, R^d, and R^(n×m) to denote the set of real numbers, the d-dimensional Euclidean space, and the set of real matrices of size n×m, respectively. We use N to denote the set of natural numbers. ¹For a detailed literature review, please see Appendix B.
²To achieve an approximation error of O(1/√N). N(0, Σ) is the multivariate normal distribution with zero mean and covariance Σ. The uniform distribution with support [a, b] is denoted by U[a, b]. ⟨a, b⟩ (resp. ⟨a, b⟩_H) denotes the inner product in Euclidean space (resp. in the Hilbert space H); these inner products induce the usual norms in the corresponding spaces. The norm ‖A‖ of a matrix A denotes the operator norm induced by the ℓ2 vector norm. We denote by [a]_i or [a]^(i) the ith element of a vector a, and [A·a]_j^(i) denotes the (i·j)th element of the vector A·a. Moreover, A_(:,i) is the ith column of A, and [A]_mk is the element in the mth row and kth column. The notation m ∈ [M] means m ∈ {1, …, M}. Finally, 1[E] is the indicator function of the event E.
2 PROBLEM STATEMENT. Given a probability distribution π(x, y) over X × R, we want to minimize the population loss
L(f) = E_(x,y)∼π(x,y)[ℓ(f(x), y)],  (1)
where x ∈ X ⊂ R^d and y ∈ R denote the features and the labels, respectively. Here, f : X → R is an estimate of the true label y. We consider a distributed system of M agents, with each agent m ∈ [M] having access to a locally available independent and identically distributed (i.i.d.) dataset³ N_m = {x_m^(i), y_m^(i)}_{i=1}^n with (x_m^(i), y_m^(i)) ∼ π(x, y). The total number of samples is N = nM. The goal of kernel learning with a kernel function k(·,·) : X × X → R is to find a function f ∈ H (where H is the reproducing kernel Hilbert space (RKHS) associated with k (Vert et al., 2004)) that minimizes (1). We aim to solve the following (decentralized) empirical risk minimization problem
min_{f∈H} { R̂(f) = L̂(f) + (λ/2)‖f‖_H² = (1/M) ∑_{m=1}^M L̂_m(f) + (λ/2)‖f‖_H² },  (2)
where λ > 0 is the regularization parameter and L̂_m(f) = (1/n) ∑_{i∈N_m} ℓ(f(x_m^(i)), y_m^(i)) is the local loss at each agent m ∈ [M].
Problem (2) can be reformulated using the representer theorem (Schölkopf et al., 2002), with L̂_m(α) = (1/n) ∑_{i∈N_m} ℓ([Kα]_m^(i), y_m^(i)) for all m ∈ [M], as
min_{α∈R^N} { R̂(α) = L̂(α) + (λ/2)‖α‖_K² = (1/M) ∑_{m=1}^M L̂_m(α) + (λ/2)‖α‖_K² },  (3)
where K ∈ R^(N×N) is the kernel matrix with elements k(x_m^(i), x_m̄^(j)) for all m, m̄ ∈ [M], i ∈ N_m, and j ∈ N_m̄. The supervised (centralized) learning problem (3) is a classical problem in statistical learning (Caponnetto & De Vito, 2007) and has been popularized recently due to connections with overparameterized neural network training (Jacot et al., 2018; Arora et al., 2019). An alternative way to solve problem (2) (and (3)) is to parameterize f in (2) by θ ∈ R^D as f_D(x; θ) = ⟨θ, φ_D(x)⟩, where φ_D : X → R^D is a finite-dimensional feature map. Here, φ_D(·) is designed to approximate k(·,·) with k_D(x, x′) = ⟨φ_D(x), φ_D(x′)⟩ (Rahimi & Recht, 2008). Using this approximation, problem (2) (and (3)) can be written in the parameter domain, with L̂_{m,D}(θ) = (1/n) ∑_{i∈N_m} ℓ(⟨θ, φ_D(x_m^(i))⟩, y_m^(i)) for all m ∈ [M], as
min_{θ∈R^D} { R̂_D(θ) = L̂_D(θ) + (λ/2)‖θ‖² = (1/M) ∑_{m=1}^M L̂_{m,D}(θ) + (λ/2)‖θ‖² }.  (4)
Note that (4) is a D-dimensional problem, whereas (3) is an N-dimensional problem. Since (4) is in the standard finite-sum form, it can be solved using standard parameter-sharing decentralized optimization algorithms (e.g., DGD (Richards et al., 2020) or ADMM (Xu et al., 2020)), which share D-dimensional vectors iteratively. However, when (4) is overparameterized with very large D (e.g., D = O(N^β) with β ≥ 2 for the NTK), such parameter-sharing approaches are no longer feasible because of the increased communication complexity. An intuitive way to avoid sharing these high-dimensional parameters is to directly solve (3).
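For intuition about what directly solving (3) entails, note that with the squared loss the problem has a closed form in the centralized setting. A minimal numpy sketch (assuming ℓ(u, y) = (u − y)² and the 1/N, λ/2 scaling above; under that scaling the gradient is K[(2/N)(Kα − y) + λα], so α = (K + (Nλ/2) I)⁻¹ y is a minimizer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy centralized data: N samples in d dimensions with noisy targets.
N, d, lam = 40, 3, 1e-2
X = rng.normal(size=(N, d))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=N)

# RBF kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / 2).
sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq / 2.0)

# With squared loss, grad R = K @ ((2/N) * (K @ a - y) + lam * a),
# so any a with (K + (N * lam / 2) I) a = y makes the gradient vanish.
alpha = np.linalg.solve(K + (N * lam / 2.0) * np.eye(N), y)

print("train MSE:", np.mean((K @ alpha - y) ** 2))
```

Assembling K (hence the whole solve) requires cross-agent kernel entries, which is exactly the obstacle the next paragraph describes.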
However, it is by no means clear if and how one can efficiently solve (3) in a decentralized manner. The key challenge is that, unlike in conventional decentralized learning problems, here each loss term ℓ([Kα]_m^(i), y_m^(i)) is not separable over the agents.³ Instead, each agent m's local problem depends on k(x_m^(i), x_m̄^(j)) with m ≠ m̄. Importantly, without directly transmitting the data itself (as has been done in Predd et al. (2006); Koppel et al. (2018); Lin et al. (2020)), it is not clear how one can obtain the required (m·i)th element of Kα. Therefore, to develop algorithms that avoid sharing high-dimensional parameters by directly (approximately) solving (3), it is important to identify kernels that are suitable for decentralized implementation and to propose efficient algorithms for learning with such kernels. ³The techniques presented in this work can be easily extended to unbalanced datasets, i.e., when each agent has a dataset of a different size. | This work considers multi-agent optimization over an RKHS together with random feature approximations. The crux is the development of two different techniques for decentralized computation through the novel introduction of a generalized inner-product kernel. Convergence analysis and numerical validation are provided. | SP:82d596bb2d0f828a948b930bb54ab18312fdf17e |
Optimizer Amalgamation | 1 INTRODUCTION. Gradient-based optimization is ubiquitous in machine learning; accordingly, a cottage industry of gradient-based optimizer design has emerged (Schmidt et al., 2020). These optimizers generally propose algorithms that aim to make the “best” parameter update for a computed gradient (Kingma & Ba, 2017; Liu et al., 2020), with some also modifying the location where the parameters are computed (Zhang et al., 2019b). However, while each gradient-based optimizer claims specific problems on which it holds a performance advantage, none can claim to be universally superior. Due to the “No Free Lunch” theorem for optimization (Wolpert & Macready, 1997), no optimizer can provide better performance on a class of problems without somehow integrating problem-specific knowledge from that class. Furthermore, problems such as training neural networks are not homogeneous. In the spatial dimension, different layers or even different parameters can behave differently (Chen et al., 2020b). Also, as evidenced by the popularity of learning rate schedules, neural network optimization behaves very differently in the temporal dimension as well (Golatkar et al., 2019). This implies that no single optimizer can provide the best performance for all parameters on a single problem, or the best performance over the entire optimization process. In order to build a stronger optimizer, we propose the new problem of optimizer amalgamation: how can we best combine a pool of multiple “teacher” optimizers, each of which might be good in certain cases, into a single stronger “student” optimizer that integrates their strengths and offsets their weaknesses? Specifically, we wish for our combined optimizer to be adaptive both per-parameter and per-iteration, and to exploit problem-specific knowledge to improve performance on a class of problems.
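To make the student–teacher combination concrete, the sketch below scores a candidate student update against a pool of teacher updates, either summing the imitation distances over teachers ("additive") or taking the worst one ("min-max"). The teacher rules, step sizes, and squared-distance metric here are illustrative stand-ins, not the paper's exact amalgamation losses:

```python
import numpy as np

def teacher_updates(grad, state):
    """Two illustrative teachers: plain SGD and an Adam-like step (stand-ins)."""
    m = state["m"] = 0.9 * state["m"] + 0.1 * grad           # first moment
    v = state["v"] = 0.999 * state["v"] + 0.001 * grad ** 2  # second moment
    return [-0.1 * grad,                                     # SGD-style teacher
            -0.01 * m / (np.sqrt(v) + 1e-8)]                 # Adam-like teacher

def amalgamation_loss(student_update, teachers, mode="additive"):
    """Distance of the student's step to the teachers' steps, combined per scheme."""
    dists = [np.mean((student_update - t) ** 2) for t in teachers]
    return sum(dists) if mode == "additive" else max(dists)

grad = np.array([0.5, -1.0, 2.0])
state = {"m": np.zeros(3), "v": np.zeros(3)}
teachers = teacher_updates(grad, state)
student = -0.05 * grad                                       # stand-in student step
print(amalgamation_loss(student, teachers, "additive"),
      amalgamation_loss(student, teachers, "min-max"))
```

In training, such an imitation term would be added to the optimizee meta-loss, with the student update produced by the learned optimizer rather than a fixed rule.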
To “amalgamate” an optimizer from a pool of optimizers, we draw inspiration from recent work in Learning to Optimize, which provides a natural way to parameterize and train optimization update rules. In Learning to Optimize, optimizers are treated as policies to be learned from data. These “learned” optimizers are typically parameterized by a recurrent neural network (Andrychowicz et al., 2016; Lv et al., 2017); the optimizer is then meta-trained to minimize the loss of training problems, or “optimizees”, by gradient descent using truncated back-propagation through time. Yet, to the best of our knowledge, no existing work has leveraged these learnable parameterizations to amalgamate and combine analytical optimizers. In our proposed formulation of optimizer amalgamation, we treat the learned optimizer as the amalgamation target. Then, we define amalgamation losses that can be used to combine feedback from multiple analytical optimizers into a single amalgamated optimizer, and present several amalgamation schemes. Finally, we explore smoothing methods that can be used during the amalgamation process to reduce the variance of the amalgamated optimizers. Our contributions are outlined below: • We formulate the new problem of optimizer amalgamation, which we define as finding a way to best amalgamate a pool of multiple analytical optimizers to produce a single stronger optimizer. We present three schemes of optimizer amalgamation: additive amalgamation, min-max amalgamation, and imitation of a trained choice. • We observe instability during the amalgamation process, which leads to amalgamated optimizers having varied performance across multiple replicates. To mitigate this problem, we explore ways to reduce amalgamation variance by improving the smoothness of the parameter space. We propose smoothing by either random noise or adversarial noise.
• We present experiments showing extensive and consistent results that validate the effectiveness of our proposal. Specifically, we find that more advanced amalgamation techniques and weight-space training noise lead to better average-case performance and reduced variance. We also show that our amalgamation method performs significantly better than previous methods on all problems, with few exceptions. 2 RELATED WORKS. Knowledge Distillation and Amalgamation The prototype of knowledge distillation was first introduced by Bucilua et al. (2006), who used it for model compression in order to train neural networks (“students”) to imitate the output of more complex models (“teachers”). Knowledge distillation was later formalized by Hinton et al. (2015), who added a temperature parameter to soften the teacher predictions and found significant performance gains as a result. The success of knowledge distillation spurred significant efforts to explain its effectiveness. Notably, Chen et al. (2020c); Yuan et al. (2020) discovered that trained distillation teachers could be replaced by hand-crafted distributions. Yuan et al. (2020) provided further theoretical and empirical explanation for this behavior by explicitly connecting knowledge distillation to label smoothing, and Ma et al.; Chen et al. (2021b) further credited the benefits of knowledge distillation to the improved smoothness of loss surfaces, which has been demonstrated to help adversarial training (Cohen et al., 2019; Lecuyer et al., 2019) and the training of sparse neural networks (Ma et al.). The potential of knowledge distillation to improve the training of neural networks also spurred diverse works extending knowledge distillation. For example, (Romero et al., 2015; Wang et al., 2018; Shen et al., 2018; 2019b; Ye et al.
, 2020b) propose using intermediate feature representations as distillation targets instead of just network outputs, and (Tarvainen & Valpola, 2017; Yang et al., 2018; Zhang et al., 2019a) unify student and teacher network training to reduce computational costs. Knowledge distillation has also been extended to distilling from multiple teachers, which is termed knowledge amalgamation (Shen et al., 2019a; Luo et al., 2019; Ye et al., 2019; 2020a). Although using output logits from pre-trained networks has been extensively explored in knowledge distillation, we study a new research direction: distilling optimization knowledge from sophisticated analytical optimizers to produce stronger “learned” optimizers, hence the name “optimizer amalgamation”. Not only is this a new topic never studied in the existing knowledge distillation literature, but it also requires distilling longitudinal output dynamics, not one final output, from multiple teachers. Learning to Optimize Learning to Optimize is a branch of meta-learning which proposes to replace hand-crafted analytical optimizers with learned optimizers trained by solving optimization problems, or optimizees. The concept was first introduced by Andrychowicz et al. (2016), who used a Long Short-Term Memory (LSTM) based model to parameterize gradient-based optimizers. This model took the loss gradient as its input and output a learned update rule, which was then trained by gradient descent using truncated backpropagation through time. Andrychowicz et al. (2016) also established a coordinate-wise design pattern, where the same LSTM weights are applied to each parameter of the optimizee in order to facilitate generalization to models with different architectures. Building on this architecture, Wichrowska et al. (2017) and Lv et al.
(2017) proposed improvements such as hierarchical architectures connecting parameter RNNs together and augmenting the gradient with additional inputs. Many methods have also been proposed to improve the training of learned optimizers, such as random scaling and convex augmentation (Lv et al., 2017), curriculum learning and imitation learning (Chen et al., 2020a), and Jacobian regularization (Li et al., 2020). Notably, Chen et al. (2020a) also proposed a method of imitation learning, which can be viewed as a way of distilling a single analytical optimizer into a learned parameterization. Learning to Optimize has been extended to a variety of other problems such as graph convolutional networks (You et al., 2020), domain generalization (Chen et al., 2020b), noisy label training (Chen et al., 2020c), and adversarial training (Jiang et al., 2018; Xiong & Hsieh, 2020). Moving away from gradient-based optimization, black-box optimization has also been explored (Chen et al., 2017; Cao et al., 2019; Shen et al., 2021). For a comprehensive survey with benchmarks, readers may refer to Chen et al. (2021a). Perturbations and Robustness The optimization process is naturally subject to many possible sources of noise, such as stochastic gradient noise (Devolder et al., 2011; Gorbunov et al., 2020; Simsekli et al., 2019), which is often highly non-Gaussian and heavy-tailed in practice; the random initialization and (often non-optimal) hyperparameter configuration; the different local minimum reached each time in non-convex optimization (Jain & Kar, 2017); and the limited numerical precision of implementations (De Sa et al., 2017). The seen and unseen optimizees also constitute domain shifts in our case. For the amalgamation process to be consistent and reliable, training needs to incorporate resistance to certain perturbations of the optimization process.
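One simple way to build such resistance into meta-training is to smooth the meta-loss over random Gaussian perturbations of the learned optimizer's weights, optimizing a Monte Carlo estimate of E_ε[L(w + ε)] instead of L(w). A minimal sketch on a stand-in objective (the objective, noise scale σ, and sample count are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(3)

def loss(w):
    """Stand-in meta-loss over optimizer weights w: a bowl with sharp ripples."""
    return np.sum(w ** 2) + 0.5 * np.sum(np.cos(25.0 * w))

def smoothed_loss(w, sigma=0.1, samples=2000):
    """Monte Carlo estimate of E_eps[loss(w + eps)], eps ~ N(0, sigma^2 I)."""
    eps = rng.normal(scale=sigma, size=(samples, w.size))
    return np.mean([loss(w + e) for e in eps])

w = np.full(4, 0.3)
print(loss(w), smoothed_loss(w))   # smoothing damps the high-frequency ripples
```

Averaging over Gaussian noise multiplies the cos(25w) ripple by exp(−(25σ)²/2), so the smoothed surface is nearly the plain quadratic; training on such a surface tends to reduce variance across replicates.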
We draw inspiration from deep learning defenses against various random or malicious perturbations. For example, stability training (Zheng et al., 2016) stabilizes deep networks against small input distortions by regularizing the feature divergence caused by adding random Gaussian noise to the inputs. Adversarial robustness measures the ability of a neural network to defend against malicious perturbations of its inputs (Szegedy et al., 2013; Goodfellow et al., 2014). For that purpose, random smoothing (Lecuyer et al., 2019; Cohen et al., 2019) and adversarial training (Madry et al., 2017) have been found to increase model robustness against random corruptions or worst-case perturbations, as well as against test-time domain shifts (Ganin et al., 2016). Recent work (He et al., 2019; Wu et al., 2020) extends input perturbations to weight perturbations that explicitly regularize the flatness of the weight loss landscape, forming a double-perturbation mechanism for both inputs and weights. Other Approaches The problem of how to better train machine learning models has many diverse approaches outside the Learning to Optimize paradigm that we draw from, and together with model selection algorithms it forms the broader AutoML problem (Hutter et al., 2018). Our approach falls under meta-learning, which also includes learned-initialization approaches such as MAML (Finn et al., 2017) and Reptile (Nichol et al., 2018). Other optimizer selection and tuning methods include hypergradient descent (Baydin et al., 2017) and Bayesian hyperparameter optimization (Snoek et al., 2012). Similar to our knowledge amalgamation approach, algorithm portfolio methods (where many algorithms are available and some subset is selected) have also been applied to several problem domains such as linear programming (Leyton-Brown et al., 2003) and SAT solvers (Xu et al., 2008). 3 OPTIMIZER AMALGAMATION. 3.1 MOTIVATION.
Optimizer selection and hyperparameter optimization are difficult tasks even for experts. With a vast number of optimizers to choose from, whose performance varies depending on the specific problem and data (Schmidt et al., 2020), most practitioners choose a reasonable default optimizer such as SGD or Adam and tune the learning rate to be “good enough” following some rule of thumb. As a consequence of the No Free Lunch theorem (Wolpert & Macready, 1997), the best optimizer to use for each problem, for each weight tensor within a problem, or even for each parameter may be different. In practice, different layers within a given neural network can benefit from differently tuned hyperparameters, for example by meta-tuning learning rates by layer (Chen et al., 2020b). Accordingly, we wish to train an optimizer that is sufficiently versatile and adaptive at different stages of training, and even with respect to each parameter individually. Many methods have been proposed to parameterize optimizers in learnable forms, including coordinate-wise LSTMs (Andrychowicz et al., 2016; Lv et al., 2017), recurrent neural networks with hierarchical architectures (Wichrowska et al., 2017; Metz et al., 2019), and symbolic combinations of predefined blocks (Bello et al., 2017). Due to its high expressiveness and relative ease of training, we will use the workhorse LSTM-based RNNProp architecture described by Lv et al. (2017) as our amalgamation target; more details about this architecture can be found in Appendix C.2. | This paper proposed a new problem called Optimizer Amalgamation and made an attempt to obtain a more powerful learned optimizer from several analytical optimizers. More concretely, three amalgamation losses are designed to train the amalgamated optimizer. Meanwhile, two types of noise, random Gaussian perturbation and projected gradient descent, are incorporated into the training objective to increase the stability of the optimizer.
The evaluation compares the amalgamated optimizer with its original teachers, i.e., the analytical optimizers, and with several L2O baselines on image classification tasks. | SP:f41fffaf218a8f17d0a04bfc678d94cfe24c5625 |
Optimizer Amalgamation | 1 INTRODUCTION. Gradient-based optimization is ubiquitous in machine learning; accordingly, a cottage industry of gradient-based optimizer design has emerged (Schmidt et al., 2020). These optimizers generally propose algorithms that aim to make the “best” parameter update for a computed gradient (Kingma & Ba, 2017; Liu et al., 2020), with some also modifying the location where the parameters are computed (Zhang et al., 2019b). However, while each gradient-based optimizer claims specific problems on which it holds a performance advantage, none can claim to be universally superior. Due to the “No Free Lunch” theorem for optimization (Wolpert & Macready, 1997), no optimizer can provide better performance on a class of problems without somehow integrating problem-specific knowledge from that class. Furthermore, problems such as training neural networks are not homogeneous. In the spatial dimension, different layers or even different parameters can behave differently (Chen et al., 2020b). Also, as evidenced by the popularity of learning rate schedules, neural network optimization behaves very differently in the temporal dimension as well (Golatkar et al., 2019). This implies that no single optimizer can provide the best performance for all parameters on a single problem, or the best performance over the entire optimization process. In order to build a stronger optimizer, we propose the new problem of optimizer amalgamation: how can we best combine a pool of multiple “teacher” optimizers, each of which might be good in certain cases, into a single stronger “student” optimizer that integrates their strengths and offsets their weaknesses? Specifically, we wish for our combined optimizer to be adaptive both per-parameter and per-iteration, and to exploit problem-specific knowledge to improve performance on a class of problems.
To “amalgamate” an optimizer from a pool of optimizers, we draw inspiration from recent work in Learning to Optimize, which provides a natural way to parameterize and train optimization update rules. In Learning to Optimize, optimizers are treated as policies to be learned from data. These “learned” optimizers are typically parameterized by a recurrent neural network (Andrychowicz et al., 2016; Lv et al., 2017); the optimizer is then meta-trained to minimize the loss of training problems, or “optimizees”, by gradient descent using truncated back-propagation through time. Yet, to the best of our knowledge, no existing work has leveraged these learnable parameterizations to amalgamate and combine analytical optimizers. In our proposed formulation of optimizer amalgamation, we treat the learned optimizer as the amalgamation target. Then, we define amalgamation losses that can be used to combine feedback from multiple analytical optimizers into a single amalgamated optimizer, and present several amalgamation schemes. Finally, we explore smoothing methods that can be used during the amalgamation process to reduce the variance of the amalgamated optimizers. Our contributions are outlined below: • We formulate the new problem of optimizer amalgamation, which we define as finding a way to best amalgamate a pool of multiple analytical optimizers to produce a single stronger optimizer. We present three schemes of optimizer amalgamation: additive amalgamation, min-max amalgamation, and imitation of a trained choice. • We observe instability during the amalgamation process, which leads to amalgamated optimizers having varied performance across multiple replicates. To mitigate this problem, we explore ways to reduce amalgamation variance by improving the smoothness of the parameter space. We propose smoothing by either random noise or adversarial noise.
• We present experiments showing extensive and consistent results that validate the effectiveness of our proposal . Specifically , we find that more advanced amalgamation techniques and weight space training noise lead to better average-case performance and reduced variance . We also show that our amalgamation method performs significantly better than previous methods on all problems , with few exceptions . 2 RELATED WORKS . Knowledge Distillation and Amalgamation The prototype of knowledge distillation was first introduced by ( Bucilua et al. , 2006 ) , which used it for model compression in order to train neural networks ( “ students ” ) to imitate the output of more complex models ( “ teachers ” ) . Knowledge distillation was later formalized by ( Hinton et al. , 2015 ) , who added a temperature parameter to soften the teacher predictions and found significant performance gains as a result . The success of knowledge distillation spurred significant efforts to explain its effectiveness . Notably , Chen et al . ( 2020c ) ; Yuan et al . ( 2020 ) discovered that trained distillation teachers could be replaced by hand-crafted distributions . ( Yuan et al. , 2020 ) provided further theoretical and empirical explanation for this behavior by explicitly connecting knowledge distillation to label smoothing , and ( Ma et al . ; Chen et al. , 2021b ) further credited the benefits of knowledge distillation to the improved smoothness of loss surfaces , which has been demonstrated to help adversarial training ( Cohen et al. , 2019 ; Lecuyer et al. , 2019 ) and the training of sparse neural networks ( Ma et al . ) . The potential of knowledge distillation to improve the training of neural networks also spurred diverse works extending knowledge distillation . For example , ( Romero et al. , 2015 ; Wang et al. , 2018 ; Shen et al. , 2018 ; 2019b ; Ye et al.
, 2020b ) propose using intermediate feature representations as distillation targets instead of just network outputs , and ( Tarvainen & Valpola , 2017 ; Yang et al. , 2018 ; Zhang et al. , 2019a ) unify student and teacher network training to reduce computational costs . Knowledge distillation has also been extended to distilling multiple teachers , which is termed Knowledge Amalgamation ( Shen et al. , 2019a ; Luo et al. , 2019 ; Ye et al. , 2019 ; 2020a ) . Although using output logits from pre-trained networks has been extensively explored in knowledge distillation , we study a new direction of research : distilling optimization knowledge from sophisticated analytical optimizers to produce stronger “ learned ” optimizers , hence the name “ optimizer amalgamation ” . Not only is this a new topic never studied in the existing knowledge distillation literature , but it also needs to distill longitudinal output dynamics — not one final output — from multiple teachers . Learning to optimize Learning to Optimize is a branch of meta learning which proposes to replace hand-crafted analytical optimizers with learned optimizers trained by solving optimization problems , or optimizees . The concept was first introduced by ( Andrychowicz et al. , 2016 ) , who used a Long Short-Term Memory ( LSTM ) based model in order to parameterize gradient-based optimizers . This model took the loss gradient as its input and output a learned update rule , which was then trained by gradient descent using truncated backpropagation through time . ( Andrychowicz et al. , 2016 ) also established a coordinate-wise design pattern , where the same LSTM weights are applied to each parameter of the optimizee in order to facilitate generalization to models with different architectures . Building on this architecture , Wichrowska et al . ( 2017 ) and Lv et al .
( 2017 ) proposed improvements such as hierarchical architectures connecting parameter RNNs together and augmenting the gradient with additional inputs . Many methods have also been proposed to improve the training of learned optimizers , such as random scaling and convex augmentation ( Lv et al. , 2017 ) , curriculum learning and imitation learning ( Chen et al. , 2020a ) , and Jacobian regularization ( Li et al. , 2020 ) . Notably , Chen et al . ( 2020a ) also proposed a method of imitation learning , which can be viewed as a way of distilling a single analytical optimizer into a learned parameterization . Learning to Optimize has been extended to a variety of other problems such as graph convolutional networks ( You et al. , 2020 ) , domain generalization ( Chen et al. , 2020b ) , noisy label training ( Chen et al. , 2020c ) , and adversarial training ( Jiang et al. , 2018 ; Xiong & Hsieh , 2020 ) . Moving away from gradient-based optimization , black-box optimization has also been explored ( Chen et al. , 2017 ; Cao et al. , 2019 ; Shen et al. , 2021 ) . For a comprehensive survey with benchmarks , readers may refer to Chen et al . ( 2021a ) . Perturbations and Robustness The optimization process is naturally subject to many possible sources of noise , such as the stochastic gradient noise ( Devolder et al. , 2011 ; Gorbunov et al. , 2020 ; Simsekli et al. , 2019 ) , which is often highly non-Gaussian and heavy-tailed in practice ; the random initialization and ( often non-optimal ) hyperparameter configuration ; the different local minima reached each time in non-convex optimization ( Jain & Kar , 2017 ) ; and the limited numerical precision in implementations ( De Sa et al. , 2017 ) . The seen and unseen optimizees also constitute domain shifts in our case . For the amalgamation process to be consistent and reliable , training needs to incorporate resistance to certain perturbations of the optimization process .
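The contrast between random and adversarial perturbations of the process can be sketched on a toy quadratic loss (a stand-in assumption, not the paper's meta-training objective):

```python
import numpy as np

def loss(w):
    # Toy quadratic stand-in for the meta-training loss.
    return float(np.sum(w ** 2))

def random_perturbation(w, radius, rng):
    # Random smoothing: perturb the weights in a uniformly random direction.
    d = rng.normal(size=w.shape)
    return w + radius * d / np.linalg.norm(d)

def adversarial_perturbation(w, radius):
    # Adversarial smoothing: ascend the loss gradient at the same radius.
    g = 2.0 * w  # analytic gradient of the toy loss
    return w + radius * g / np.linalg.norm(g)

rng = np.random.default_rng(0)
w = np.array([1.0, -2.0])
l_adv = loss(adversarial_perturbation(w, 0.1))
l_rand = loss(random_perturbation(w, 0.1, rng))
# For this convex loss, the adversarial direction is at least as harmful as
# any random direction of the same radius, so training against it targets
# the locally worst case.
```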
We draw inspiration from deep learning defenses against various random or malicious perturbations . For example , stability training ( Zheng et al. , 2016 ) stabilizes deep networks against small input distortions by regularizing the feature divergence caused by adding random Gaussian noise to the inputs . Adversarial robustness measures the ability of a neural network to defend against malicious perturbations of its inputs ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014 ) . For that purpose , random smoothing ( Lecuyer et al. , 2019 ; Cohen et al. , 2019 ) and adversarial training ( Madry et al. , 2017 ) have been found to increase model robustness with regard to random corruptions or worst-case perturbations , as well as against testing-time domain shifts ( Ganin et al. , 2016 ) . Recent work ( He et al. , 2019 ; Wu et al. , 2020 ) extends input perturbations to weight perturbations that explicitly regularize the flatness of the weight loss landscape , forming a double-perturbation mechanism for both inputs and weights . Other Approaches The problem of how to better train machine learning models has many diverse approaches outside the Learning to Optimize paradigm that we draw from , and forms the broader AutoML problem ( Hutter et al. , 2018 ) together with model selection algorithms . Our approach falls under meta-learning , which also includes learned initialization approaches such as MAML ( Finn et al. , 2017 ) and Reptile ( Nichol et al. , 2018 ) . Other optimizer selection and tuning methods include hypergradient descent ( Baydin et al. , 2017 ) and Bayesian hyperparameter optimization ( Snoek et al. , 2012 ) . Similar to our knowledge amalgamation approach , algorithm portfolio methods ( where many algorithms are available , and some subset is selected ) have also been applied to several problem domains such as Linear Programming ( Leyton-Brown et al. , 2003 ) and SAT solvers ( Xu et al. , 2008 ) . 3 OPTIMIZER AMALGAMATION . 3.1 MOTIVATION .
Optimizer selection and hyperparameter optimization are difficult tasks even for experts . With a vast number of optimizers to choose from , whose performance varies depending on the specific problem and data ( Schmidt et al. , 2020 ) , most practitioners choose a reasonable default optimizer such as SGD or Adam and tune the learning rate to be “ good enough ” following some rule of thumb . As a consequence of the No Free Lunch theorem ( Wolpert & Macready , 1997 ) , the best optimizer to use for each problem , each weight tensor within a problem , or each parameter may be different . In practice , different layers within a given neural network can benefit from differently tuned hyperparameters , for example by meta-tuning learning rates by layer ( Chen et al. , 2020b ) . Accordingly , we wish to train an optimizer which is sufficiently versatile and adaptive at different stages of training and even to each parameter individually . Many methods have been proposed to parameterize optimizers in learnable forms , including coordinate-wise LSTMs ( Andrychowicz et al. , 2016 ; Lv et al. , 2017 ) , recurrent neural networks with hierarchical architectures ( Wichrowska et al. , 2017 ; Metz et al. , 2019 ) , and symbolic combinations of predefined blocks ( Bello et al. , 2017 ) . Due to its high expressiveness and relative ease of training , we will use the workhorse LSTM-based RNNProp architecture described by Lv et al . ( 2017 ) as our amalgamation target ; more details about this architecture can be found in Appendix C.2 . | This paper discusses the problem of selecting a neural network optimizer from a pool of possible optimizers . They propose three variants of a meta-algorithm which combines optimizers from the pool , whereby a differentiable meta-loss is defined on the training loss achieved by the selection protocol . They show that their algorithm , equipped with weight-space training noise , leads to better performance on a variety of problems .
| SP:f41fffaf218a8f17d0a04bfc678d94cfe24c5625 |
Optimizer Amalgamation | 1 INTRODUCTION . Gradient-based optimization is ubiquitous in machine learning ; accordingly , a cottage industry of gradient-based optimizer design has emerged ( Schmidt et al. , 2020 ) . These optimizers generally propose algorithms that aim to make the “ best ” parameter update for a computed gradient ( Kingma & Ba , 2017 ; Liu et al. , 2020 ) , with some also modifying the location where the parameters are computed ( Zhang et al. , 2019b ) . However , while each gradient-based optimizer claims specific problems where it holds performance advantages , none can claim to be universally superior . Due to the “ No Free Lunch ” theorem for optimization ( Wolpert & Macready , 1997 ) , no optimizer can provide better performance on a class of problems without somehow integrating problem-specific knowledge from that class . Furthermore , problems such as training neural networks are not homogeneous . In the spatial dimension , different layers or even parameters can have different behavior ( Chen et al. , 2020b ) . Also , as evidenced by the popularity of learning rate schedules , neural network optimization behaves very differently in the temporal dimension ( Golatkar et al. , 2019 ) . This implies that no optimizer can provide the best performance for all parameters on a single problem or the best performance over the entire optimization process . In order to build a stronger optimizer , we propose the new problem of optimizer amalgamation : how can we best combine a pool of multiple “ teacher ” optimizers , each of which might be good in certain cases , into a single stronger “ student ” optimizer that integrates their strengths and offsets their weaknesses ? Specifically , we wish for our combined optimizer to be adaptive both per-parameter and per-iteration , and to exploit problem-specific knowledge to improve performance on a class of problems .
To “ amalgamate ” an optimizer from a pool of optimizers , we draw inspiration from recent work in Learning to Optimize , which provides a natural way to parameterize and train optimization update rules . In Learning to Optimize , optimizers are treated as policies to be learned from data . These “ learned ” optimizers are typically parameterized by a recurrent neural network ( Andrychowicz et al. , 2016 ; Lv et al. , 2017 ) ; then , the optimizer is meta-trained to minimize the loss of training problems , or “ optimizees ” , by gradient descent using truncated back-propagation through time . Yet to the best of our knowledge , no existing work has leveraged those learnable parameterizations to amalgamate and combine analytical optimizers . For our proposed formulation of optimizer amalgamation , we treat the learned optimizer as the amalgamation target . Then , we define amalgamation losses which can be used to combine feedback from multiple analytical optimizers into a single amalgamated optimizer , and present several amalgamation schemes . Finally , we explore smoothing methods that can be used during the amalgamation process to reduce the variance of the amalgamated optimizers . Our contributions are outlined below : • We formulate the new problem of optimizer amalgamation , which we define as finding a way to best amalgamate a pool of multiple analytical optimizers to produce a single stronger optimizer . We present three schemes of optimizer amalgamation : additive amalgamation , min-max amalgamation , and imitation of a trained choice . • We observe instability during the amalgamation process which leads to amalgamated optimizers having varied performance across multiple replicates . To mitigate this problem , we explore ways to reduce amalgamation variance by improving the smoothness of the parameter space . We propose smoothing by either random noise or adversarial noise .
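The third scheme named above, imitation of a trained choice, can be sketched as a choice mechanism that weights the teacher pool and a student trained to imitate the resulting combination. The softmax choice (fixed here; trained in the paper) and the squared-error imitation loss are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def imitation_target(teacher_updates, choice_logits):
    # A choice mechanism weights the pool; the convex combination of teacher
    # updates becomes the target the student imitates.
    w = softmax(choice_logits)
    return sum(wi * u for wi, u in zip(w, teacher_updates))

def imitation_loss(u_student, target):
    return float(np.sum((u_student - target) ** 2))

teachers = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
target = imitation_target(teachers, np.zeros(2))  # equal weights -> [0.5, 0.5]
l = imitation_loss(np.array([0.5, 0.5]), target)  # student matches -> 0.0
```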
• We present experiments showing extensive and consistent results that validate the effectiveness of our proposal . Specifically , we find that more advanced amalgamation techniques and weight space training noise lead to better average-case performance and reduced variance . We also show that our amalgamation method performs significantly better than previous methods on all problems , with few exceptions . 2 RELATED WORKS . Knowledge Distillation and Amalgamation The prototype of knowledge distillation was first introduced by ( Bucilua et al. , 2006 ) , which used it for model compression in order to train neural networks ( “ students ” ) to imitate the output of more complex models ( “ teachers ” ) . Knowledge distillation was later formalized by ( Hinton et al. , 2015 ) , who added a temperature parameter to soften the teacher predictions and found significant performance gains as a result . The success of knowledge distillation spurred significant efforts to explain its effectiveness . Notably , Chen et al . ( 2020c ) ; Yuan et al . ( 2020 ) discovered that trained distillation teachers could be replaced by hand-crafted distributions . ( Yuan et al. , 2020 ) provided further theoretical and empirical explanation for this behavior by explicitly connecting knowledge distillation to label smoothing , and ( Ma et al . ; Chen et al. , 2021b ) further credited the benefits of knowledge distillation to the improved smoothness of loss surfaces , which has been demonstrated to help adversarial training ( Cohen et al. , 2019 ; Lecuyer et al. , 2019 ) and the training of sparse neural networks ( Ma et al . ) . The potential of knowledge distillation to improve the training of neural networks also spurred diverse works extending knowledge distillation . For example , ( Romero et al. , 2015 ; Wang et al. , 2018 ; Shen et al. , 2018 ; 2019b ; Ye et al.
, 2020b ) propose using intermediate feature representations as distillation targets instead of just network outputs , and ( Tarvainen & Valpola , 2017 ; Yang et al. , 2018 ; Zhang et al. , 2019a ) unify student and teacher network training to reduce computational costs . Knowledge distillation has also been extended to distilling multiple teachers , which is termed Knowledge Amalgamation ( Shen et al. , 2019a ; Luo et al. , 2019 ; Ye et al. , 2019 ; 2020a ) . Although using output logits from pre-trained networks has been extensively explored in knowledge distillation , we study a new direction of research : distilling optimization knowledge from sophisticated analytical optimizers to produce stronger “ learned ” optimizers , hence the name “ optimizer amalgamation ” . Not only is this a new topic never studied in the existing knowledge distillation literature , but it also needs to distill longitudinal output dynamics — not one final output — from multiple teachers . Learning to optimize Learning to Optimize is a branch of meta learning which proposes to replace hand-crafted analytical optimizers with learned optimizers trained by solving optimization problems , or optimizees . The concept was first introduced by ( Andrychowicz et al. , 2016 ) , who used a Long Short-Term Memory ( LSTM ) based model in order to parameterize gradient-based optimizers . This model took the loss gradient as its input and output a learned update rule , which was then trained by gradient descent using truncated backpropagation through time . ( Andrychowicz et al. , 2016 ) also established a coordinate-wise design pattern , where the same LSTM weights are applied to each parameter of the optimizee in order to facilitate generalization to models with different architectures . Building on this architecture , Wichrowska et al . ( 2017 ) and Lv et al .
( 2017 ) proposed improvements such as hierarchical architectures connecting parameter RNNs together and augmenting the gradient with additional inputs . Many methods have also been proposed to improve the training of learned optimizers , such as random scaling and convex augmentation ( Lv et al. , 2017 ) , curriculum learning and imitation learning ( Chen et al. , 2020a ) , and Jacobian regularization ( Li et al. , 2020 ) . Notably , Chen et al . ( 2020a ) also proposed a method of imitation learning , which can be viewed as a way of distilling a single analytical optimizer into a learned parameterization . Learning to Optimize has been extended to a variety of other problems such as graph convolutional networks ( You et al. , 2020 ) , domain generalization ( Chen et al. , 2020b ) , noisy label training ( Chen et al. , 2020c ) , and adversarial training ( Jiang et al. , 2018 ; Xiong & Hsieh , 2020 ) . Moving away from gradient-based optimization , black-box optimization has also been explored ( Chen et al. , 2017 ; Cao et al. , 2019 ; Shen et al. , 2021 ) . For a comprehensive survey with benchmarks , readers may refer to Chen et al . ( 2021a ) . Perturbations and Robustness The optimization process is naturally subject to many possible sources of noise , such as the stochastic gradient noise ( Devolder et al. , 2011 ; Gorbunov et al. , 2020 ; Simsekli et al. , 2019 ) , which is often highly non-Gaussian and heavy-tailed in practice ; the random initialization and ( often non-optimal ) hyperparameter configuration ; the different local minima reached each time in non-convex optimization ( Jain & Kar , 2017 ) ; and the limited numerical precision in implementations ( De Sa et al. , 2017 ) . The seen and unseen optimizees also constitute domain shifts in our case . For the amalgamation process to be consistent and reliable , training needs to incorporate resistance to certain perturbations of the optimization process .
We draw inspiration from deep learning defenses against various random or malicious perturbations . For example , stability training ( Zheng et al. , 2016 ) stabilizes deep networks against small input distortions by regularizing the feature divergence caused by adding random Gaussian noise to the inputs . Adversarial robustness measures the ability of a neural network to defend against malicious perturbations of its inputs ( Szegedy et al. , 2013 ; Goodfellow et al. , 2014 ) . For that purpose , random smoothing ( Lecuyer et al. , 2019 ; Cohen et al. , 2019 ) and adversarial training ( Madry et al. , 2017 ) have been found to increase model robustness with regard to random corruptions or worst-case perturbations , as well as against testing-time domain shifts ( Ganin et al. , 2016 ) . Recent work ( He et al. , 2019 ; Wu et al. , 2020 ) extends input perturbations to weight perturbations that explicitly regularize the flatness of the weight loss landscape , forming a double-perturbation mechanism for both inputs and weights . Other Approaches The problem of how to better train machine learning models has many diverse approaches outside the Learning to Optimize paradigm that we draw from , and forms the broader AutoML problem ( Hutter et al. , 2018 ) together with model selection algorithms . Our approach falls under meta-learning , which also includes learned initialization approaches such as MAML ( Finn et al. , 2017 ) and Reptile ( Nichol et al. , 2018 ) . Other optimizer selection and tuning methods include hypergradient descent ( Baydin et al. , 2017 ) and Bayesian hyperparameter optimization ( Snoek et al. , 2012 ) . Similar to our knowledge amalgamation approach , algorithm portfolio methods ( where many algorithms are available , and some subset is selected ) have also been applied to several problem domains such as Linear Programming ( Leyton-Brown et al. , 2003 ) and SAT solvers ( Xu et al. , 2008 ) . 3 OPTIMIZER AMALGAMATION . 3.1 MOTIVATION .
Optimizer selection and hyperparameter optimization are difficult tasks even for experts . With a vast number of optimizers to choose from , whose performance varies depending on the specific problem and data ( Schmidt et al. , 2020 ) , most practitioners choose a reasonable default optimizer such as SGD or Adam and tune the learning rate to be “ good enough ” following some rule of thumb . As a consequence of the No Free Lunch theorem ( Wolpert & Macready , 1997 ) , the best optimizer to use for each problem , each weight tensor within a problem , or each parameter may be different . In practice , different layers within a given neural network can benefit from differently tuned hyperparameters , for example by meta-tuning learning rates by layer ( Chen et al. , 2020b ) . Accordingly , we wish to train an optimizer which is sufficiently versatile and adaptive at different stages of training and even to each parameter individually . Many methods have been proposed to parameterize optimizers in learnable forms , including coordinate-wise LSTMs ( Andrychowicz et al. , 2016 ; Lv et al. , 2017 ) , recurrent neural networks with hierarchical architectures ( Wichrowska et al. , 2017 ; Metz et al. , 2019 ) , and symbolic combinations of predefined blocks ( Bello et al. , 2017 ) . Due to its high expressiveness and relative ease of training , we will use the workhorse LSTM-based RNNProp architecture described by Lv et al . ( 2017 ) as our amalgamation target ; more details about this architecture can be found in Appendix C.2 . | This paper presents a new optimizer amalgamation method to combine a pool of optimizers into one in order to achieve stronger problem-specific performance . Three differentiable amalgamation mechanisms are designed and stabilization methods are explored . The proposed method is empirically shown to be effective when compared to a large number of baselines . | SP:f41fffaf218a8f17d0a04bfc678d94cfe24c5625 |
Introspective Learning : A Two-Stage approach for Inference in Neural Networks | 1 INTRODUCTION . Introspection is the act of looking into one ’ s own mind ( Boring , 1953 ) . Classical introspection has its roots in philosophy . Locke ( 1847 ) , the founder of empiricism , held that all human ideas come from experience . This experience is a result of both sensation and reflection . By sensation , one receives passive information using the sensory systems of sight , sound , and touch . Reflection is the objective observation of our own mental operations . Consider the task of differentiating a spoonbill from a flamingo and a crane . This task requires prior knowledge of some differentiating features between the birds . These features include the color and shape of the body and beak of all the birds . We first associate these features with our existing knowledge of birds and make a coarse decision that the given bird is a spoonbill . This is the sensing stage . Reflection involves questioning the coarse decision and asking why the bird can not be a flamingo or a crane . If the answers are satisfactory , then an introspective decision that the bird is indeed a spoonbill is made . The observation of this reflection is introspection . In this paper , we adopt this differentiation between sensing and reflection to advocate for two-stage neural network architectures for perception-based applications . We first ground introspection based on existing neural networks . The above-mentioned task of differentiating a spoonbill from a flamingo and a crane is illustrated for neural networks in Fig . 1 . A network f ( · ) is trained on a distribution X to classify data into N classes . The network learns notions about data samples when classifying them . These notions are stored as network weights W . Let yfeat be the features projected before the final fully connected layer . We denote the final fully connected layer as fL , where L is the layer number .
fL−1 is then the layer before the final fully connected layer . Using the weight parameters WL , the output of the network ŷ is given by , yfeat = fL−1 ( x ) , yfeat ∈ R^ { dL−1×1 } , ŷ = arg max ( WL^T yfeat ) , WL ∈ R^ { dL−1×N } . ( 1 ) Hence , ŷ is the class in which the sensed features maximally correlate with the stored notion . This is the feed-forward prediction in Fig . 1 . Existing recognition architectures , including VGG ( Simonyan & Zisserman , 2015 ) , ResNet ( He et al. , 2016 ) , and DenseNet ( Huang et al. , 2017 ) among others , all sense and predict using Eq . 1 . In Fig . 1 , we depict our proposed introspective learning framework . An additional reflection stage extracts the introspective features as Not Detect features from x . Let r1 and r2 be the two introspective features . In this case , r1 is the absence of the S-shaped neck in the spoonbill , and r2 is the lack of white feathers in the given input image . We use a post-hoc visual explanation to depict these features 1 . Note that there can be N such features for a sensing network f ( · ) trained to differentiate between N classes . These features are then combined to obtain the final introspective feature rx . rx is characteristic of the input image x and is passed through an introspective network , H ( · ) , to obtain the introspective prediction ỹ . We term the combination of both f ( · ) and H ( · ) as introspective learning . Note that the proposed reflection stage does not require a predefined knowledge base of introspective features . Rather , ri , i ∈ [ 1 , N ] , in the reflection stage is extracted using the feed-forward prediction ŷ and the sensing network parameters . Hence , ri are Not Detect features based on f ( · ) ’ s notion of classes . H ( rx ) predicts ỹ explicitly based on the differences between f ( · ) ’ s notion of classes . Not only should the network sense the feed-forward patterns , it must also satisfy H ( · ) ’ s N notions of differences .
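A minimal numpy sketch of the sensing stage in Eq. 1 (the dimensions here are illustrative assumptions): the final fully connected layer correlates the sensed features with each class's stored "notion" (a column of WL), and the feed-forward prediction is the class of maximal correlation.

```python
import numpy as np

d_prev, N = 4, 3                     # d_{L-1} and number of classes (assumed)
rng = np.random.default_rng(0)
W_L = rng.normal(size=(d_prev, N))   # stored class notions, R^{d_{L-1} x N}
y_feat = rng.normal(size=(d_prev,))  # features from f_{L-1}(x)

logits = W_L.T @ y_feat              # correlation with each stored notion
y_hat = int(np.argmax(logits))       # feed-forward prediction, Eq. 1
```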
In this paper , we show that the inference process is more generalizable due to these N additional inferential constraints . Specifically , the introspective network is more robust to noise and is less prone to calibration errors . The challenge is to implicitly extract features that answer introspective questions without explicitly training on said questions , as is the norm in Visual Question Answering applications ( Antol et al. , 2015 ) . We show that gradients w.r.t . network parameters store notions about the differences between classes and can be used as introspective features . We first describe the methodology of introspective feature extraction in Section 2 . We then analyze H ( · ) in Section 3 . We show that H ( · ) , as a simple multi-layer perceptron that introspects on ŷ , is more generalizable for the application of recognition in Section 5 . We then illustrate the benefits of our two-stage architecture in other downstream tasks , including out-of-distribution detection , active learning , and image quality assessment , in Section 6 . 2 INTROSPECTIVE FEATURES . In this section , we describe introspective features and implicitly extract them using the sensing network . We then analyze their extraction procedure and provide a methodology to accelerate it . Definition 2.1 ( Introspection ) . Given a network f ( · ) , a datum x , and the network ’ s prediction f ( x ) = ŷ , introspection in f ( · ) is the measurement of change induced in the network parameters when a label yI is introduced as the label for x . This measurement is the gradient induced by a loss function J ( yI , ŷ ) w.r.t . the network parameters . This definition for introspection is in accordance with the sensing and reflection stages in Fig . 1 . The network ’ s prediction ŷ is the output of the sensing stage , and the change induced by an introspective label yI is the network reflecting on its decision ŷ as opposed to yI . The combination of the two is introspection .
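Definition 2.1 can be sketched numerically. Assuming a softmax cross-entropy loss for J (my assumption; the definition allows any loss), the introspective measurement for class I is the gradient of J ( yI , ŷ ) w.r.t. the final-layer weights:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def introspection_gradient(W_L, y_feat, I):
    # Change induced in W_L when the one-hot introspective label y_I is
    # introduced as the label for x, under a cross-entropy J.
    p = softmax(W_L.T @ y_feat)       # network's softmax prediction
    y_I = np.zeros_like(p)
    y_I[I] = 1.0                      # introspective one-hot label
    return np.outer(y_feat, p - y_I)  # dJ/dW_L, shape d_{L-1} x N

d_prev, N = 4, 3
rng = np.random.default_rng(0)
W_L = rng.normal(size=(d_prev, N))
y_feat = rng.normal(size=(d_prev,))
r = [introspection_gradient(W_L, y_feat, I) for I in range(N)]
```

Note that for cross entropy the gradients for two introspective classes differ only by a one-hot offset in the class dimension, since the prediction term p cancels.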
Note that introspection can occur when ŷ is contrasted against any trained label yI , I ∈ [ 1 , N ] . For instance , in Fig . 1 , the network is asked to reflect on its decision of spoonbill by considering other labels yI that x can take - flamingo and crane . 1 Post-hoc explanations are justifications made by a neural network after a decision has been made . They require human interpretation . Further details are provided in Appendix A . Reflection is the empirical risk that the network has predicted x as ŷ instead of yI . Given the network parameters , this risk is measured through some loss function J ( yI , ŷ ) . yI is a one-hot vector with a one at the Ith location . The change that is induced in the network is given by the gradient of J ( yI , ŷ ) w.r.t . the network parameters . In this paper , we introspect based on reflecting on all possible classes . For an N -class classifier , there are N possible introspective classes and hence N possible gradients , each given by rI = ∇WJ ( yI , ŷ ) , I ∈ [ 1 , N ] . Here , rI are the introspective features . Since we introspect based on classes , we measure the change in network weights in the final fully connected layer . Hence the introspective features are given by , rI = ∇WLJ ( yI , ŷ ) , I ∈ [ 1 , N ] , rI ∈ R^ { dL−1×N } ( 2 ) where WL are the network weights for the final fully connected layer . Note that the final fully connected layer from Eq . 1 has a dimensionality of R^ { dL−1×N } . For every x , Eq . 2 is applied N times to obtain N separate rI . We first analyze these features before accelerating their extraction . 2.1 INTROSPECTIVE FEATURE ANALYSIS . Consider the extraction process in Eq . 2 . Each rI is a dL−1 × N matrix . Expressing the gradients in rI separately w.r.t . the different filters in WL , we have a row-wise concatenated set of gradients given by , rI = [ ∇WL,1J ( yI , ŷ ) ; ∇WL,2J ( yI , ŷ ) ; ∇WL,3J ( yI , ŷ ) . . . ∇WL , NJ ( yI , ŷ ) ] ( 3 ) where each WL , j ∈ R^ { dL−1×1 } and rI ∈ R^ { dL−1×N } .
For all data x ∈ X the following lemma holds : Lemma 1 . Given a unique ordered pair ( x , ŷ ) and a trained network f ( · ) , the gradients for a loss function J ( yI , ŷ ) w.r.t . classes are pairwise orthogonal under the second-order Taylor series approximation , each class paired with the predicted class . Proof . Provided in Appendix B.1 . Lemma 1 states that backpropagating class yI does not provide any information to WL , j , j ≠ I , and hence there is no need to use ∇WL , jJ ( yI , ŷ ) , j ≠ I , as features when considering yI . In Appendix B.1 , we provide the complete proof when J ( yI , ŷ ) is the cross entropy loss . ∇WJ ( yI , ŷ ) for an introspective class reduces to , ∇WJ ( yI , ŷ ) = −∇W yI + ∇W log ( yŷ ² ) . ( 4 ) where yŷ is the logit associated with the predicted class . In Fig . 5 , we use a network trained on the MNIST ( LeCun et al. , 1998 ) dataset to simulate a well-trained network and we visualize the gradients from the final fully connected layer to demonstrate Eq . 4 . Eq . 4 motivates the generalizable nature of our introspective features . Consider some noise added to x . To change the prediction ŷ , the noise must sufficiently decrease yŷ from Eq . 4 and increase the closest logit value , yI . However , by constraining our final prediction ỹ from Fig . 1 on N such instances of Eq . 4 , the noise needs to change the orthogonal relationship between N pairwise logits . This motivates a function H ( · ) that is conditioned on N such pairwise logits . In Section 5 , we empirically show the robustness of our feature set . 2.2 INTROSPECTIVE FEATURE EXTRACTION . From Lemma 1 , the introspective feature is only dependent on the predicted class ŷ and the introspective class yI , making their span orthogonal to all other gradients . Hence rI = ∇WL , IJ ( yI , ŷ ) , I ∈ [ 1 , N ] , rI ∈ R^ { dL−1×1 } ( 5 ) Compare Eq . 5 against the introspective feature from Eq . 2 .
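The reduction from Eq. 2 to Eq. 5 can be checked numerically for a softmax cross-entropy instance of J (an assumption; the lemma is stated more generally): only the column of the full gradient matrix belonging to the introspective class itself needs to be kept, shrinking each feature from dL−1 × N to dL−1 × 1.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

d_prev, N, I = 4, 3, 2
rng = np.random.default_rng(1)
W_L = rng.normal(size=(d_prev, N))
y_feat = rng.normal(size=(d_prev,))
p = softmax(W_L.T @ y_feat)

y_I = np.zeros(N)
y_I[I] = 1.0
full_grad = np.outer(y_feat, p - y_I)  # Eq. 2 feature, d_{L-1} x N
reduced = y_feat * (p[I] - 1.0)        # Eq. 5 feature: column I only
# The retained column of the full gradient matches the reduced feature.
```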
Assuming that forward and backward passes through the final layer f_L(·) each have O(1) time complexity, the feed-forward prediction for a given x takes O(1) time. Given that f(·) is trained to classify between N classes, extracting the N introspective features requires N backpropagations and is hence O(N). Each r_I in Eq. 2 has dimensionality d_{L−1}×N; hence, for N features, the space complexity is O(N²×d_{L−1}). From Lemma 1, the introspective feature depends only on the predicted class ŷ and the introspective class y_I, making its span orthogonal to all other gradients. For the N introspective features in Eq. 5, the space complexity of r_I reduces from O(d_{L−1}×N²) to O(d_{L−1}×N). Note that the bottleneck in time complexity for N gradient extractions is the set of N serial backpropagations in Eq. 3. Building on Lemma 1, we present the following theorem. Theorem 1. Given a unique ordered pair (x, ŷ) and a trained network f(·), the gradients of a loss function J(y_I, f(x)), I ∈ [1, N], w.r.t. the classes when the y_I are N orthogonal one-hot vectors are equivalent to the gradient when y_I is a vector of all ones, under the second-order Taylor series approximation. Proof. Provided in Appendix B.2. The proof follows Lemma 1. Theorem 1 states that backpropagating a vector of all ones (1_N) is equivalent to backpropagating N one-hot vectors with ones at orthogonal positions. This reduces the time complexity from O(N) to a constant O(1), since we only require a single pass to backpropagate 1_N. Hence, our introspective feature is given by r_x = ∇_{W_L} J(1_N, ŷ), r_x ∈ ℝ^{d_{L−1}×N}, 1_N = 1^{N×1}, (6) where the LHS is now r_x instead of r_I from Eq. 5. The final introspective feature is a matrix of the same size as W_L, extracted in O(1) time with a space complexity of O(d_{L−1}×N). r_x is vectorized and scaled to [−1, 1] before being used in Sections 5 and 6 as the introspective features.
| The authors present a novel 2-stage technique to improve the classification accuracy of feedforward ANNs. In particular, after the feedforward pass, a so-called introspective stage occurs, the goal of which is to ascertain why the particular class label was provided rather than a different label. This stage, which tends to improve the accuracy of the class predictions, is modular and can be added onto networks under varying task conditions. | SP:13b833179b9d5ac0da30b91c0aac879796f6c3ca |
Introspective Learning : A Two-Stage approach for Inference in Neural Networks | 1 INTRODUCTION. Introspection is the act of looking into one's own mind (Boring, 1953). Classical introspection has its roots in philosophy. Locke (1847), the founder of empiricism, held that all human ideas come from experience. This experience is a result of both sensation and reflection. By sensation, one receives passive information using the sensory systems of sight, sound, and touch. Reflection is the objective observation of our own mental operations. Consider the task of differentiating a spoonbill from a flamingo and a crane. This task requires prior knowledge of some differentiating features between the birds. These features include the color and shape of the body and beak of all the birds. We first associate these features with our existing knowledge of birds and make a coarse decision that the given bird is a spoonbill. This is the sensing stage. Reflection involves questioning the coarse decision and asking why the bird cannot be a flamingo or a crane. If the answers are satisfactory, then an introspective decision that the bird is indeed a spoonbill is made. The observation of this reflection is introspection. In this paper, we adopt this differentiation between sensing and reflection to advocate for two-stage neural network architectures for perception-based applications. We first ground introspection based on existing neural networks. The above-mentioned task of differentiating a spoonbill from a flamingo and a crane is depicted in Fig. 1 for neural networks. A network f(·) is trained on a distribution X to classify data into N classes. The network learns notions about data samples when classifying them. These notions are stored as network weights W. Let y_feat be the logits projected before the final fully connected layer. We denote the final fully connected layer as f_L, where L is the layer number.
f_{L−1} is then the layer before the final fully connected layer. Using the weight parameters W_L, the output of the network ŷ is given by y_feat = f_{L−1}(x), ŷ = arg max(W_L^T y_feat), where f_{L−1}(x) ∈ ℝ^{d_{L−1}×1}, W_L ∈ ℝ^{d_{L−1}×N}, and hence W_L^T y_feat ∈ ℝ^{N×1}. (1) Hence, ŷ is the class in which the sensed features maximally correlate with the stored notion. This is the feed-forward prediction in Fig. 1. Existing recognition architectures, including VGG (Simonyan & Zisserman, 2015), ResNet (He et al., 2016), and DenseNet (Huang et al., 2017) among others, all sense and predict using Eq. 1. In Fig. 1, we depict our proposed introspective learning framework. An additional reflection stage extracts the introspective features as Not Detect features from x. Let r_1 and r_2 be the two introspective features. In this case, r_1 is the absence of the S-shaped neck in the spoonbill, and r_2 is the lack of white feathers in the given input image. We use a post-hoc visual explanation to depict these features¹. Note that there can be N such features for a sensing network f(·) trained to differentiate between N classes. These features are then combined to obtain the final introspective feature r_x. r_x is characteristic of the input image x and is passed through an introspective network H(·) to obtain the introspective prediction ỹ. We term the combination of both f(·) and H(·) as introspective learning. Note that the proposed reflection stage does not require a predefined knowledge base of introspective features. Rather, the r_i, i ∈ [1, N], in the reflection stage are extracted using the feed-forward prediction ŷ and the sensing network parameters. Hence, the r_i are Not Detect features based on f(·)'s notion of classes. H(r_x) predicts ỹ explicitly based on the differences between f(·)'s notions of classes. Not only should the network sense the feed-forward patterns, it must also satisfy H(·)'s N notions of differences.
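As a concrete illustration, the feed-forward prediction of Eq. 1 is a single matrix product followed by an arg max. Below is a minimal NumPy sketch; the dimensions d_{L−1} = 4 and N = 3 and the random weights are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 3                       # d = d_{L-1}: penultimate feature width, N: number of classes
W_L = rng.normal(size=(d, N))     # final fully connected layer weights, shape d_{L-1} x N
y_feat = rng.normal(size=(d, 1))  # y_feat = f_{L-1}(x), shape d_{L-1} x 1

logits = W_L.T @ y_feat           # shape N x 1
y_hat = int(np.argmax(logits))    # class whose stored notion maximally correlates with y_feat
```

Every architecture listed above (VGG, ResNet, DenseNet) ends in this same linear-layer-plus-arg-max step; only the feature extractor f_{L−1} differs.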
In this paper, we show that the inference process is more generalizable due to these N additional inferential constraints. Specifically, the introspective network is more robust to noise and is less prone to calibration errors. The challenge is to implicitly extract features that answer introspective questions without explicitly training on said questions, as is the norm in Visual Question Answering applications (Antol et al., 2015). We show that gradients w.r.t. the network parameters store notions about the differences between classes and can be used as introspective features. We first describe the methodology of introspective feature extraction in Section 2. We then analyze H(·) in Section 3. We show that H(·), as a simple multi-layer perceptron that introspects on ŷ, is more generalizable for the application of recognition in Section 5. We then illustrate the benefits of our two-stage architecture in other downstream tasks, including out-of-distribution detection, active learning, and image quality assessment, in Section 6. 2 INTROSPECTIVE FEATURES. In this section, we describe introspective features and implicitly extract them using the sensing network. We then analyze their extraction procedure and provide a methodology to accelerate it. Definition 2.1 (Introspection). Given a network f(·), a datum x, and the network's prediction f(x) = ŷ, introspection in f(·) is the measurement of the change induced in the network parameters when a label y_I is introduced as the label for x. This measurement is the gradient induced by a loss function J(y_I, ŷ) w.r.t. the network parameters. This definition of introspection is in accordance with the sensing and reflection stages in Fig. 1. The network's prediction ŷ is the output of the sensing stage, and the change induced by an introspective label y_I is the network reflecting on its decision ŷ as opposed to y_I. The combination of the two is introspection.
Note that introspection can occur when ŷ is contrasted against any trained label y_I, I ∈ [1, N]. For instance, in Fig. 1, the network is asked to reflect on its decision of spoonbill by considering the other labels y_I that x can take: flamingo and crane. ¹Post-hoc explanations are justifications made by a neural network after a decision has been made. They require human interpretation. Further details are provided in Appendix A. Reflection is the empirical risk that the network has predicted x as ŷ instead of y_I. Given the network parameters, this risk is measured through some loss function J(y_I, ŷ), where y_I is a one-hot vector with a one at the I-th location. The change that is induced in the network is given by the gradient of J(y_I, ŷ) w.r.t. the network parameters. In this paper, we introspect by reflecting on all possible classes. For an N-class classifier, there are N possible introspective classes and hence N possible gradients, each given by r_I = ∇_W J(y_I, ŷ), I ∈ [1, N]. Here, the r_I are the introspective features. Since we introspect based on classes, we measure the change in the network weights of the final fully connected layer. Hence the introspective features are given by r_I = ∇_{W_L} J(y_I, ŷ), I ∈ [1, N], r_I ∈ ℝ^{d_{L−1}×N}, (2) where W_L are the network weights of the final fully connected layer. Note that the final fully connected layer from Eq. 1 has dimensionality ℝ^{d_{L−1}×N}. For every x, Eq. 2 is applied N times to obtain N separate r_I. We first analyze these features before accelerating their extraction. 2.1 INTROSPECTIVE FEATURE ANALYSIS. Consider the extraction process in Eq. 2. Each r_I is a d_{L−1} × N matrix. Expressing the gradients in r_I separately w.r.t. the different filters in W_L, we have a row-wise concatenated set of gradients given by r_I = [∇_{W_{L,1}} J(y_I, ŷ); ∇_{W_{L,2}} J(y_I, ŷ); ∇_{W_{L,3}} J(y_I, ŷ); ...; ∇_{W_{L,N}} J(y_I, ŷ)], (3) where each W_{L,j} ∈ ℝ^{d_{L−1}×1} and r_I ∈ ℝ^{d_{L−1}×N}.
For all data x ∈ X the following lemma holds. Lemma 1. Given a unique ordered pair (x, ŷ) and a trained network f(·), the gradients of a loss function J(y_I, ŷ) w.r.t. the classes are pairwise orthogonal under the second-order Taylor series approximation, each class paired with the predicted class. Proof. Provided in Appendix B.1. Lemma 1 states that backpropagating class y_I does not provide any information to W_{L,j}, j ≠ I, and hence there is no need to use ∇_{W_{L,j}} J(y_I, ŷ), j ≠ I, as features when considering y_I. In Appendix B.1, we provide the complete proof when J(y_I, ŷ) is the cross-entropy loss. ∇_W J(y_I, ŷ) for an introspective class reduces to ∇_W J(y_I, ŷ) = −∇_W y_I + ∇_W log(y_ŷ²), (4) where y_ŷ is the logit associated with the predicted class. In Fig. 5, we use a network trained on the MNIST (LeCun et al., 1998) dataset to simulate a well-trained network, and we visualize the gradients from the final fully connected layer to demonstrate Eq. 4. Eq. 4 motivates the generalizable nature of our introspective features. Consider some noise added to x. To change the prediction ŷ, the noise must sufficiently decrease y_ŷ in Eq. 4 and increase the closest logit value y_I. However, by conditioning our final prediction ỹ from Fig. 1 on N instances of Eq. 4, the noise needs to change the orthogonal relationship between N pairwise logits. This motivates a function H(·) that is conditioned on N such pairwise logits. In Section 5, we empirically show the robustness of our feature set. 2.2 INTROSPECTIVE FEATURE EXTRACTION. From Lemma 1, the introspective feature depends only on the predicted class ŷ and the introspective class y_I, making its span orthogonal to all other gradients. Hence r_I = ∇_{W_{L,I}} J(y_I, ŷ), I ∈ [1, N], r_I ∈ ℝ^{d_{L−1}×1}. (5) Compare Eq. 5 against the introspective feature from Eq. 2.
Assuming that forward and backward passes through the final layer f_L(·) each have O(1) time complexity, the feed-forward prediction for a given x takes O(1) time. Given that f(·) is trained to classify between N classes, extracting the N introspective features requires N backpropagations and is hence O(N). Each r_I in Eq. 2 has dimensionality d_{L−1}×N; hence, for N features, the space complexity is O(N²×d_{L−1}). From Lemma 1, the introspective feature depends only on the predicted class ŷ and the introspective class y_I, making its span orthogonal to all other gradients. For the N introspective features in Eq. 5, the space complexity of r_I reduces from O(d_{L−1}×N²) to O(d_{L−1}×N). Note that the bottleneck in time complexity for N gradient extractions is the set of N serial backpropagations in Eq. 3. Building on Lemma 1, we present the following theorem. Theorem 1. Given a unique ordered pair (x, ŷ) and a trained network f(·), the gradients of a loss function J(y_I, f(x)), I ∈ [1, N], w.r.t. the classes when the y_I are N orthogonal one-hot vectors are equivalent to the gradient when y_I is a vector of all ones, under the second-order Taylor series approximation. Proof. Provided in Appendix B.2. The proof follows Lemma 1. Theorem 1 states that backpropagating a vector of all ones (1_N) is equivalent to backpropagating N one-hot vectors with ones at orthogonal positions. This reduces the time complexity from O(N) to a constant O(1), since we only require a single pass to backpropagate 1_N. Hence, our introspective feature is given by r_x = ∇_{W_L} J(1_N, ŷ), r_x ∈ ℝ^{d_{L−1}×N}, 1_N = 1^{N×1}, (6) where the LHS is now r_x instead of r_I from Eq. 5. The final introspective feature is a matrix of the same size as W_L, extracted in O(1) time with a space complexity of O(d_{L−1}×N). r_x is vectorized and scaled to [−1, 1] before being used in Sections 5 and 6 as the introspective features.
| In this paper, the authors propose a modification to standard neural networks used for object classification tasks to incorporate what they call "introspective learning". This consists of training a multi-layer perceptron (MLP) on the neural network's introspective features. These are obtained by calculating the gradients over the last-layer weights of the model on a loss function corresponding to posing an introspective question: why is the correct label A instead of B? The authors apply this procedure to several neural networks of the ResNet model family and show that the introspective networks have better generalization to distributional shifts and smaller calibration errors on datasets with the same classes and image sizes as CIFAR-10. Finally, they show further improvements in multiple applications. | SP:13b833179b9d5ac0da30b91c0aac879796f6c3ca |
Introspective Learning : A Two-Stage approach for Inference in Neural Networks | 1 INTRODUCTION. Introspection is the act of looking into one's own mind (Boring, 1953). Classical introspection has its roots in philosophy. Locke (1847), the founder of empiricism, held that all human ideas come from experience. This experience is a result of both sensation and reflection. By sensation, one receives passive information using the sensory systems of sight, sound, and touch. Reflection is the objective observation of our own mental operations. Consider the task of differentiating a spoonbill from a flamingo and a crane. This task requires prior knowledge of some differentiating features between the birds. These features include the color and shape of the body and beak of all the birds. We first associate these features with our existing knowledge of birds and make a coarse decision that the given bird is a spoonbill. This is the sensing stage. Reflection involves questioning the coarse decision and asking why the bird cannot be a flamingo or a crane. If the answers are satisfactory, then an introspective decision that the bird is indeed a spoonbill is made. The observation of this reflection is introspection. In this paper, we adopt this differentiation between sensing and reflection to advocate for two-stage neural network architectures for perception-based applications. We first ground introspection based on existing neural networks. The above-mentioned task of differentiating a spoonbill from a flamingo and a crane is depicted in Fig. 1 for neural networks. A network f(·) is trained on a distribution X to classify data into N classes. The network learns notions about data samples when classifying them. These notions are stored as network weights W. Let y_feat be the logits projected before the final fully connected layer. We denote the final fully connected layer as f_L, where L is the layer number.
f_{L−1} is then the layer before the final fully connected layer. Using the weight parameters W_L, the output of the network ŷ is given by y_feat = f_{L−1}(x), ŷ = arg max(W_L^T y_feat), where f_{L−1}(x) ∈ ℝ^{d_{L−1}×1}, W_L ∈ ℝ^{d_{L−1}×N}, and hence W_L^T y_feat ∈ ℝ^{N×1}. (1) Hence, ŷ is the class in which the sensed features maximally correlate with the stored notion. This is the feed-forward prediction in Fig. 1. Existing recognition architectures, including VGG (Simonyan & Zisserman, 2015), ResNet (He et al., 2016), and DenseNet (Huang et al., 2017) among others, all sense and predict using Eq. 1. In Fig. 1, we depict our proposed introspective learning framework. An additional reflection stage extracts the introspective features as Not Detect features from x. Let r_1 and r_2 be the two introspective features. In this case, r_1 is the absence of the S-shaped neck in the spoonbill, and r_2 is the lack of white feathers in the given input image. We use a post-hoc visual explanation to depict these features¹. Note that there can be N such features for a sensing network f(·) trained to differentiate between N classes. These features are then combined to obtain the final introspective feature r_x. r_x is characteristic of the input image x and is passed through an introspective network H(·) to obtain the introspective prediction ỹ. We term the combination of both f(·) and H(·) as introspective learning. Note that the proposed reflection stage does not require a predefined knowledge base of introspective features. Rather, the r_i, i ∈ [1, N], in the reflection stage are extracted using the feed-forward prediction ŷ and the sensing network parameters. Hence, the r_i are Not Detect features based on f(·)'s notion of classes. H(r_x) predicts ỹ explicitly based on the differences between f(·)'s notions of classes. Not only should the network sense the feed-forward patterns, it must also satisfy H(·)'s N notions of differences.
In this paper, we show that the inference process is more generalizable due to these N additional inferential constraints. Specifically, the introspective network is more robust to noise and is less prone to calibration errors. The challenge is to implicitly extract features that answer introspective questions without explicitly training on said questions, as is the norm in Visual Question Answering applications (Antol et al., 2015). We show that gradients w.r.t. the network parameters store notions about the differences between classes and can be used as introspective features. We first describe the methodology of introspective feature extraction in Section 2. We then analyze H(·) in Section 3. We show that H(·), as a simple multi-layer perceptron that introspects on ŷ, is more generalizable for the application of recognition in Section 5. We then illustrate the benefits of our two-stage architecture in other downstream tasks, including out-of-distribution detection, active learning, and image quality assessment, in Section 6. 2 INTROSPECTIVE FEATURES. In this section, we describe introspective features and implicitly extract them using the sensing network. We then analyze their extraction procedure and provide a methodology to accelerate it. Definition 2.1 (Introspection). Given a network f(·), a datum x, and the network's prediction f(x) = ŷ, introspection in f(·) is the measurement of the change induced in the network parameters when a label y_I is introduced as the label for x. This measurement is the gradient induced by a loss function J(y_I, ŷ) w.r.t. the network parameters. This definition of introspection is in accordance with the sensing and reflection stages in Fig. 1. The network's prediction ŷ is the output of the sensing stage, and the change induced by an introspective label y_I is the network reflecting on its decision ŷ as opposed to y_I. The combination of the two is introspection.
Note that introspection can occur when ŷ is contrasted against any trained label y_I, I ∈ [1, N]. For instance, in Fig. 1, the network is asked to reflect on its decision of spoonbill by considering the other labels y_I that x can take: flamingo and crane. ¹Post-hoc explanations are justifications made by a neural network after a decision has been made. They require human interpretation. Further details are provided in Appendix A. Reflection is the empirical risk that the network has predicted x as ŷ instead of y_I. Given the network parameters, this risk is measured through some loss function J(y_I, ŷ), where y_I is a one-hot vector with a one at the I-th location. The change that is induced in the network is given by the gradient of J(y_I, ŷ) w.r.t. the network parameters. In this paper, we introspect by reflecting on all possible classes. For an N-class classifier, there are N possible introspective classes and hence N possible gradients, each given by r_I = ∇_W J(y_I, ŷ), I ∈ [1, N]. Here, the r_I are the introspective features. Since we introspect based on classes, we measure the change in the network weights of the final fully connected layer. Hence the introspective features are given by r_I = ∇_{W_L} J(y_I, ŷ), I ∈ [1, N], r_I ∈ ℝ^{d_{L−1}×N}, (2) where W_L are the network weights of the final fully connected layer. Note that the final fully connected layer from Eq. 1 has dimensionality ℝ^{d_{L−1}×N}. For every x, Eq. 2 is applied N times to obtain N separate r_I. We first analyze these features before accelerating their extraction. 2.1 INTROSPECTIVE FEATURE ANALYSIS. Consider the extraction process in Eq. 2. Each r_I is a d_{L−1} × N matrix. Expressing the gradients in r_I separately w.r.t. the different filters in W_L, we have a row-wise concatenated set of gradients given by r_I = [∇_{W_{L,1}} J(y_I, ŷ); ∇_{W_{L,2}} J(y_I, ŷ); ∇_{W_{L,3}} J(y_I, ŷ); ...; ∇_{W_{L,N}} J(y_I, ŷ)], (3) where each W_{L,j} ∈ ℝ^{d_{L−1}×1} and r_I ∈ ℝ^{d_{L−1}×N}.
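To make the per-class extraction in Eq. 2 concrete, the sketch below computes the N gradients for the common special case of a linear final layer with softmax cross-entropy, where the gradient w.r.t. W_L has the closed form y_feat (p − y_I)^T; the dimensions and random values are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the logit column vector z
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
d, N = 4, 3                          # illustrative d_{L-1} and class count
W_L = rng.normal(size=(d, N))        # final fully connected layer
y_feat = rng.normal(size=(d, 1))     # penultimate features f_{L-1}(x)
p = softmax(W_L.T @ y_feat)          # network's predictive distribution, N x 1

# Eq. 2: one backpropagation per introspective class I.
r = []
for I in range(N):
    y_I = np.zeros((N, 1))
    y_I[I] = 1.0                     # one-hot introspective label
    r.append(y_feat @ (p - y_I).T)   # gradient w.r.t. W_L, shape d_{L-1} x N
```

Each r_I here is a full d_{L−1} × N matrix, which is exactly the O(N² × d_{L−1}) storage cost that Eq. 5 later reduces.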
For all data x ∈ X the following lemma holds. Lemma 1. Given a unique ordered pair (x, ŷ) and a trained network f(·), the gradients of a loss function J(y_I, ŷ) w.r.t. the classes are pairwise orthogonal under the second-order Taylor series approximation, each class paired with the predicted class. Proof. Provided in Appendix B.1. Lemma 1 states that backpropagating class y_I does not provide any information to W_{L,j}, j ≠ I, and hence there is no need to use ∇_{W_{L,j}} J(y_I, ŷ), j ≠ I, as features when considering y_I. In Appendix B.1, we provide the complete proof when J(y_I, ŷ) is the cross-entropy loss. ∇_W J(y_I, ŷ) for an introspective class reduces to ∇_W J(y_I, ŷ) = −∇_W y_I + ∇_W log(y_ŷ²), (4) where y_ŷ is the logit associated with the predicted class. In Fig. 5, we use a network trained on the MNIST (LeCun et al., 1998) dataset to simulate a well-trained network, and we visualize the gradients from the final fully connected layer to demonstrate Eq. 4. Eq. 4 motivates the generalizable nature of our introspective features. Consider some noise added to x. To change the prediction ŷ, the noise must sufficiently decrease y_ŷ in Eq. 4 and increase the closest logit value y_I. However, by conditioning our final prediction ỹ from Fig. 1 on N instances of Eq. 4, the noise needs to change the orthogonal relationship between N pairwise logits. This motivates a function H(·) that is conditioned on N such pairwise logits. In Section 5, we empirically show the robustness of our feature set. 2.2 INTROSPECTIVE FEATURE EXTRACTION. From Lemma 1, the introspective feature depends only on the predicted class ŷ and the introspective class y_I, making its span orthogonal to all other gradients. Hence r_I = ∇_{W_{L,I}} J(y_I, ŷ), I ∈ [1, N], r_I ∈ ℝ^{d_{L−1}×1}. (5) Compare Eq. 5 against the introspective feature from Eq. 2.
Assuming that forward and backward passes through the final layer f_L(·) each have O(1) time complexity, the feed-forward prediction for a given x takes O(1) time. Given that f(·) is trained to classify between N classes, extracting the N introspective features requires N backpropagations and is hence O(N). Each r_I in Eq. 2 has dimensionality d_{L−1}×N; hence, for N features, the space complexity is O(N²×d_{L−1}). From Lemma 1, the introspective feature depends only on the predicted class ŷ and the introspective class y_I, making its span orthogonal to all other gradients. For the N introspective features in Eq. 5, the space complexity of r_I reduces from O(d_{L−1}×N²) to O(d_{L−1}×N). Note that the bottleneck in time complexity for N gradient extractions is the set of N serial backpropagations in Eq. 3. Building on Lemma 1, we present the following theorem. Theorem 1. Given a unique ordered pair (x, ŷ) and a trained network f(·), the gradients of a loss function J(y_I, f(x)), I ∈ [1, N], w.r.t. the classes when the y_I are N orthogonal one-hot vectors are equivalent to the gradient when y_I is a vector of all ones, under the second-order Taylor series approximation. Proof. Provided in Appendix B.2. The proof follows Lemma 1. Theorem 1 states that backpropagating a vector of all ones (1_N) is equivalent to backpropagating N one-hot vectors with ones at orthogonal positions. This reduces the time complexity from O(N) to a constant O(1), since we only require a single pass to backpropagate 1_N. Hence, our introspective feature is given by r_x = ∇_{W_L} J(1_N, ŷ), r_x ∈ ℝ^{d_{L−1}×N}, 1_N = 1^{N×1}, (6) where the LHS is now r_x instead of r_I from Eq. 5. The final introspective feature is a matrix of the same size as W_L, extracted in O(1) time with a space complexity of O(d_{L−1}×N). r_x is vectorized and scaled to [−1, 1] before being used in Sections 5 and 6 as the introspective features.
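One way to see why a single backward pass with 1_N can replace N one-hot passes is that cross-entropy is linear in its target, so the gradient under the all-ones target equals the sum of the N one-hot gradients. The NumPy check below verifies this identity for a linear final layer (dimensions and weights are illustrative assumptions; the exact per-column statement of Theorem 1 additionally relies on the Taylor approximation of Lemma 1):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def grad_W(y, W_L, y_feat):
    """Gradient of J(y, z) = -sum_j y_j log softmax(z)_j w.r.t. W_L, with z = W_L^T y_feat."""
    p = softmax(W_L.T @ y_feat)
    return y_feat @ (y.sum() * p - y).T   # shape d_{L-1} x N

rng = np.random.default_rng(0)
d, N = 4, 3
W_L = rng.normal(size=(d, N))
y_feat = rng.normal(size=(d, 1))

# N backward passes with one-hot targets (Eq. 2) ...
g_onehots = sum(grad_W(np.eye(N)[:, [I]], W_L, y_feat) for I in range(N))
# ... versus a single backward pass with the all-ones target (Eq. 6).
g_ones = grad_W(np.ones((N, 1)), W_L, y_feat)
```

The two results agree exactly, which is the linearity that makes the O(1)-pass extraction of r_x possible.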
| The authors present a new inference pipeline for neural networks $f$ trained to solve classification problems. They augment the features processed by $f$ with the loss gradients with respect to the weights of the last layer. Their pipeline requires two steps: the first coincides with a standard evaluation of $f(x)$, while gradients are evaluated and processed in the second step to derive the final prediction. Experiments show slight improvements in classification accuracy and reductions in calibration errors. | SP:13b833179b9d5ac0da30b91c0aac879796f6c3ca |
Iterative Memory Network for Long Sequential User Behavior Modeling in Recommender Systems | 1 INTRODUCTION. Click-through rate (CTR) prediction is critical for recommender systems. User sequential modeling is the key to mining users' interests for accurate predictions. As the user sequence gets longer, particularly with lengths beyond 1000, the prediction task demands long-range dependency modeling, efficient memory storage, acceptable training speed, and real-time inference. Recurrent Neural Networks (RNNs) and the long short-term memory (LSTM) are employed by the early sequential recommenders (Hidasi et al., 2016; Hochreiter & Schmidhuber, 1997). Graves et al. (2014) show that LSTM forgets quickly and fails to generalize to sequences longer than 20. Many empirical results also verify that RNN-based sequential recommenders are surpassed by attention-based methods (Zhou et al., 2017; Kang & McAuley, 2018; Zhou et al., 2018; Pi et al., 2019). Lately, the self-attention mechanism has proven to benefit a wide range of application domains, such as machine translation (Vaswani et al., 2017), speech recognition (Chan et al., 2015), reading comprehension (Cui et al., 2016; Lin et al., 2017), and computer vision (Xu et al., 2015; Parmar et al., 2019). The self-attention mechanism attends to different positions in the sequence, captures the most important features, and allows the model to handle long-range dependencies. SASRec adapts the self-attentive Transformer architecture for sequential recommenders and empirically outperforms convolution-based and recurrence-based methods (Kang & McAuley, 2018). However, SASRec has a quadratic complexity with respect to the sequence length, limiting its scalability to long sequences. Research on efficient self-attention is based on either sparse attention (Li et al., 2020; Zhou et al., 2020; Child et al., 2019; Kitaev et al.
, 2020) or approximated attention (Wang et al., 2020), and these variants consequently remain less competitive than the full Transformer. SASRec is also unaware of the target item during the encoding process, which can harm its performance. Deep Interest Network (DIN) is the first architecture that adaptively learns the user interest representation from historical behaviors with respect to a particular target item (Zhou et al., 2018). However, DIN does not model dependencies between elements in the sequence. Research has shown the importance of learning long-range intra-sequence dependencies in sequential modeling (Vaswani et al., 2017). SASRec considers pairwise intra-sequence dependencies and the maximum length of signal traversal paths is O(1), while DIN models no intra-sequence dependencies. In this paper, we propose the Iterative Memory Network (IMN) for long sequential user behavior modeling. The target item is a memory trigger in IMN, continuously eliciting relevant information from the sequence. IMN crosses the target item and the user sequence early, walks over the sequence for multiple iterations, and repeatedly updates the memory vector with new information retrieved from the sequence. The contributions of this paper are summarized as follows:
• We propose the Iterative Memory Network (IMN), an end-to-end differentiable framework for sequential recommenders. To the best of our knowledge, it outperforms the state-of-the-art models for equal sequence lengths, with even more significant advantages on long sequential user behavior modeling. The framework is scalable to sequential recommenders with other objective functions.
• IMN is efficient, with O(L) complexity and an O(1) number of sequential operations, making it scalable to long sequences. IMN requires significantly less training and inference time, with greater memory efficiency.
Implemented on user sequences of length 1000, IMN is deployed successfully on an industrial e-commerce platform at 1500 QPS. The inference time is 30 ms. There is a significant 7.29% CTR improvement over the DIN-based industrial baseline.
• To the best of our knowledge, IMN is the first sequential recommender that early-crosses the user sequence and the target item. Our ablation study validates empirically that early crossing before further sequence encoding benefits the model performance compared to delayed crossing.
• IMN models both long-range intra-sequence dependencies and target-sequence dependencies in O(L) complexity. In IMN, the memory vector encapsulates the sequence content walked over. The multi-way attention between the sequence, the target item, and the memory allows for intra-sequence signal passing. Self-attention achieves this with O(L²) complexity, while the target-attention mechanism allows for no intra-sequence dependency modeling.
2 RELATED WORK. Sequential Recommender Systems. Sequential recommenders predict the user's next behavior based on his past activities. Recurrent Neural Networks (RNNs) are introduced for early sequential recommenders (Wu et al., 2016; Hidasi et al., 2016). Yet, RNNs are difficult to parallelize and suffer from the problem of fast forgetting (Graves et al., 2014). First introduced in the encoder-decoder framework, attention has been shown effective at replacing RNNs to encode sequences (Bahdanau et al., 2016; Vaswani et al., 2017). Attention-based recommender systems include self-attention based methods (Kang & McAuley, 2018; Zhou et al., 2017), target-attention based methods (Zhou et al., 2018), and methods based on the integration of RNNs and attention (Zhou et al., 2019). Target-attention based methods learn the weights for sequence items with respect to the target item. Memory Networks.
Memory Networks have wide applications in Question Answering (QA), finding facts for a query from a knowledge database (Chaudhari et al., 2021). Neural Turing Machines (NTM) introduce the addressing-read-write mechanism for memory search and update (Graves et al., 2014). Weston et al. (2014) propose the general Memory Network architecture; DMN, DMTN and DMN+ are subsequent works (Kumar et al., 2016; Xiong et al., 2016; Ramachandran & Sohmshetty, 2017). In recommender systems, MIMN uses a GRU as the controller to update user memory slots with each newly clicked item (Pi et al., 2019).

3 PROBLEM FORMULATION

The recommender system models the user-item interaction as a matrix C = {c_mn}_{M×N}, where M and N are the total numbers of users and items respectively. The interaction is either explicit ratings (Koren, 2009) or implicit feedback (Agarwal et al., 2009); the CTR prediction task is usually based on implicit feedback. We denote u ∈ U as a user and i ∈ I as an item; user u_m clicking on item i_n sets c_mn = 1, and all other entries are 0. User sequential modeling predicts the probability of a user u ∈ U clicking on the target item i ∈ I based on their past behaviors i_1, i_2, ..., i_L, where L is the length of the user sequence. The sequence is usually in chronological order. Our paper focuses on long sequential user behavior modeling, where the length L of the user behaviors is on the scale of thousands. We use the term 'target-sequence dependencies' to refer to dependencies between the target item and the sequence items, and 'intra-sequence dependencies' to refer to dependencies between the items within the sequence.

4 ITERATIVE MEMORY NETWORK

We illustrate the Iterative Memory Network architecture in Fig. 1. The encoder layer converts the user behavior features, the temporal features, and the target item features to embedding vectors.
The Iterative Memory Update module is the major component in our architecture, modeling both target-sequence dependencies and intra-sequence dependencies. The Memory Enhancement module repeatedly elicits clearer memory with the target item and outputs the final user interest vector. Next, we discuss the framework in detail.

4.1 ENCODER LAYER

We use e_i to denote the embedding of the sequence item i. Each user behavior consists of not only the clicked item but also the action time. Different users have different action patterns, so the action time carries important temporal information. Since it is difficult to learn a good embedding directly from continuous time features, we bucketize the time feature into multiple granularities and perform categorical feature look-ups. We slice the elapsed time with respect to the ranking time into intervals whose lengths increase exponentially; in other words, we map the time ranges [0, 1), [1, 2), [2, 4), ..., [2^k, 2^(k+1)) to the categorical features 0, 1, 2, ..., k+1. Positional encodings are added to represent the relative positions of sequence items. The timestamp encoding and the positional encoding have the same dimension as the item embedding e_i so that they can be directly summed. We encode the j-th user behavior e_b^j as

e_b^j = e_i ⊕ e_t ⊕ e_p    (1)

where ⊕ denotes element-wise summation. We denote the embedding of the target item as v_T; it shares the item-id embedding look-up table with the sequence item embeddings e_i.

4.2 MULTI-WAY ATTENTION

We calculate the similarities between the sequence item, the target item and the memory, then concatenate them into a single feature vector. Similarities are measured by both the element-wise difference and the Hadamard product (denoted by ◦) to capture multi-faceted similarity measures.
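The exponential time bucketization described in the encoder layer can be sketched as a small helper. The function name and the clamping of out-of-range times to bucket k+1 are illustrative assumptions, not details from the paper.

```python
import math

def bucketize_elapsed_time(elapsed: float, k: int = 10) -> int:
    """Map elapsed time to a categorical feature id:
    [0,1) -> 0, [1,2) -> 1, [2,4) -> 2, ..., [2^k, 2^(k+1)) -> k+1.
    Times beyond the last interval are clamped to bucket k+1 (assumption)."""
    if elapsed < 1:
        return 0
    return min(int(math.floor(math.log2(elapsed))) + 1, k + 1)
```

Each bucket id then indexes a learned embedding table, just like any other categorical feature.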
α(e, v, m) = [e − m, e − v, e ◦ m, e ◦ v]    (2)

where e is the embedding vector of a sequence item, m is the memory embedding vector and v is the embedding vector of the target item. We feed the feature vector into a two-layer point-wise feed-forward network with sigmoid activations. Here point-wise means the fully-connected feed-forward network is applied to each sequence item separately and identically.

a(e, v, m) = σ(W^(2) σ(W^(1) α(e, v, m) + b^(1)) + b^(2))    (3)

We have also experimented with a softmax activation for the second layer, with only a very slight change in model performance. The multi-way attention mechanism encodes similarities between the user behavior sequence and the target item to extract the user's interest specific to a particular target item. It also encodes similarities between sequence items and the memory. As the memory vector encapsulates the sequence information, the multi-way attention mechanism can model intra-sequence dependencies between items in the sequence and reduce the maximum length of signal traversal paths to O(1).

4.3 ITERATIVE MEMORY UPDATE MODULE

In the Iterative Memory Update module, the target item acts as the memory trigger, continuously waking up relevant memory in the sequence. The memory vector is updated after each iteration with the relevant information retrieved during that iteration. We use a GRU to model the memory update process. The initial memory m_0 is set to the target item vector, representing that the user's initial memory of the target item is the target item itself:

m_0 = v_T    (4)

For iteration t, we calculate the user interest representation v_u^t as a weighted sum pooling of the sequence item vectors, with weights derived from Eq. (3):

v_u^t = f(v_T, e_b^1, e_b^2, ..., e_b^L) = Σ_{j=1}^{L} a(e_b^j, v_T, m_{t−1}) e_b^j = Σ_{j=1}^{L} w_j e_b^j    (5)

where e_b^1, e_b^2, ..., e_b^L is the list of user behavior embedding vectors, v_T is the embedding vector of the target item T and m_{t−1} is the memory embedding vector. We use the user interest representation v_u^t after iteration t as the input to update the memory m_{t−1}:

m_t = GRU(v_u^t, m_{t−1})    (6)

Since the memory is updated with a weighted sum pooling of the sequence item embeddings, the memory contains information on the sequence items. As both the memory and the sequence contain information about the sequence items, the multi-way attention models intra-sequence dependencies. The architecture models co-occurrence beyond the (target item, sequence item) pair. For example, suppose the target item is rum and the user sequence contains lime and peppermint. With the target attention mechanism, neither the attention weight between the pair (peppermint, rum) nor that between the pair (lime, rum) is high. With our IMN architecture, after the first pass over the sequence, the memory vector contains information about both peppermint and rum. When it meets the item lime again, the memory of peppermint and rum awakens the item lime, since the triplet (rum, peppermint, lime) is the recipe for a Mojito and is likely to co-occur multiple times in different samples. Hence, with lime and peppermint in the sequence, the likelihood of clicking rum increases. While the target attention mechanism finds the items that co-occur frequently with the target item, our IMN finds the composite group of the user's behavior items that leads the user to click the target item. Furthermore, IMN is structurally different from Memory Networks, which update the memory with each incoming input and use the updated memory for computations on the next input. That sequential nature makes them difficult to parallelize, and hence infeasible to deploy in recommender systems that need real-time responses.
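To make Eqs. (2)–(6) concrete, here is a minimal NumPy sketch of a few Iterative Memory Update passes. All dimensions, the hidden width of the attention MLP, the random toy embeddings and the hand-rolled GRU cell are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d, hidden = 6, 8, 16           # sequence length, embedding dim, MLP width (all illustrative)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multiway_attention(E, v, m, W1, b1, W2, b2):
    """Eqs. (2)-(3): per-item features [e-m, e-v, e◦m, e◦v] -> scalar weight."""
    feats = np.concatenate([E - m, E - v, E * m, E * v], axis=1)   # (L, 4d)
    h = sigmoid(feats @ W1 + b1)                                   # (L, hidden)
    return sigmoid(h @ W2 + b2).squeeze(-1)                        # (L,) weights w_j

def gru_update(x, m, P):
    """Minimal GRU cell for Eq. (6): m_t = GRU(v_u^t, m_{t-1})."""
    z = sigmoid(x @ P["Wz"] + m @ P["Uz"])                         # update gate
    r = sigmoid(x @ P["Wr"] + m @ P["Ur"])                         # reset gate
    h = np.tanh(x @ P["Wh"] + (r * m) @ P["Uh"])                   # candidate state
    return (1 - z) * m + z * h

E = rng.normal(size=(L, d))       # user behavior embeddings e_b^j from Eq. (1)
v_T = rng.normal(size=d)          # target item embedding
W1, b1 = 0.1 * rng.normal(size=(4 * d, hidden)), np.zeros(hidden)
W2, b2 = 0.1 * rng.normal(size=(hidden, 1)), np.zeros(1)
P = {name: 0.1 * rng.normal(size=(d, d)) for name in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}

m = v_T.copy()                    # Eq. (4): m_0 = v_T
for t in range(3):                # a few sequence walks
    w = multiway_attention(E, v_T, m, W1, b1, W2, b2)
    v_u = w @ E                   # Eq. (5): weighted sum pooling over the sequence
    m = gru_update(v_u, m, P)     # Eq. (6): memory update
```

Each pass costs O(L) in the sequence length, matching the complexity claim, and the L attention weights within a pass are independent of one another, so they can be computed in parallel.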
| This paper tackles the sequential recommendation task: the authors propose the Iterative Memory Network (IMN), an end-to-end differentiable framework for long sequential user behaviour modeling. The main contribution of the paper is the IMN framework, which is efficient in both memory and computational complexity. Specifically, the authors propose an iterative memory update module with multi-way attention, together with a memory enhancement module. | SP:0fd97bc987bb46c31387da1cfedecff4f5deda9e |
Iterative Memory Network for Long Sequential User Behavior Modeling in Recommender Systems | 1 INTRODUCTION

Click-through rate (CTR) prediction is critical for recommender systems, and user sequential modeling is the key to mining users' interests for accurate predictions. As the user sequence gets longer, particularly beyond length 1000, the prediction task demands long-range dependency modeling, efficient memory storage, acceptable training speed and real-time inference. Recurrent Neural Networks (RNNs) and the long short-term memory (LSTM) were employed by early sequential recommenders (Hidasi et al., 2016; Hochreiter & Schmidhuber, 1997). Graves et al. (2014) show that LSTM forgets quickly and fails to generalize to sequences longer than 20. Many empirical results also verify that RNN-based sequential recommenders are surpassed by attention-based methods (Zhou et al., 2017; Kang & McAuley, 2018; Zhou et al., 2018; Pi et al., 2019). Lately, the self-attention mechanism has proven to benefit a wide range of application domains, such as machine translation (Vaswani et al., 2017), speech recognition (Chan et al., 2015), reading comprehension (Cui et al., 2016; Lin et al., 2017) and computer vision (Xu et al., 2015; Parmar et al., 2019). Self-attention attends to different positions in the sequence, captures the most important features and allows the model to handle long-range dependencies. SASRec adapts the self-attentive Transformer architecture for sequential recommenders and empirically outperforms convolution-based and recurrence-based methods (Kang & McAuley, 2018). However, SASRec has quadratic complexity with respect to the sequence length, limiting its scalability to long sequences. Research on efficient self-attention is based on either sparse attention (Li et al., 2020; Zhou et al., 2020; Child et al., 2019; Kitaev et al.
, 2020) or approximated attention (Wang et al., 2020). | The paper talks about a simplification of transformer architectures.
The original Transformers employ dense attention between all tokens in a sequence for sequence comprehension. In this work, however, the focus is on the retrieval of the last item: it therefore only evaluates attention between the proposed candidate item and the preceding items in the user history. The other novelty, as I see it, is the replacement of residual connections between layers with a GRU layer, though its impact is less discussed. The work is successfully deployed in an industrial recommender system, showing significant improvements in click-through rate. | SP:0fd97bc987bb46c31387da1cfedecff4f5deda9e |
Iterative Memory Network for Long Sequential User Behavior Modeling in Recommender Systems | 1 INTRODUCTION . Click-through rate ( CTR ) prediction is critical for recommender systems . User sequential modeling is the key to mine users ’ interest for accurate predictions . As the user sequence gets longer , particularly with lengths longer than 1000 , the prediction task requires extraordinary long-range dependency modeling , efficient memory storage , acceptable training speed and real-time inference . Recurrent Neural Networks ( RNNs ) and the long short-term memory ( LSTM ) are employed by the early sequential recommenders ( Hidasi et al. , 2016 ; Hochreiter & Schmidhuber , 1997 ) . Graves et al . ( 2014 ) prove that LSTM forgets quickly and fails to generalize to sequences longer than 20 . Many empirical results also verify that RNN-based sequential recommenders are surpassed by attentionbased methods ( Zhou et al. , 2017 ; Kang & McAuley , 2018 ; Zhou et al. , 2018 ; Pi et al. , 2019 ) . Lately , the self-attention mechanism has proven to benefit a wide range of application domains , such as machine translation ( Vaswani et al. , 2017 ) , speech recognition ( Chan et al. , 2015 ) , reading comprehension ( Cui et al. , 2016 ; Lin et al. , 2017 ) and computer vision ( Xu et al. , 2015 ; Parmar et al. , 2019 ) . The self-attention mechanism attends to different positions in the sequence , captures the most important features and allows the model to handle long-range dependencies . SASRec adapts the self- attentive Transformer architecture for sequential recommenders and outperforms convolution-based and recurrence-based methods empirically ( Kang & McAuley , 2018 ) . However , SASRec has a quadratic complexity with respect to the sequence length , limiting its scalability to long sequences . Research on efficient self-attention is based on either sparse attention ( Li et al. , 2020 ; Zhou et al. , 2020 ; Child et al. , 2019 ; Kitaev et al. 
, 2020 ) or approximated attention ( Wang et al. , 2020 ) , and consequently incompetent against Transformer . SASRec is also unaware of the target item during the encoding process , which could harm its performance . Deep Interest Network ( DIN ) is the first architecture that adaptively learns the user interest representation from historical behaviors with respect to a particular target item ( Zhou et al. , 2018 ) . However , DIN does not model dependencies between elements in the sequence . Research has shown the importance of learning long-range intra-sequence dependencies in sequential modeling ( Vaswani et al. , 2017 ) . SASRec considers pairwise intra-sequence dependencies and the maximum length of signal traversal paths is O ( 1 ) , while DIN models no intra-sequence dependencies . In this paper , we propose the Iterative Memory Network ( IMN ) for long sequential user behavior modeling . The target item is a memory trigger in IMN , continuously eliciting relevant information from the sequence . IMN early crosses the target item and the user sequence , walks over the sequence for multiple iterations and repeatedly updates the memory vector with new information retrieved from the sequence . The contributions of this paper are summarized as follows : • We propose the Iterative Memory Network ( IMN ) , an end-to-end differentiable framework for sequential recommenders . To the best of our knowledge , it outperforms the state-of-the-art models for equal sequence lengths , with even more significant advantages on long sequential user behavior modeling . The framework is scalable to sequential recommenders with other objective functions . • IMN is efficient with O ( L ) complexity and O ( 1 ) number of sequential operations , making it scalable to long sequences . IMN requires significantly less training and inference time , with greater memory efficiency . 
Implemented on user sequences of length 1000 , IMN is deployed successfully on an industrial E-commerce platform with 1500 QPS . The inference time is 30ms . There is a significant 7.29 % CTR improvement over the DIN-based industrial baseline . • To the best of our knowledge , IMN is the first sequential recommender that early crosses the user sequence and the target item . Our ablation study validates empirically that early crossing before further sequence encoding benefits the model performance compared to delayed crossing . • IMN models both long-range intra-sequence dependencies and target-sequence dependencies in O ( L ) complexity . In IMN , the memory vector encapsulates the sequence content walked over . The multi-way attention between the sequence , the target item and the memory allows for intrasequence signal passing . Self-attention achieves this with O ( L2 ) complexity , and the target attention mechanism allows for no intra-sequence dependency modeling . 2 RELATED WORK . Sequential Recommender Systems . Sequential recommenders predict the user ’ s next behavior based on his past activities . Recurrent Neural Networks ( RNNs ) are introduced for early sequential recommenders ( Wu et al. , 2016 ; Hidasi et al. , 2016 ) . Yet , RNNs are difficult to parallelize and suffer from the problem of fast forgetting ( Graves et al. , 2014 ) . First introduced in the encoder-decoder framework , attention is shown effective to replace RNNs to encode sequences ( Bahdanau et al. , 2016 ; Vaswani et al. , 2017 ) . Attention-based recommender systems include self-attention based methods ( Kang & McAuley , 2018 ; Zhou et al. , 2017 ) , target-attention based methods ( Zhou et al. , 2018 ) and methods based on the integration between RNN and attention ( Zhou et al. , 2019 ) . Target-attention based methods learn the weights for sequence items with respect to the target item . Memory Networks . 
Memory Networks have wide applications in Question Answering ( QA ) , finding facts for a query from the knowledge database ( Chaudhari et al. , 2021 ) . Neural Turing Machines ( NTM ) introduces the addressing-read-write mechanism for memory searching and update ( Graves et al. , 2014 ) . Weston et al . ( 2014 ) proposes the general architecture for Memory Networks . DMN , DMTN and DMN+ are the subsequent research ( Kumar et al. , 2016 ; Xiong et al. , 2016 ; Ramachandran & Sohmshetty , 2017 ) . In recommender systems , MIMN uses GRU as the controller to update user memory slots with each new clicked item ( Pi et al. , 2019 ) . 3 PROBLEM FORMULATION . The recommender system models the user-item interaction as a matrix C = { cmn } M×N , where M and N are the total number of users and items respectively . The interaction is either explicit ratings ( Koren , 2009 ) or implicit feedback ( Agarwal et al. , 2009 ) . The CTR prediction task is usually based on implicit feedback . We denote u ∈ U as user and i ∈ I as item , and the user um clicking on the item in makes cmn 1 and others 0 . User sequential modeling predicts the probability of a user u ∈ U clicking on the target item i ∈ I based on his past behaviors , i1 , i2 , ... , iL , where L is the length of the user sequence . The sequence is usually in chronological order . Our paper focuses on long sequential user behavior modeling , where the length of the user behaviors L is on the scale of thousands . We use the term ‘ target-sequence dependencies ’ to refer to dependencies between the target item and the sequence items and ‘ intra-sequence dependencies ’ to refer to dependencies between the items within the sequence . 4 ITERATIVE MEMORY NETWORK . We illustrate the Iterative Memory Network architecture in Fig.1 . The encoder layer converts user behavior features , the temporal features , and the target item features to embedding vectors . 
The Iterative Memory Update module is the major component in our architecture , modeling both targetsequence dependencies and intra-sequence dependencies . The Memory Enhancement module repeatedly elicits clearer memory with the target item and outputs the final user interest vector . Next , we discuss the framework in detail . 4.1 ENCODER LAYER . We use ei to denote the embedding for the sequence item i . Each user behavior consists of not only the clicked item ei , but also the action time . Different users have different action patterns , thus the action time contains important temporal information . Since it is difficult to learn a good embedding directly with continuous time features , we bucketize the time feature into multiple granularities and perform categorical feature look-ups . We slice the elapsed time with respect to the ranking time into intervals whose gap length increases exponentially . In other words , we map the time in range [ 0,1 ) , [ 1,2 ) , [ 2,4 ) , ... , [ 2k , 2k+1 ) to categorical features 0,1,2 , ... k + 1 . Positional encodings are added to represent the relative positions of sequence items . The timestamp encoding and the positional encoding have the same dimension as that of the item embedding ei to be directly summed . We encode the jth user behavior e j b as ejb = ei ⊕ et ⊕ ep ( 1 ) where ⊕ denotes element-wise sum-up . We denote the embedding for the target item as vT . It shares the item id embedding look-up table with the item id embedding ei for the sequence . 4.2 MULTI-WAY ATTENTION . We calculate the similarities between the sequence item , the target item and the memory . Then we concatenate them into a single feature vector . Similarities are in both Euclidean distance and the Hadamard distance ( denoted by ◦ ) to capture multi-faceted similarity measures . 
$\alpha(e, v, m) = [\,e - m,\; e - v,\; e \circ m,\; e \circ v\,]$ (2), where $e$ is the embedding vector of a sequence item, $m$ is the memory embedding vector and $v$ is the embedding vector of the target item. We feed the feature vector into a two-layer point-wise feed-forward network with sigmoid as the activation function. Here point-wise means the fully-connected feed-forward network is applied to each sequence item separately and identically: $a(e, v, m) = \sigma\big(W^{(2)}\,\sigma(W^{(1)}\alpha(e, v, m) + b^{(1)}) + b^{(2)}\big)$ (3). We have also experimented with the softmax activation function for the second layer; the change in model performance is very slight. The multi-way attention mechanism encodes similarities between the user behavior sequence and the target item to extract the user's interest specific to a particular target item. It also encodes similarities between sequence items and the memory. As the memory vector encapsulates the sequence information, the multi-way attention mechanism can model intra-sequence dependencies between items in the sequence and reduce the maximum length of signal traversal paths to $O(1)$. 4.3 ITERATIVE MEMORY UPDATE MODULE. In the Iterative Memory Update module, the target item acts as the memory trigger, continuously waking up relevant memory in the sequence. The memory vector is updated after each iteration with the relevant information retrieved in that iteration. We use a GRU to model the memory update process. The initial memory is set from the target item vector, $m^0 = v_T$ (4), to represent that the user's initial memory of the target item is the target item itself. For iteration $t$, we calculate the user interest representation $v^t_u$ as a weighted sum pooling of sequence item vectors, with weights derived from Eq. (3): $v^t_u = f(v_T, e^1_b, e^2_b, \ldots, e^L_b) = \sum_{j=1}^{L} a(e^j_b, v_T, m^{t-1})\, e^j_b = \sum_{j=1}^{L} w_j e^j_b$ (5), where $e^1_b, e^2_b, \ldots, e^L_b$ is the list of user behavior embedding vectors, $v_T$ is the embedding vector of the target item $T$, and $m^{t-1}$ is the memory embedding vector. We use the user interest representation $v^t_u$ after iteration $t$ as the input to update the memory: $m^t = \mathrm{GRU}(v^t_u, m^{t-1})$ (6). Since the memory is updated with a weighted sum pooling of sequence item embeddings, the memory contains information about the sequence items. As both the memory and the sequence contain information about the sequence items, the multi-way attention models intra-sequence dependencies. The architecture models co-occurrence beyond the (target item, sequence item) pair. For example, suppose the target item is rum and the user sequence contains lime and peppermint. With the target attention mechanism, neither the attention weight for the pair (peppermint, rum) nor that for the pair (lime, rum) is high. With our IMN architecture, after the first pass over the sequence, the memory vector contains information about both peppermint and rum. When it meets the item lime again, the memory of peppermint and rum awakens the item lime, since the triplet (rum, peppermint, lime) is the recipe for a Mojito and is likely to co-occur multiple times in different samples. Hence, with lime and peppermint in the sequence, the likelihood of clicking rum increases. While the target attention mechanism finds the items that co-occur frequently with the target item, our IMN finds the composite group of behavior items that leads the user to click the target item. Furthermore, IMN is structurally different from Memory Networks, which update the memory with each incoming input and use the updated memory for computations on the next input; that sequential nature makes them difficult to parallelize, hence infeasible to deploy in recommender systems that need real-time responses.
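The per-iteration computation described above, multi-way attention weights from Eq. (3), weighted sum pooling as in Eq. (5), and the GRU memory write of Eq. (6), can be sketched in a few lines. The following is a minimal NumPy forward pass; the dimensions, the random placeholder weights, the two-layer scorer and the GRU parametrization are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8       # embedding dimension (illustrative)
L = 5       # behavior sequence length
T = 3       # number of memory-update iterations

# Hypothetical random parameters; in the paper these are learned.
W1 = rng.normal(scale=0.1, size=(16, 4 * d)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(1, 16));     b2 = np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_weight(e, v, m):
    # Eq. (2): multi-way similarity features, then Eq. (3): two-layer MLP scorer.
    feats = np.concatenate([e - m, e - v, e * m, e * v])
    return sigmoid(W2 @ sigmoid(W1 @ feats + b1) + b2)[0]

def gru_cell(x, h, p):
    # A standard GRU cell used as the memory writer, Eq. (6).
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))
    return (1 - z) * h + z * h_tilde

gru = {k: rng.normal(scale=0.1, size=(d, d))
       for k in ["Wz", "Uz", "Wr", "Ur", "Wh", "Uh"]}

e_b = rng.normal(size=(L, d))   # encoded user behaviors e_b^j
v_T = rng.normal(size=d)        # target item embedding

m = v_T.copy()                  # Eq. (4): m^0 = v_T
for t in range(T):
    w = np.array([attention_weight(e_b[j], v_T, m) for j in range(L)])
    v_u = (w[:, None] * e_b).sum(axis=0)   # Eq. (5): weighted sum pooling
    m = gru_cell(v_u, m, gru)              # Eq. (6): memory update

# m is the final user interest vector after T refinement passes.
```

Note that each pass recomputes all $L$ attention weights against the current memory, which is what lets later passes react to information gathered in earlier ones.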
| This paper proposes an "iterative memory network" for long user sequence modeling, such as CTR prediction in ads and recommender systems. The basic idea is to use an iteratively updated memory vector that interacts with the items in the user sequence and the target item, enabling higher-order intra-sequence interaction without the need for self-attention. Experiments are conducted on two offline datasets, comparing against several attention- and memory-network-based methods. | SP:0fd97bc987bb46c31387da1cfedecff4f5deda9e |
Path Integrals for the Attribution of Model Uncertainties | 1 INTRODUCTION . Estimating and understanding model uncertainties is of key importance in Bayesian inferential settings , which find applications in domains as diverse as natural language processing ( Xiao & Wang , 2019 ) , stochastic processes ( Rao & Teg , 2013 ) , network analysis ( Perez et al. , 2018 ) or image processing ( Kendall & Gal , 2017 ) , to name only a few . In contrast with model scores , model uncertainties manifest aspects of a system or data generating process that are not exactly known ( Hüllermeier & Waegeman , 2021 ) , and can be decomposed across aleatoric and epistemic components that help scrutinize different aspects in the functioning of a model , and can facilitate interpretability or fairness assessments in important machine learning applications ( Awasthi et al. , 2021 ) . Recently , there has been a growing interest in the study of methods for uncertainty estimation and decomposition ( e.g . Depeweg et al. , 2018 ; Smith & Gal , 2018 ; Van Amersfoort et al. , 2020 ; Tuna et al. , 2021 ) for purposes such as procuring adversarial examples , active learning or out-of-distribution detection . Most importantly , recent work has proposed counterfactual mechanisms for the interpretability of model uncertainties ( Van Looveren & Klaise , 2019 ; Antoran et al. , 2021 ; Schut et al. , 2021 ) , as well as their attribution to individual input features , such as pixels in an image . These methods proceed by identifying small adversarial or in-distribution variations in the raw input , s.t . predictive uncertainties in a model output are reduced . Then , attributions to individual pixels , words or categories are commonly assigned by direct comparison . 
This can facilitate the understanding of the strengths and weaknesses of varied probabilistic models; however, the optimization task to produce such counterfactuals requires a good balance between reducing uncertainties and minimising changes to original features, which is hard to achieve in practice. Most importantly, these methods do not satisfy commonly desired properties associated with modern importance attribution techniques (Sundararajan et al., 2017), such as completeness or implementation invariance. In this paper, we present a novel framework for the attribution of predictive uncertainties, applicable to Bayesian differentiable models. We leverage path integrals (Sundararajan et al., 2017) along with in-distribution curves (Jha et al., 2020), and we propose aggregating attributions over paths starting at a reference counterfactual which bears no predictive uncertainty. We ensure that completeness and additional desirable properties are satisfied; hence, uncertainties are completely explained by (and decomposed over) pixels in an image. We validate our approach by direct comparison with recently introduced counterfactual CLUE explanations (Antoran et al., 2021), as well as popular interpretability methods such as integrated gradients (Sundararajan et al., 2017), LIME (Ribeiro et al., 2016) or kernelSHAP (Lundberg & Lee, 2017), which we adapt for the attribution of predictive uncertainties. Experiments on benchmark image data sets show that, in comparison to competing alternatives, our proposed method offers sparse and easily interpretable attributions, always limited to relevant super-pixels. Thus, we offer improved means to understand the interplay between raw inputs and aleatoric/epistemic uncertainties in deep models. 2 UNCERTAINTY ATTRIBUTIONS. We focus our presentation on a classification task with a neural classifier $f : \mathbb{R}^n \times \mathcal{W} \to \Delta^{|C|-1}$ of a fixed architecture.
The classifier maps feature vectors $x \in \mathbb{R}^n$ along with network weights $w \in \mathcal{W}$ to an element in the standard $(|C|-1)$-simplex, which represents membership probabilities across classes in a set $C$. On training $f$ within an (approximate) Bayesian setting, we commonly obtain a posterior over the hypothesis space of models, i.e. a distribution $\pi(w|D)$ over weights conditioned on the available training data $D = \{x_i, c_i\}_{i=1,2,\ldots}$. Popular approaches to procure such a posterior often differ in how they incorporate prior knowledge, and include dropout (Srivastava et al., 2014), Bayes-by-Backprop (Blundell et al., 2015) or SG-HMC (Springenberg et al., 2016). A model score for classification with a new data point $x^\star \in \mathbb{R}^n$ is derived from the posterior predictive distribution by marginalising over posterior weights, i.e.

$$\pi(x^\star|D) = \int_{\mathcal{W}} f(x^\star, w)\, \pi(w|D)\, dw = \mathbb{E}_{w|D}[f(x^\star, w)], \qquad (1)$$

and is easily approximated as $\frac{1}{N}\sum_{i=1}^{N} f(x^\star, w_i)$, with weight samples $w_i \sim \pi(w|D)$, $i = 1, \ldots, N$. In the following, we are concerned with the entropy as a measure of uncertainty:

$$H(x|D) = -\sum_{c \in C} \mathbb{E}_{w|D}[f_c(x, w)] \cdot \log \mathbb{E}_{w|D}[f_c(x, w)], \qquad (2)$$

where $f_c(x, w)$ represents the probability of class-$c$ membership. Remark. Concepts in this paper extend trivially to varied representations of uncertainty in classification and regression settings; details are omitted for simplicity of presentation. The entropy term in (2) may further be decomposed through the law of iterated variances (Kendall & Gal, 2017) so as to yield an aleatoric term

$$H_a(x|D) = \mathbb{E}_{w|D}[H(x, w)] = -\sum_{c \in C} \mathbb{E}_{w|D}[f_c(x, w) \cdot \log f_c(x, w)],$$

which measures the mean predictive entropy across models in the posterior hypothesis space, as well as the mutual information or epistemic term, $H_e(x|D) = H(x|D) - H_a(x|D)$, which represents model uncertainty projected into the latent membership vector $\pi(x|D)$.
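The Monte Carlo estimates behind Eq. (2) and its aleatoric/epistemic decomposition are straightforward to compute from posterior samples. A minimal sketch; the toy probabilities are our own, chosen so that the resulting uncertainty is almost entirely epistemic (two confident but disagreeing posterior samples):

```python
import numpy as np

def uncertainty_decomposition(probs):
    """probs: array (N, C) of class probabilities f(x, w_i) from N posterior samples."""
    eps = 1e-12
    mean_p = probs.mean(axis=0)                     # Monte Carlo estimate of Eq. (1)
    total = -(mean_p * np.log(mean_p + eps)).sum()  # H(x|D), Eq. (2)
    # H_a = E_{w|D}[H(x, w)]: mean per-sample predictive entropy.
    aleatoric = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    epistemic = total - aleatoric                   # H_e = H - H_a (mutual information)
    return total, aleatoric, epistemic

# Two posterior samples, each confident, but in opposite classes.
probs = np.array([[0.99, 0.01],
                  [0.01, 0.99]])
total, ha, he = uncertainty_decomposition(probs)
# total is close to log 2, while most of it is epistemic (he >> ha).
```

The decomposition always satisfies $H_e \ge 0$ since mutual information is non-negative; in the toy example the disagreement between samples drives $H_e$ up while each sample's own entropy ($H_a$) stays small.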
Intuitively, aleatoric uncertainty represents natural stochastic variation in the observations over repeated experiments; on the other hand, epistemic uncertainty is descriptive of model unknowns due to inadequate data or inappropriate modelling choices. 2.1 PATH INTEGRATED GRADIENTS. Path integrated gradients (IG) (Sundararajan et al., 2017) is a simple and popular method for importance attributions that differs from conventional feature removal and permutation techniques (Covert et al., 2020), and is primarily targeted at image processing tasks. It is a practical and easy-to-implement alternative to layer-wise relevance propagation (Montavon et al., 2019) or DeepLift (Shrikumar et al., 2017); it retains desired properties including sensitivity and implementation invariance, and has been extended in several adaptations (Smilkov et al., 2017; Jha et al., 2020). Given a classifier $f(\cdot, w)$ along with a feature vector $x$, path IG explains a model score $f(x, w)$ using an alternative fiducial vector $x^0$ as a reference, which is presumably not associated with any class observed in the training data. (Footnote 1: Source code for reproducing our results can be found at the following link: [removed for blind review].) The attributed importance at index or pixel $i$ is given by

$$\mathrm{IG}^\delta_i(x|w) = \int_0^1 \frac{\partial f(\delta(\alpha), w)}{\partial \delta_i(\alpha)}\, \frac{\partial \delta_i(\alpha)}{\partial \alpha}\, d\alpha \quad \text{so that} \quad \sum_i \mathrm{IG}_i(x|w) = f(x, w) - f(x^0, w).$$

The result follows from the fundamental theorem of calculus for line integrals, known as the gradient theorem. Here, $\delta : [0, 1] \to \mathbb{R}^n$ represents a curve with endpoints at $\delta(0) = x^0$ and $\delta(1) = x$. In vanilla IG, $\delta$ is parametrized as a straight path between fiducial and image feature vectors, i.e. $\delta(\alpha) = x^0 + \alpha(x - x^0)$, and the above simplifies to

$$\mathrm{IG}_i(x|w) = (x_i - x^0_i) \times \int_0^1 \frac{\partial f(x^0 + \alpha(x - x^0), w)}{\partial x_i}\, d\alpha,$$

so that importances are heavily influenced by differences in pixel values between $x$ and $x^0$.
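The straight-path form above is easy to check numerically: discretizing the integral (here with a midpoint rule) recovers the completeness property $\sum_i \mathrm{IG}_i = f(x) - f(x^0)$ up to discretization error. The sigmoid score and its closed-form gradient below are toy stand-ins for a trained classifier, used only to make the check self-contained:

```python
import numpy as np

def integrated_gradients(f, grad_f, x, x0, steps=256):
    """Straight-path IG: IG_i = (x_i - x0_i) * integral of df/dx_i along x0 + a*(x - x0)."""
    alphas = (np.arange(steps) + 0.5) / steps   # midpoint rule in alpha
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(x0 + a * (x - x0))
    return (x - x0) * total / steps

# Toy smooth score f(x) = sigmoid(w . x), with its gradient in closed form.
w = np.array([0.7, -1.3, 0.4])
f = lambda x: 1.0 / (1.0 + np.exp(-w @ x))
grad_f = lambda x: f(x) * (1 - f(x)) * w

x, x0 = np.array([1.0, 2.0, -1.0]), np.zeros(3)
ig = integrated_gradients(f, grad_f, x, x0)
# Completeness: ig.sum() matches f(x) - f(x0) up to the discretization error.
```

In practice the gradient would come from automatic differentiation rather than a closed form, and in the Bayesian setting of this paper the integrand is additionally averaged over posterior weight samples.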
However, a straight line often transitions the path $x^0 \rightsquigarrow x$ out of distribution, i.e. outside the data manifold (Jha et al., 2020; Adebayo et al., 2020). Also, the choice of fiducial is considered problematic (Sundararajan et al., 2017) and generally defaults to a black background. 2.2 INTEGRATED GRADIENTS WITH UNCERTAINTY. Commonly, the classifier $f(\cdot, w)$ is presumed to be binary (Sundararajan et al., 2017) with model scores constrained to the interval $[0, 1]$. However, the above logic for importance attribution generalizes easily to multi-class Bayesian settings in the presence of uncertainty. Here, the posterior predictive classifier $\pi(x|D)$ introduced in (1) admits a path IG importance at index $i$ given by

$$\mathrm{IG}^\delta_i(x) = \int_0^1 \mathbb{E}_{w|D}\!\left[\frac{\partial f(\delta(\alpha), w)}{\partial \delta_i(\alpha)}\right] \frac{\partial \delta_i(\alpha)}{\partial \alpha}\, d\alpha,$$

which represents a mean-average trajectory over the curve $\delta$ and follows from dominated convergence. Most importantly, we may employ path IG to explain univariate measures of uncertainty from the posterior predictive distribution over classes, as observed in Figure 1. Here,

$$\mathrm{IG}\text{-}H^\delta_i(x) = -\sum_{c \in C} \int_0^1 \Delta_i(\alpha)\, \frac{\partial \delta_i(\alpha)}{\partial \alpha}\, d\alpha \qquad (3)$$

captures variations in predictions covering multiple classes. With

$$\Delta_i(\alpha) = \big(1 + \log \mathbb{E}_{w|D}[f_c(\delta(\alpha), w)]\big) \cdot \mathbb{E}_{w|D}\!\left[\frac{\partial f_c(\delta(\alpha), w)}{\partial \delta_i(\alpha)}\right],$$

it attributes importances for the change in (total) entropy between a fiducial point and a feature vector, while

$$\Delta_i(\alpha) = \mathbb{E}_{w|D}\!\left[\big(1 + \log f_c(\delta(\alpha), w)\big) \cdot \frac{\partial f_c(\delta(\alpha), w)}{\partial \delta_i(\alpha)}\right]$$

is the analogue representation restricted to the aleatoric term. Any variation in epistemic uncertainty is readily explained as the difference in importances between the above two terms. In Figure 1, the goal is not to understand why the classifier suggests that the picture depicts a dog; instead, we seek to understand why the model struggles to predict any single class with confidence, and we notice that the leash and a human hand are problematic.
Further examples may be found in Appendix A; in all cases, importances have been smoothed with a Gaussian filter, averaging over positive (increasing uncertainty) and negative (decreasing uncertainty) contributions. The attributions are easily computed by standard Bayesian procedures, approximating the inner expectations with simulations. However, the choice of fiducial (a black screen) and the out-of-distribution path remain controversial, and they pose a significant challenge to an intuitive understanding of the predictive uncertainties in our model, which may represent a barrier in applications (Antoran et al., 2021). 3 METHODOLOGY. Next, we describe the computational process summarized in Algorithm 1, which produces novel in-distribution attributions of uncertainty for a feature vector $x$ and predictive posterior $\pi(x|D)$ in (1). We do so through the use of a counterfactual fiducial bearing no relation to causal inference (Pearl, 2010). This counterfactual is an alternative vector $x^0$ defined similarly to CLUEs in Antoran et al. (2021), i.e. (i) in distribution and (ii) close to $x$ according to some arbitrary distance metric. However, we furthermore require that the class distribution $\pi(x^0|D)$ bears close to zero predictive uncertainty. Intuitively, we construct IG attributions using finely tuned fiducial points, by comparing ambiguous images to easily predicted counterparts that bear a significant resemblance. Algorithm 1: Uncertainty attributions. Input: feature vector $x$, predictive posterior $\pi(\cdot|D)$ and uncertainty estimator $H(\cdot)$; distance metric $d(\cdot,\cdot)$, VAE encoder $\phi(\cdot)$ and decoder $\psi(\cdot)$; penalty $\lambda \gg 0$ and learning rate $\nu > 0$. Output: uncertainty attributions $\mathrm{IG}\text{-}H_i(x)$, $i = 1, \ldots, n$.
Initialise $z^0 = z = \phi_\mu(x)$; compute the predicted class $\hat{c} = \arg\max_i \pi_i(x|D)$. While $\mathcal{L}$ has not converged: $\mathcal{L} \leftarrow d(\psi(z^0), x) + \frac{1}{2m}\sum_j (z^0_j)^2 + \lambda \log \pi_{\hat{c}}(\psi(z^0)|D)$ and $z^0 \leftarrow z^0 - \nu \nabla_{z^0}\mathcal{L}$. While $\mathcal{L}$ has not converged: $\mathcal{L} \leftarrow d(\psi(z), x) + \frac{1}{2m}\sum_j z_j^2$ and $z \leftarrow z - \nu \nabla_z \mathcal{L}$. Approximate $\mathrm{IG}\text{-}H^\delta_i(x)$, $i = 1, \ldots, n$ in (5) along $\delta_{z^0 \to z}$ through trapezoidal integration. To begin with, we assume the existence of a generative variational auto-encoder (VAE) composed of an encoder $\phi : \mathbb{R}^n \to \mathbb{R}^m$ and decoder $\psi : \mathbb{R}^m \to \mathbb{R}^n$. As customary, the data-generating process in $\mathbb{R}^m$ is unit-Gaussian with an arbitrary dimensionality $m \ll n$. | In this paper, the authors consider the question of how to explain model uncertainty in terms of the input features. Whereas prior work (e.g., CLUE) uses a counterfactual approach, the authors develop an attribution-based approach (assigning a score to each feature). The particular method is built on Integrated Gradients (IG): a baseline is identified that resembles the input but has no prediction uncertainty, and a non-straight-line path is followed by traversing a VAE's latent space. The experiments suggest that the proposed method (IG-H) has some advantages relative to CLUE, as well as simple adaptations of IG/LIME/SHAP to uncertainty attribution. | SP:be6f343222debbb09755836c9adb2fab80f8c328 |
Path Integrals for the Attribution of Model Uncertainties | 1 INTRODUCTION . Estimating and understanding model uncertainties is of key importance in Bayesian inferential settings , which find applications in domains as diverse as natural language processing ( Xiao & Wang , 2019 ) , stochastic processes ( Rao & Teg , 2013 ) , network analysis ( Perez et al. , 2018 ) or image processing ( Kendall & Gal , 2017 ) , to name only a few . In contrast with model scores , model uncertainties manifest aspects of a system or data generating process that are not exactly known ( Hüllermeier & Waegeman , 2021 ) , and can be decomposed across aleatoric and epistemic components that help scrutinize different aspects in the functioning of a model , and can facilitate interpretability or fairness assessments in important machine learning applications ( Awasthi et al. , 2021 ) . Recently , there has been a growing interest in the study of methods for uncertainty estimation and decomposition ( e.g . Depeweg et al. , 2018 ; Smith & Gal , 2018 ; Van Amersfoort et al. , 2020 ; Tuna et al. , 2021 ) for purposes such as procuring adversarial examples , active learning or out-of-distribution detection . Most importantly , recent work has proposed counterfactual mechanisms for the interpretability of model uncertainties ( Van Looveren & Klaise , 2019 ; Antoran et al. , 2021 ; Schut et al. , 2021 ) , as well as their attribution to individual input features , such as pixels in an image . These methods proceed by identifying small adversarial or in-distribution variations in the raw input , s.t . predictive uncertainties in a model output are reduced . Then , attributions to individual pixels , words or categories are commonly assigned by direct comparison . 
This can facilitate the understanding of the strengths and weaknesses of varied probabilistic models; however, the optimization task to produce such counterfactuals requires a good balance between reducing uncertainties and minimising changes to original features, which is hard to achieve in practice. Most importantly, these methods do not satisfy commonly desired properties associated with modern importance attribution techniques (Sundararajan et al., 2017), such as completeness or implementation invariance. In this paper, we present a novel framework for the attribution of predictive uncertainties, applicable to Bayesian differentiable models. We leverage path integrals (Sundararajan et al., 2017) along with in-distribution curves (Jha et al., 2020), and we propose aggregating attributions over paths starting at a reference counterfactual which bears no predictive uncertainty. We ensure that completeness and additional desirable properties are satisfied; hence, uncertainties are completely explained by (and decomposed over) pixels in an image. We validate our approach by direct comparison with recently introduced counterfactual CLUE explanations (Antoran et al., 2021), as well as popular interpretability methods such as integrated gradients (Sundararajan et al., 2017), LIME (Ribeiro et al., 2016) or kernelSHAP (Lundberg & Lee, 2017), which we adapt for the attribution of predictive uncertainties. Experiments on benchmark image data sets show that, in comparison to competing alternatives, our proposed method offers sparse and easily interpretable attributions, always limited to relevant super-pixels. Thus, we offer improved means to understand the interplay between raw inputs and aleatoric/epistemic uncertainties in deep models. 2 UNCERTAINTY ATTRIBUTIONS. We focus our presentation on a classification task with a neural classifier $f : \mathbb{R}^n \times \mathcal{W} \to \Delta^{|C|-1}$ of a fixed architecture.
The classifier maps feature vectors $x \in \mathbb{R}^n$ along with network weights $w \in \mathcal{W}$ to an element in the standard $(|C|-1)$-simplex, which represents membership probabilities across classes in a set $C$. On training $f$ within an (approximate) Bayesian setting, we commonly obtain a posterior over the hypothesis space of models, i.e. a distribution $\pi(w|D)$ over weights conditioned on the available training data $D = \{x_i, c_i\}_{i=1,2,\ldots}$. Popular approaches to procure such a posterior often differ in how they incorporate prior knowledge, and include dropout (Srivastava et al., 2014), Bayes-by-Backprop (Blundell et al., 2015) or SG-HMC (Springenberg et al., 2016). A model score for classification with a new data point $x^\star \in \mathbb{R}^n$ is derived from the posterior predictive distribution by marginalising over posterior weights, i.e.

$$\pi(x^\star|D) = \int_{\mathcal{W}} f(x^\star, w)\, \pi(w|D)\, dw = \mathbb{E}_{w|D}[f(x^\star, w)], \qquad (1)$$

and is easily approximated as $\frac{1}{N}\sum_{i=1}^{N} f(x^\star, w_i)$, with weight samples $w_i \sim \pi(w|D)$, $i = 1, \ldots, N$. In the following, we are concerned with the entropy as a measure of uncertainty:

$$H(x|D) = -\sum_{c \in C} \mathbb{E}_{w|D}[f_c(x, w)] \cdot \log \mathbb{E}_{w|D}[f_c(x, w)], \qquad (2)$$

where $f_c(x, w)$ represents the probability of class-$c$ membership. Remark. Concepts in this paper extend trivially to varied representations of uncertainty in classification and regression settings; details are omitted for simplicity of presentation. The entropy term in (2) may further be decomposed through the law of iterated variances (Kendall & Gal, 2017) so as to yield an aleatoric term

$$H_a(x|D) = \mathbb{E}_{w|D}[H(x, w)] = -\sum_{c \in C} \mathbb{E}_{w|D}[f_c(x, w) \cdot \log f_c(x, w)],$$

which measures the mean predictive entropy across models in the posterior hypothesis space, as well as the mutual information or epistemic term, $H_e(x|D) = H(x|D) - H_a(x|D)$, which represents model uncertainty projected into the latent membership vector $\pi(x|D)$.
Intuitively, aleatoric uncertainty represents natural stochastic variation in the observations over repeated experiments; on the other hand, epistemic uncertainty is descriptive of model unknowns due to inadequate data or inappropriate modelling choices. 2.1 PATH INTEGRATED GRADIENTS. Path integrated gradients (IG) (Sundararajan et al., 2017) is a simple and popular method for importance attributions that differs from conventional feature removal and permutation techniques (Covert et al., 2020), and is primarily targeted at image processing tasks. It is a practical and easy-to-implement alternative to layer-wise relevance propagation (Montavon et al., 2019) or DeepLift (Shrikumar et al., 2017); it retains desired properties including sensitivity and implementation invariance, and has been extended in several adaptations (Smilkov et al., 2017; Jha et al., 2020). Given a classifier $f(\cdot, w)$ along with a feature vector $x$, path IG explains a model score $f(x, w)$ using an alternative fiducial vector $x^0$ as a reference, which is presumably not associated with any class observed in the training data. (Footnote 1: Source code for reproducing our results can be found at the following link: [removed for blind review].) The attributed importance at index or pixel $i$ is given by

$$\mathrm{IG}^\delta_i(x|w) = \int_0^1 \frac{\partial f(\delta(\alpha), w)}{\partial \delta_i(\alpha)}\, \frac{\partial \delta_i(\alpha)}{\partial \alpha}\, d\alpha \quad \text{so that} \quad \sum_i \mathrm{IG}_i(x|w) = f(x, w) - f(x^0, w).$$

The result follows from the fundamental theorem of calculus for line integrals, known as the gradient theorem. Here, $\delta : [0, 1] \to \mathbb{R}^n$ represents a curve with endpoints at $\delta(0) = x^0$ and $\delta(1) = x$. In vanilla IG, $\delta$ is parametrized as a straight path between fiducial and image feature vectors, i.e. $\delta(\alpha) = x^0 + \alpha(x - x^0)$, and the above simplifies to

$$\mathrm{IG}_i(x|w) = (x_i - x^0_i) \times \int_0^1 \frac{\partial f(x^0 + \alpha(x - x^0), w)}{\partial x_i}\, d\alpha,$$

so that importances are heavily influenced by differences in pixel values between $x$ and $x^0$.
However, a straight line often transitions the path $x^0 \rightsquigarrow x$ out of distribution, i.e. outside the data manifold (Jha et al., 2020; Adebayo et al., 2020). Also, the choice of fiducial is considered problematic (Sundararajan et al., 2017) and generally defaults to a black background. 2.2 INTEGRATED GRADIENTS WITH UNCERTAINTY. Commonly, the classifier $f(\cdot, w)$ is presumed to be binary (Sundararajan et al., 2017) with model scores constrained to the interval $[0, 1]$. However, the above logic for importance attribution generalizes easily to multi-class Bayesian settings in the presence of uncertainty. Here, the posterior predictive classifier $\pi(x|D)$ introduced in (1) admits a path IG importance at index $i$ given by

$$\mathrm{IG}^\delta_i(x) = \int_0^1 \mathbb{E}_{w|D}\!\left[\frac{\partial f(\delta(\alpha), w)}{\partial \delta_i(\alpha)}\right] \frac{\partial \delta_i(\alpha)}{\partial \alpha}\, d\alpha,$$

which represents a mean-average trajectory over the curve $\delta$ and follows from dominated convergence. Most importantly, we may employ path IG to explain univariate measures of uncertainty from the posterior predictive distribution over classes, as observed in Figure 1. Here,

$$\mathrm{IG}\text{-}H^\delta_i(x) = -\sum_{c \in C} \int_0^1 \Delta_i(\alpha)\, \frac{\partial \delta_i(\alpha)}{\partial \alpha}\, d\alpha \qquad (3)$$

captures variations in predictions covering multiple classes. With

$$\Delta_i(\alpha) = \big(1 + \log \mathbb{E}_{w|D}[f_c(\delta(\alpha), w)]\big) \cdot \mathbb{E}_{w|D}\!\left[\frac{\partial f_c(\delta(\alpha), w)}{\partial \delta_i(\alpha)}\right],$$

it attributes importances for the change in (total) entropy between a fiducial point and a feature vector, while

$$\Delta_i(\alpha) = \mathbb{E}_{w|D}\!\left[\big(1 + \log f_c(\delta(\alpha), w)\big) \cdot \frac{\partial f_c(\delta(\alpha), w)}{\partial \delta_i(\alpha)}\right]$$

is the analogue representation restricted to the aleatoric term. Any variation in epistemic uncertainty is readily explained as the difference in importances between the above two terms. In Figure 1, the goal is not to understand why the classifier suggests that the picture depicts a dog; instead, we seek to understand why the model struggles to predict any single class with confidence, and we notice that the leash and a human hand are problematic.
Further examples may be found in Appendix A; in all cases, importances have been smoothed with a Gaussian filter, averaging over positive (increasing uncertainty) and negative (decreasing uncertainty) contributions. The attributions are easily computed by standard Bayesian procedures, approximating the inner expectations with simulations. However, the choice of fiducial (a black screen) and the out-of-distribution path remain controversial, and they pose a significant challenge to an intuitive understanding of the predictive uncertainties in our model, which may represent a barrier in applications (Antoran et al., 2021). 3 METHODOLOGY. Next, we describe the computational process summarized in Algorithm 1, which produces novel in-distribution attributions of uncertainty for a feature vector $x$ and predictive posterior $\pi(x|D)$ in (1). We do so through the use of a counterfactual fiducial bearing no relation to causal inference (Pearl, 2010). This counterfactual is an alternative vector $x^0$ defined similarly to CLUEs in Antoran et al. (2021), i.e. (i) in distribution and (ii) close to $x$ according to some arbitrary distance metric. However, we furthermore require that the class distribution $\pi(x^0|D)$ bears close to zero predictive uncertainty. Intuitively, we construct IG attributions using finely tuned fiducial points, by comparing ambiguous images to easily predicted counterparts that bear a significant resemblance. Algorithm 1: Uncertainty attributions. Input: feature vector $x$, predictive posterior $\pi(\cdot|D)$ and uncertainty estimator $H(\cdot)$; distance metric $d(\cdot,\cdot)$, VAE encoder $\phi(\cdot)$ and decoder $\psi(\cdot)$; penalty $\lambda \gg 0$ and learning rate $\nu > 0$. Output: uncertainty attributions $\mathrm{IG}\text{-}H_i(x)$, $i = 1, \ldots, n$.
Initialise $z^0 = z = \phi_\mu(x)$; compute the predicted class $\hat{c} = \arg\max_i \pi_i(x|D)$. While $\mathcal{L}$ has not converged: $\mathcal{L} \leftarrow d(\psi(z^0), x) + \frac{1}{2m}\sum_j (z^0_j)^2 + \lambda \log \pi_{\hat{c}}(\psi(z^0)|D)$ and $z^0 \leftarrow z^0 - \nu \nabla_{z^0}\mathcal{L}$. While $\mathcal{L}$ has not converged: $\mathcal{L} \leftarrow d(\psi(z), x) + \frac{1}{2m}\sum_j z_j^2$ and $z \leftarrow z - \nu \nabla_z \mathcal{L}$. Approximate $\mathrm{IG}\text{-}H^\delta_i(x)$, $i = 1, \ldots, n$ in (5) along $\delta_{z^0 \to z}$ through trapezoidal integration. To begin with, we assume the existence of a generative variational auto-encoder (VAE) composed of an encoder $\phi : \mathbb{R}^n \to \mathbb{R}^m$ and decoder $\psi : \mathbb{R}^m \to \mathbb{R}^n$. As customary, the data-generating process in $\mathbb{R}^m$ is unit-Gaussian with an arbitrary dimensionality $m \ll n$. | The authors propose an extension of the existing path integrated gradients (IG) approach for the attribution of uncertainties of Bayesian models for image classification tasks. IG constructs straight paths between a fiducial image with high predictive entropy and the given image and integrates the uncertainty contributions along this path. The authors argue that this method suffers from the limitation that the paths may pass through images outside the data manifold, in particular because the proper choice of fiducial image is unclear and thus often defaults to a black image. To alleviate these issues, the authors assume the existence of a variational autoencoder model trained on the dataset with which they aim to generate in-distribution paths: they define the path as the decoder image of a straight path in latent space, the endpoints of which are defined by optimization problems keeping both of them in-distribution (and the fiducial image close to an image of the same class with low predictive entropy). In experiments on MNIST, Fashion-MNIST, and CelebA the authors compare their method to existing approaches and argue that their method yields more easily interpretable results. | SP:be6f343222debbb09755836c9adb2fab80f8c328 |
Path Integrals for the Attribution of Model Uncertainties | 1 INTRODUCTION . Estimating and understanding model uncertainties is of key importance in Bayesian inferential settings , which find applications in domains as diverse as natural language processing ( Xiao & Wang , 2019 ) , stochastic processes ( Rao & Teg , 2013 ) , network analysis ( Perez et al. , 2018 ) or image processing ( Kendall & Gal , 2017 ) , to name only a few . In contrast with model scores , model uncertainties manifest aspects of a system or data generating process that are not exactly known ( Hüllermeier & Waegeman , 2021 ) , and can be decomposed across aleatoric and epistemic components that help scrutinize different aspects in the functioning of a model , and can facilitate interpretability or fairness assessments in important machine learning applications ( Awasthi et al. , 2021 ) . Recently , there has been a growing interest in the study of methods for uncertainty estimation and decomposition ( e.g . Depeweg et al. , 2018 ; Smith & Gal , 2018 ; Van Amersfoort et al. , 2020 ; Tuna et al. , 2021 ) for purposes such as procuring adversarial examples , active learning or out-of-distribution detection . Most importantly , recent work has proposed counterfactual mechanisms for the interpretability of model uncertainties ( Van Looveren & Klaise , 2019 ; Antoran et al. , 2021 ; Schut et al. , 2021 ) , as well as their attribution to individual input features , such as pixels in an image . These methods proceed by identifying small adversarial or in-distribution variations in the raw input , s.t . predictive uncertainties in a model output are reduced . Then , attributions to individual pixels , words or categories are commonly assigned by direct comparison . 
This can facilitate the understanding of the strengths and weaknesses of varied probabilistic models , however , the optimization task to produce such counterfactuals requires a good balance between reducing uncertainties and minimising changes to original features , which is hard to achieve in practice . Most importantly , these methods do not satisfy commonly desired properties associated with modern importance attribution techniques ( Sundararajan et al. , 2017 ) , such as completeness or implementation invariance . In this paper , we present a novel framework for the attribution of predictive uncertainties , applicable to Bayesian differentiable models . We leverage path integrals ( Sundararajan et al. , 2017 ) along with in-distribution curves ( Jha et al. , 2020 ) , and we propose aggregating attributions over paths starting at a reference counterfactual which bears no predictive uncertainty . We ensure that completeness and additional desirable properties are satisfied , hence , uncertainties are completely explained by ( and decomposed over ) pixels in an image . We validate our approach by direct comparison with recently introduced counterfactual CLUE explanations ( Antoran et al. , 2021 ) , as well as popular interpretability methods , such as integrated gradients ( Sundararajan et al. , 2017 ) , LIME ( Ribeiro et al. , 2016 ) or kernelSHAP ( Lundberg & Lee , 2017 ) , which we adapt for the attribution of predictive uncertainties . Experiments1on benchmark image data sets show that , in comparison to competing alternatives , our proposed method offers sparse and easily interpretable attributions , always limited to relevant super-pixels . Thus , we offer improved means to understand the interplay between raw inputs and aleatoric/epistemic uncertainties in deep models . 2 UNCERTAINTY ATTRIBUTIONS . We focus our presentation on a classification task with a neural classifier f : Rn×W → ∆|C|−1 of a fixed architecture . 
The classifier maps feature vectors $x \in \mathbb{R}^n$ along with network weights $w \in W$ to an element in the standard $(|C|-1)$-simplex , which represents membership probabilities across classes in a set $C$ . On training $f$ within an ( approximate ) Bayesian setting , we commonly obtain a posterior over the hypothesis space of models , i.e . a distribution $\pi(w \mid D)$ over weights conditioned on the available training data $D = \{x_i, c_i\}_{i=1,2,\dots}$ . Popular approaches to procure such a posterior often differ in how they incorporate prior knowledge and include dropout ( Srivastava et al. , 2014 ) , Bayes-by-Backprop ( Blundell et al. , 2015 ) or SG-HMC ( Springenberg et al. , 2016 ) . A model score for classification with a new data point $x^\star \in \mathbb{R}^n$ is derived from the posterior predictive distribution by marginalising over posterior weights , i.e . $\pi(x^\star \mid D) = \int_W f(x^\star, w)\, \pi(w \mid D)\, dw = \mathbb{E}_{w \mid D}[f(x^\star, w)]$ , ( 1 ) and is easily approximated as $\frac{1}{N}\sum_{i=1}^{N} f(x^\star, w_i)$ , with weight samples $w_i \sim \pi(w \mid D)$ , $i = 1, \dots, N$ . In the following , we are concerned with the entropy as a measure of uncertainty : $H(x \mid D) = -\sum_{c \in C} \mathbb{E}_{w \mid D}[f_c(x, w)] \cdot \log \mathbb{E}_{w \mid D}[f_c(x, w)]$ ( 2 ) where $f_c(x, w)$ represents the probability of class-$c$ membership . Remark . Concepts in this paper trivially extend to varied representations of uncertainty in classification and regression settings . Details are omitted for simplicity of presentation . The entropy term in ( 2 ) may further be decomposed through the law of iterated variances ( Kendall & Gal , 2017 ) so as to yield an aleatoric term $H_a(x \mid D) = \mathbb{E}_{w \mid D}[H(x, w)] = -\sum_{c \in C} \mathbb{E}_{w \mid D}[f_c(x, w) \cdot \log f_c(x, w)]$ , which measures the mean predictive entropy across models in the posterior hypothesis space , as well as the mutual information or epistemic term $H_e(x \mid D) = H(x \mid D) - H_a(x \mid D)$ , which represents model uncertainty projected into the latent membership vector $\pi(x \mid D)$ .
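The Monte Carlo approximation of ( 1 ) and the entropy decomposition above can be sketched in a few lines of NumPy . This is an illustrative sketch rather than the authors ' code ; `entropy_decomposition` is a hypothetical helper name , and the input is assumed to be the stacked softmax outputs f ( x , wi ) for N posterior weight samples :

```python
import numpy as np

def entropy_decomposition(probs):
    """Decompose predictive entropy from Monte Carlo samples.

    probs: array of shape (N, C) -- softmax outputs f(x, w_i) for N
    posterior weight samples w_i ~ pi(w|D) over C classes.
    Returns (total, aleatoric, epistemic) entropy estimates.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)                     # E_{w|D}[f_c(x, w)]
    total = -np.sum(mean_p * np.log(mean_p + eps))  # H(x|D), eq. (2)
    # Aleatoric term H_a(x|D): mean entropy across posterior models
    per_model = -np.sum(probs * np.log(probs + eps), axis=1)
    aleatoric = per_model.mean()
    # Epistemic term H_e(x|D) is the remaining mutual information
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```

When all posterior samples agree , the epistemic term vanishes ; when individually confident models disagree , the aleatoric term vanishes and the uncertainty is entirely epistemic .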
Intuitively , aleatoric uncertainty represents natural stochastic variation in the observations over repeated experiments ; on the other hand , epistemic uncertainty is descriptive of model unknowns due to inadequate data or inappropriate modelling choices . 2.1 PATH INTEGRATED GRADIENTS . Path integrated gradients ( IG ) ( Sundararajan et al. , 2017 ) is a simple and popular method for importance attribution that differs from conventional feature removal and permutation techniques ( Covert et al. , 2020 ) , and is primarily targeted at image processing tasks . It is a practical and easy-to-implement alternative to layer-wise relevance propagation ( Montavon et al. , 2019 ) or DeepLift ( Shrikumar et al. , 2017 ) ; it retains desired properties including sensitivity and implementation invariance , and has been extended in several adaptations ( Smilkov et al. , 2017 ; Jha et al. , 2020 ) . Given a classifier $f(\cdot, w)$ along with a feature vector $x$ , path IG explains a model score $f(x, w)$ using an alternative fiducial vector $x^0$ as a reference , which is presumably not associated with any class observed in the training data . ( Source code for reproducing our results can be found at the following link : [ removed for blind review ] . ) The attributed importance at index or pixel $i$ is given by $\mathrm{IG}^{\delta}_i(x \mid w) = \int_0^1 \frac{\partial f(\delta(\alpha), w)}{\partial \delta_i(\alpha)} \frac{\partial \delta_i(\alpha)}{\partial \alpha}\, d\alpha$ , so that $\sum_i \mathrm{IG}_i(x \mid w) = f(x, w) - f(x^0, w)$ . The result follows from the fundamental theorem of calculus for line integrals , known as the gradient theorem . Here , $\delta : [0, 1] \to \mathbb{R}^n$ represents a curve with endpoints at $\delta(0) = x^0$ and $\delta(1) = x$ . In vanilla IG , $\delta$ is parametrized as a straight path between fiducial and image feature vectors , i.e . $\delta(\alpha) = x^0 + \alpha(x - x^0)$ , and the above simplifies to $\mathrm{IG}_i(x \mid w) = (x_i - x^0_i) \times \int_0^1 \frac{\partial f(x^0 + \alpha(x - x^0), w)}{\partial x_i}\, d\alpha$ , so that importances are heavily influenced by differences in pixel values between $x$ and $x^0$ .
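Under the straight-path parametrization , the simplified formula above amounts to scaling the average gradient along the path by ( xi − x0i ) . A minimal sketch , assuming the gradient of the scalar model score is available as a callable ( `grad_f` is a hypothetical name ) and approximating the integral with a midpoint Riemann sum :

```python
import numpy as np

def integrated_gradients(grad_f, x, x0, steps=50):
    """Vanilla IG along the straight path delta(a) = x0 + a * (x - x0).

    grad_f: callable returning the gradient of the scalar model score
    at a point (an array shaped like x).
    Attributions approximately satisfy completeness: they sum to
    f(x) - f(x0).
    """
    alphas = (np.arange(steps) + 0.5) / steps   # midpoint rule on (0, 1)
    total_grad = np.zeros_like(x, dtype=float)
    for a in alphas:
        total_grad += grad_f(x0 + a * (x - x0))
    avg_grad = total_grad / steps
    return (x - x0) * avg_grad                  # IG_i(x | w)
```

For a toy score such as f ( v ) = v1² + 3 v2 with a zero fiducial , the attributions sum to f ( x ) − f ( x0 ) , as the gradient theorem prescribes .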
However , a straight line often transitions the path $x^0 \rightsquigarrow x$ out-of-distribution or outside the data manifold ( Jha et al. , 2020 ; Adebayo et al. , 2020 ) . Also , the fiducial choice is considered problematic ( Sundararajan et al. , 2017 ) and generally defaults to a black background . 2.2 INTEGRATED GRADIENTS WITH UNCERTAINTY . Commonly , the classifier $f(\cdot, w)$ is presumed to be binary ( Sundararajan et al. , 2017 ) with model scores constrained to the interval $[0, 1]$ . However , the above logic for importance attribution does easily generalize to multi-class Bayesian settings in the presence of uncertainty . Here , the posterior predictive classifier $\pi(x \mid D)$ introduced in ( 1 ) accepts a path IG importance at index $i$ given by $\mathrm{IG}^{\delta}_i(x) = \int_0^1 \mathbb{E}_{w \mid D}\!\left[\frac{\partial f(\delta(\alpha), w)}{\partial \delta_i(\alpha)}\right] \frac{\partial \delta_i(\alpha)}{\partial \alpha}\, d\alpha$ , which represents a mean-average trajectory over the curve $\delta$ and follows from dominated convergence . Most importantly , we may employ path IG to explain univariate measures of uncertainty from the posterior predictive distribution over classes , as observed in Figure 1 . Here , $\mathrm{IG}\text{-}H^{\delta}_i(x) = -\sum_{c \in C} \int_0^1 \Delta_i(\alpha)\, \frac{\partial \delta_i(\alpha)}{\partial \alpha}\, d\alpha$ ( 3 ) captures variations in predictions covering multiple classes , and is defined s.t . $\Delta_i(\alpha) = \left(1 + \log \mathbb{E}_{w \mid D}[f_c(\delta(\alpha), w)]\right) \cdot \mathbb{E}_{w \mid D}\!\left[\frac{\partial f_c(\delta(\alpha), w)}{\partial \delta_i(\alpha)}\right]$ attributes importances for the change in entropy between a fiducial point and a feature vector , and $\Delta_i(\alpha) = \mathbb{E}_{w \mid D}\!\left[\left(1 + \log f_c(\delta(\alpha), w)\right) \cdot \frac{\partial f_c(\delta(\alpha), w)}{\partial \delta_i(\alpha)}\right]$ is the analogue representation restricted to the aleatoric term . Any variation in epistemic uncertainty is readily shown to be explained as the difference in importances between the above two terms . In Figure 1 , the goal is not to understand why the classifier suggests this picture refers to a dog ; instead , we comprehend why the model struggles to predict any single class with confidence , and we notice that the leash and a human hand are problematic .
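The uncertainty attribution in ( 3 ) integrates entropy gradients along the curve δ . The following numerical stand-in assumes only black-box access to the posterior-mean class probabilities ( `probs_fn` is a hypothetical callable , not part of the paper ) and uses central finite differences in place of the closed-form integrand ; it illustrates that the attributions decompose the entropy change between a fiducial point and a feature vector :

```python
import numpy as np

def entropy_of(probs_fn, v):
    """Predictive entropy of the (posterior-mean) classifier at input v."""
    p = probs_fn(v)
    return float(-np.sum(p * np.log(p + 1e-12)))

def ig_entropy(probs_fn, x, x0, steps=100, h=1e-5):
    """Attribute the entropy change H(x) - H(x0) to input features by
    integrating finite-difference entropy gradients along the straight
    path from the fiducial x0 to x."""
    n = x.size
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.zeros(n)
    for a in alphas:
        v = x0 + a * (x - x0)
        for i in range(n):
            e = np.zeros(n)
            e[i] = h
            # central finite difference of the entropy in direction i
            avg_grad[i] += (entropy_of(probs_fn, v + e)
                            - entropy_of(probs_fn, v - e)) / (2.0 * h)
    avg_grad /= steps
    return (x - x0) * avg_grad
```

By the gradient theorem , the attributions sum ( up to discretization error ) to the entropy difference between x and the fiducial x0 , so completeness holds for the uncertainty measure as well .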
Further examples may be found in Appendix A ; in all cases , importances have been smoothed with a Gaussian filter , averaging over positive ( increasing uncertainty ) and negative ( decreasing uncertainty ) contributions . The attributions are easily computed by standard Bayesian procedures , approximating the inner expectations with simulations ; however , the choice of fiducial ( black screen ) and the out-of-distribution path remain controversial and pose a significant challenge to an intuitive understanding of predictive uncertainties in our model , which may represent a barrier in applications ( Antoran et al. , 2021 ) . 3 METHODOLOGY . Next , we describe the computational process summarized in Algorithm 1 , which produces novel in-distribution attributions of uncertainty for a feature vector x and predictive posterior π ( x|D ) in ( 1 ) . We do so through the use of a counterfactual fiducial , bearing no relation to causal inference ( Pearl , 2010 ) . This counterfactual is an alternative vector x0 defined similarly to CLUEs in Antoran et al . ( 2021 ) , i.e . ( i ) in distribution and ( ii ) close to x according to some arbitrary distance metric . However , we furthermore require that the class distribution π ( x0|D ) bears close to 0 predictive uncertainty . Intuitively , we construct IG attributions using finely tuned fiducial points , by comparing ambiguous images to easily predicted counterparts that bear a significant resemblance . Algorithm 1 : Uncertainty attributions . Input : feature vector x , predictive posterior π ( ·|D ) and uncertainty estimator H ( · ) ; distance metric d ( · , · ) , VAE encoder ϕ ( · ) and decoder ψ ( · ) ; penalty λ ≫ 0 and learning rate ν > 0 . Output : uncertainty attributions IG-Hi ( x ) , i = 1 , . . . , n .
Initialise $z_0 = z = \phi_\mu(x)$ ; compute the predicted class $\hat{c} = \arg\max_i \pi_i(x \mid D)$ . While $L$ is not converged : $L \leftarrow d(\psi(z_0), x) + \frac{1}{2m}\sum_j z_{0,j}^2 - \lambda \log \pi_{\hat{c}}(\psi(z_0) \mid D)$ and $z_0 \leftarrow z_0 - \nu \nabla_{z_0} L$ . While $L$ is not converged : $L \leftarrow d(\psi(z), x) + \frac{1}{2m}\sum_j z_j^2$ and $z \leftarrow z - \nu \nabla_z L$ . Finally , approximate $\mathrm{IG}\text{-}H^{\delta}_i(x)$ , $i = 1, \dots, n$ in ( 5 ) along the path $\delta_{z_0 \to z}$ through trapezoidal integration . To begin with , we assume the existence of a generative variational auto-encoder ( VAE ) composed of an encoder $\phi : \mathbb{R}^n \to \mathbb{R}^m$ and decoder $\psi : \mathbb{R}^m \to \mathbb{R}^n$ . As customary , the data-generating process in $\mathbb{R}^m$ is unit-Gaussian with an arbitrary dimensionality $m \ll n$ . | The authors propose a method to interpret model uncertainty for individual input examples. That is, it attempts to identify which input features (e.g. pixels of an image) are contributing to prediction uncertainty (measured as entropy in the multi-class prediction probabilities). To do this, for some query input, the proposed method applies integrated gradients between the input and a "fiducial" (i.e. high certainty) reference, which serves as a version of the input which is similar, but the network is very certain of its prediction class (the same prediction class as the query input). To find this fiducial reference, the method uses a pretrained variational autoencoder to find an input in the latent embedding space of high prediction certainty, that also lies closely on the latent embedding manifold. Integrated gradient interpolation is done linearly in this latent space. The authors show through several examples that the explanations for uncertainty are reasonable, and compare them to other methods like CLUE, LIME, and SHAP. | SP:be6f343222debbb09755836c9adb2fab80f8c328
Interpreting Molecule Generative Models for Interactive Molecule Discovery | 1 INTRODUCTION . Designing molecules with desired properties is a fundamental problem in chemistry , which has a variety of applications in drug discovery and material science ( Chen et al. , 2018 ) . Traditional pipelines require exhaustive human efforts and domain knowledge , which are difficult to scale up . Recent studies exploit deep generative models to solve this problem by encoding molecules into a latent space , from which random samples are drawn and decoded to novel molecules ( Walters & Barzilay , 2020 ) . It has been widely observed that such deep molecule generative models are able to facilitate the design and development of drugs and materials from many perspectives ( Lopez et al. , 2020 ; Sanchez-Lengeling & Aspuru-Guzik , 2018 ) . Despite the promising results of deep generative models for molecule generation , much less effort has been made to interpret the learned representations . Most of the existing models are based on deep neural networks , which are known to be short on interpretability ( Samek et al. , 2019 ) . Outside of the molecule generation domain , many attempts have been made to improve the interpretability of deep learning models from various aspects , e.g. , representation space ( Zhou et al. , 2016 ) , model space ( Guo et al. , 2021 ) , and latent space ( Shen et al. , 2020 ; Shen & Zhou , 2021 ) . In the molecule generation domain , interpretability can be studied in two ways : ( 1 ) the interpretation of the learned latent space , where steering the values of latent vectors could lead to smooth and continuous molecular property changes , and ( 2 ) the interpretation of the molecular space , where adjusting a molecular property should yield smooth structural changes in the molecules . In addition , it remains challenging to generate molecules with desired properties .
Previous works mostly rely on optimization-based , reinforcement learning-based , and searching-based methods to achieve property control of the generated molecules ( Shi et al. , 2020 ; Jin et al. , 2018a ) . Specifically , reinforcement learning-based algorithms ( You et al. , 2018a ) equip the model with rewards designed to encourage the molecule generative models to generate molecules with specific molecular properties . Optimization-based algorithms take advantage of the latent space learnt by molecule generative models and optimize the molecular properties via Bayesian optimization ( Liu et al. , 2018 ) . Searching-based algorithms instead search directly in the chemical space for molecules with optimal properties ( Kwon et al. , 2021 ) . However , these lines of work are designed for molecule generation with optimized properties and are thus unable to change a property monotonically and smoothly . Besides , current methods are confined to a limited number of molecular properties , which hinders real-world applications in drug discovery and material science . For example , existing work only considers a limited set of molecular properties , such as penalized logP ( octanol-water partition coefficient ) , QED ( drug-likeness ) , DRD2 activity , etc . ( Jin et al. , 2018a ; Shi et al. , 2020 ; Liu et al. , 2018 ; Fu et al. , 2020 ) . Consequently , when molecules with new properties are needed , the models must be re-trained with a different optimization goal , which is significantly time-consuming . To tackle the above challenges , we formulate a new task , molecule manipulation , which aims to improve the interpretability and steerability of a given molecule generative model via continuously manipulating molecular properties .
Based on the observation that molecules sharing similar structures/properties tend to cluster in the latent space , we develop MolSpace Explorer , a model-agnostic method to manipulate molecules with continuous changes of molecular properties . Specifically , MolSpace Explorer first identifies the property separation hyperplane , which defines the boundary for a molecular property ( e.g. , drug-like or drug-unlike ) in the latent molecular space learned by a given generative model . Based on the property separation hyperplane , we estimate the latent directions that govern molecular properties , which are in turn used to enable continuous change of the molecular structures and properties without re-training the given molecular generative model . To the best of our knowledge , this work is one of the earliest attempts to achieve interactive molecule discovery through the steering of pretrained generative models . The experiments demonstrate that our method can effectively quantify the interpretability and steerability of state-of-the-art molecule generative models . To measure the ability of generative models to interpret molecular properties and generate molecules with continuous property control , we design a new evaluation metric named success rate , which evaluates the percentage of manipulations , over a group of molecules , that yield continuously property-changing molecules . To visualize our method and facilitate interactive molecule discovery for scientists , we develop an interactive system with visualization of real-time molecule manipulations and smooth molecular structure/property changes . Our main contributions are summarized as follows : • We formulate molecule manipulation , a new task which measures the interpretability and steerability of molecule generative models via the ability to manipulate the molecular properties of molecules in the latent space .
• We develop a simple yet effective model-agnostic method named MolSpace Explorer for molecule manipulation , which further analyzes current molecule generative models in terms of their interpretability and steerability . • Comprehensive experiments demonstrate the effectiveness of our method in quantifying the interpretability and steerability of various molecule generative models . An interactive system is developed for real-time molecule manipulation . 2 RELATED WORK . Molecule Generation . Recent studies have explored a variety of deep generative models for molecule generation . Specifically , GrammarVAE ( Kusner et al. , 2017 ) designs a variational autoencoder-based model that represents molecules as SMILES strings . With the advancement of graph neural networks ( GNN ) , a surge of GNN-based generative models have been proposed to tackle the problem , by combining GNNs with variational autoencoders ( VAEs ) , generative adversarial networks ( GANs ) , normalizing flows , energy-based models ( EBMs ) , and reinforcement learning ( Olivecrona et al. , 2017 ; De Cao & Kipf , 2018 ; Jin et al. , 2018a ; Zhou et al. , 2019 ; Madhawa et al. , 2019 ; Shi et al. , 2020 ; Luo et al. , 2021 ; Liu et al. , 2021 ; Yang et al. , 2021 ) . To be specific , JT-VAE ( Jin et al. , 2018a ) proposes a VAE-based architecture to encode both atomic graphs and structural graphs for efficient molecule generation . MolGAN ( De Cao & Kipf , 2018 ) exploits GANs for molecule generation , where discriminators are used to encourage the model to generate realistic and chemically valid molecules . MRNN ( Popova et al. , 2019 ) extends the idea of GraphRNN ( You et al. , 2018b ) to formulate molecule generation as an auto-regressive process . GCPN ( You et al. , 2018a ) formulates the molecule generation process as a reinforcement learning problem , where a molecule is obtained step by step by connecting atoms and a reward is used for controllable generation .
GraphNVP ( Madhawa et al. , 2019 ) first introduces normalizing flows for molecule generation , where the generation process is invertible . Later works improve the flow-based models via autoregressive generation ( Shi et al. , 2020 ) , valency correction ( Zang & Wang , 2020 ) , and discrete latent representations ( Luo et al. , 2021 ) . GraphEBM ( Liu et al. , 2021 ) introduces energy-based models , which model the density of molecule data . Controllable Molecule Generation . Another key point for molecule generation is controllable generation , where the generated molecules are expected to possess certain properties . Early work ( Segler et al. , 2018 ) biases the distribution of the data and fine-tunes the generative models on molecules with known desired properties in order to generate molecules with those properties . Recent work mainly leverages optimization-based ( Shi et al. , 2020 ; You et al. , 2018a ; Hoffman et al. , 2020 ; Winter et al. , 2019 ; ? ) , reinforcement learning-based ( Zang & Wang , 2020 ; Jin et al. , 2018a ; Blaschke et al. , 2020 ) , and searching-based ( Brown et al. , 2019 ; Yang et al. , 2020 ; Kwon et al. , 2021 ) approaches to generate molecules with desired properties . Optimization-based methods are quite flexible and can either work directly on the molecules ( Renz et al. , 2019 ; Fu et al. , 2020 ; Xie et al. , 2021 ; Maziarz et al. , 2021 ) or work on the learnt latent vectors of the molecules ( Gómez-Bombarelli et al. , 2018 ; Jin et al. , 2018b ; Winter et al. , 2019 ; Griffiths & Hernández-Lobato , 2020 ; Notin et al. , 2021 ) . Reinforcement learning-based methods usually formulate controllable generation as a sequential decision-making problem and require a score function to provide rewards to the agent . Searching-based approaches ( Brown et al. , 2019 ; Yang et al. , 2020 ; Kwon et al. , 2021 ) are also capable of searching for molecules with optimized properties . Besides , a few works ( Chenthamarakshan et al. , 2020 ; Das et al.
, 2021 ) leverage the learnt latent space and achieve controllable generation by accepting/rejecting sampled molecules based on a molecular property predictor . Despite the ability to generate molecules with optimized properties , existing methods struggle to interpret the generation process and cannot generate molecules with monotonically and smoothly changing molecular properties . 3 PRELIMINARIES . Molecule Graph . Molecules can be represented as graphs $X = (V, E, \mathbf{E}, F)$ , where $V$ denotes a set of $N$ vertices ( i.e. , atoms ) , $E \subseteq V \times V$ denotes a set of edges ( i.e. , bonds ) , $F \in \{0, 1\}^{N \times D}$ denotes the node features ( i.e. , atom types ) and $\mathbf{E} \in \{0, 1\}^{N \times N \times K}$ denotes the edge features ( i.e. , bond types ) . The numbers of atom types and bond types are denoted by $D$ and $K$ , respectively . Deep Molecule Generative Models . In molecule generation , a generative model $M$ encodes the molecular graph $X$ as a latent vector $Z \in \mathbb{R}^l$ , with $l$ being the dimension of the latent space , and is capable of decoding any latent vector back to the molecular space . Specifically , the variational autoencoder ( VAE ) ( Kingma & Welling , 2013 ) and the flow-based model ( Flow ) ( Rezende & Mohamed , 2015 ) are the two most commonly used models for molecule generation tasks . Both of them encode the data from the molecular space to a latent space , which is usually modeled as a Gaussian distribution ; then they decode the latent code back to the molecular space . They can be formulated as : $z = f(x)$ , $x' = g(z)$ , ( 1 ) where $x$ and $x'$ are the ground-truth and reconstructed/sampled data respectively , and $z \in Z$ represents a latent vector in the latent space . 4 PROBLEM FORMULATION OF MOLECULE MANIPULATION . To improve the steerability and interpretability of molecule generative models , we propose a new research task , molecule manipulation , which interprets the generative model and steers the properties of the output molecules .
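The hyperplane-based manipulation sketched in the introduction can be made concrete in a few lines : fit a linear ( here logistic ) separating hyperplane on latent codes labelled by a binary property , take its unit normal as the latent editing direction , and move a code k consecutive steps along it . This is a minimal NumPy sketch under our own simplifying assumptions , not the authors ' implementation ; `property_direction` and `manipulate` are hypothetical names , and decoding each edited code back to a molecule with the pretrained generator g is omitted :

```python
import numpy as np

def property_direction(Z, y, lr=0.1, epochs=500):
    """Estimate the latent direction governing a binary property.

    Z: (n, l) latent codes; y: (n,) binary property labels.
    Fits a logistic separating hyperplane by gradient descent and
    returns its unit normal -- the direction along which the
    property is expected to change.
    """
    w = np.zeros(Z.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(Z @ w + b)))  # predicted property
        g = p - y                               # logistic-loss gradient
        w -= lr * (Z.T @ g) / len(y)
        b -= lr * g.mean()
    return w / np.linalg.norm(w)

def manipulate(z, direction, step, k):
    """Move a latent code k consecutive steps along the direction."""
    return [z + (i + 1) * step * direction for i in range(k)]
```

On synthetic codes whose property is governed by a single latent coordinate , the recovered unit normal aligns with that coordinate , and stepping along it traces a manipulation path x ( 1 ) , . . . , x ( k ) once each edited code is decoded .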
To be specific , a deep generative model contains a generator $g : \mathcal{Z} \to \mathcal{X}$ , where $\mathcal{Z} \subseteq \mathbb{R}^l$ stands for the $l$-dimensional latent space , which is commonly assumed to follow a Gaussian distribution ( Kingma & Welling , 2013 ; Rezende & Mohamed , 2015 ) . There exist property functions $f_P$ which define the property space $\mathcal{P}$ via $P = f_P(X)$ . Formulation . The input to molecule manipulation is a list of $n$ molecules $X = \{x_1, x_2, \cdots, x_n\}$ and a list of $m$ molecular properties $P = \{p_1, p_2, \cdots, p_m\}$ . We aim to manipulate one or more molecular properties $p$ of a given molecule in $k$ consecutive steps and output the manipulated molecules with properties $p' = \{p^{(1)}, p^{(2)}, \cdots, p^{(k)}\}$ . By manipulating the given molecule , we can observe the alignment of $\mathcal{Z} \to \mathcal{X} \to \mathcal{P}$ , where the relationship between $\mathcal{Z}$ and $\mathcal{X}$ explains the latent space of molecule generative models , and the relationship between $\mathcal{X}$ and $\mathcal{P}$ reveals the correlations between molecular structures and properties . By traversing the latent space , we can generate molecules with continuous structure/property changes . Evaluation . For the molecule manipulation task , we design two new evaluation metrics , named success rate ( SR ) and soft success rate ( SSR ) , that measure the performance in discovering latent molecular property directions . To be specific , we consider a manipulation to be successful only if we generate molecules with monotonically-changing properties over $k$ consecutive steps of manipulation , as follows : $\varphi_{\mathrm{success}}(x, k) = \mathbb{1}\left[\forall\, i \in [k] : f_p(x^{(i)}) - f_p(x^{(i+1)}) \le 0\right]$ , ( 2 ) where $f_p$ is a property function which computes a certain molecular property , and $x^{(i)}$ , $x^{(i+1)}$ represent molecules generated in two adjacent steps . As monotonicity is rather strict , we propose a more flexible definition of success , namely soft success , as follows : $\varphi_{\mathrm{soft\,success}}(x, k) = \mathbb{1}\left[\forall\, i \in [k] : f_p(x^{(i)}) - f_p(x^{(i+1)}) \le \epsilon\right]$ , ( 3 ) where $\epsilon$ is a predefined tolerance threshold that weakens the monotonicity requirement . To extend the evaluation metric to $|P|$ molecular properties and $|X|$ candidate molecules to manipulate , we calculate the overall SR as : $\mathrm{SR}(P, X, k) = \frac{\sum_{p \in P,\, x \in X} \mathbb{1}\left[\varphi_{\mathrm{success}}(x_p, k) \wedge \varphi_{\mathrm{SD}}(x_p, k) \wedge \varphi_{\mathrm{DIV}}(x_p, k)\right]}{|P| \times |X|}$ , ( 4 ) where $x_p$ represents manipulating property $p$ of molecule $x$ , which results in a manipulation path $x_p = \{x^{(i)}_p \mid i \in [k]\}$ . Since the molecular space is essentially discrete , we allow the model to generate duplicate molecules during manipulation , but the model has to generate at least one molecule distinct from the base molecule ( diversity , or DIV ) , and the structure difference ( SD ) enforces monotonically-decreasing structure similarity along the manipulation sequence , as follows : $\varphi_{\mathrm{DIV}}(x, k) = \mathbb{1}\left[\exists\, i \in [k] : x^{(i)} \neq x^{(1)}\right]$ , ( 5 ) $\varphi_{\mathrm{SD}}(x, k) = \mathbb{1}\left[\forall\, i \in [k] : \delta(x^{(i)}, x^{(1)}) - \delta(x^{(i+1)}, x^{(1)}) \ge 0\right]$ , ( 6 ) where $\delta$ measures the structure similarity between the two input molecules . Similarly , we provide a soft structure difference ( SSD ) with a predefined tolerance threshold $\gamma$ as follows : $\varphi_{\mathrm{SSD}}(x, k) = \mathbb{1}\left[\forall\, i \in [k] : \delta(x^{(i)}, x^{(1)}) - \delta(x^{(i+1)}, x^{(1)}) \ge \gamma\right]$ . ( 7 ) Finally , we calculate the overall SSR as : $\mathrm{SSR}(P, X, k) = \frac{\sum_{p \in P,\, x \in X} \mathbb{1}\left[\varphi_{\mathrm{soft\,success}}(x_p, k) \wedge \varphi_{\mathrm{SSD}}(x_p, k) \wedge \varphi_{\mathrm{DIV}}(x_p, k)\right]}{|P| \times |X|}$ . ( 8 ) | This paper proposes to analyse latent space-based generative models of molecules by measuring how separable the latent space is with respect to several molecular properties. If a property is sufficiently separable, then it can be affected by moving along the normal vector of the separation plane. Experiments compare the steerability of several established generative models. | SP:820e8fe27c5f729ac8465d12dfb5e47f7379ee50
Interpreting Molecule Generative Models for Interactive Molecule Discovery | 1 INTRODUCTION . Designing molecules with desired properties is a fundamental problem in chemistry , which has a variety of applications in drug discovery and material science ( Chen et al. , 2018 ) . Traditional pipelines require exhaustive human efforts and domain knowledge , which are difficult to scale up . Recent studies exploit deep generative models to solve this problem by encoding molecules into a latent space , from which random samples are drawn and decoded to novel molecules ( Walters & Barzilay , 2020 ) . It has been widely observed that such deep molecule generative models are able to facilitate the design and development of drugs and materials from many perspectives ( Lopez et al. , 2020 ; Sanchez-Lengeling & Aspuru-Guzik , 2018 ) . Despite the promising results of deep generative models for molecule generation , much less effort has been made to interpret the learned representations . Most of the existing models are based on deep neural networks , which are known to be short on interpretability ( Samek et al. , 2019 ) . Outside of molecule generation domain , many attempts have been made to improve the interpretability of deep learning models from various aspects , e.g. , representation space ( Zhou et al. , 2016 ) , model space ( Guo et al. , 2021 ) , and latent space ( Shen et al. , 2020 ; Shen & Zhou , 2021 ) . In molecule generation domain , interpretability can be studied in two ways : ( 1 ) the interpretation of learned latent space where steering the value of latent vectors could lead to smooth and continuous molecular property change and ( 2 ) the interpretation of molecular space that adjusting the molecular property could observe smooth structure change of molecules . In addition , it remains challenging to generate molecules with desired properties . 
Previous works mostly rely on optimization-based , reinforcement learning-based , and searching-based methods to achieve property control of the generated molecules ( Shi et al. , 2020 ; Jin et al. , 2018a ) . Specifically , reinforcement learning-based algorithm ( You et al. , 2018a ) equips the model with rewards designed to encourage the molecule generative models to generate molecules with specific molecular properties . Optimization-based algorithm takes advantage of the learnt latent space by molecule generative models and optimize the molecular properties via Bayesian Optimization ( Liu et al. , 2018 ) . Searching-based algorithm instead searches directly from the chemical space for molecules with optimal properties ( Kwon et al. , 2021 ) . However , these lines of work are designed for molecule generation with optimized property and thus unable to change the property monotonically and smoothly . Besides , current methods are confined to a limited number of molecular properties , which hinders real-world applications in drug discovery and material science . For example , existing work only discover a limited set of molecular properties , such as penalized logP ( octanol-water partition coefficient ) , QED ( Drug-likeness ) , DRD2 activity , etc ( Jin et al. , 2018a ; Shi et al. , 2020 ; Liu et al. , 2018 ; Fu et al. , 2020 ) . Consequently , when molecules with new properties are needed , the models must be re-trained with a different optimization goal , which is significantly time-consuming . To tackle the above challenges , we formulate a new task , molecule manipulation , which aims to improve the interpretability and steerability of a given molecule generative model via continuously manipulating molecular properties . 
Based on the observation that molecules sharing similar structures/properties tend to cluster in the latent space , we develop MolSpace Explorer , a model-agnostic method to manipulate molecules with continuous changes of molecular properties . Specifically , MolSpace Explorer first identifies the property separation hyperplane which defines the boundary for molecular properties ( e.g. , drug-like or drug-unlike ) in the latent molecular space learned by a given generative model . Based on the property separation hyperplane , we estimate the latent directions that govern molecular properties , which are in turn used to enable continuous change of the molecular structures and properties without re-training the given molecular generative model . To the best our knowledge , this work is one of the earliest attempts to achieve interactive molecule discovery through the steering of pretrained generative models . The experiments demonstrate that our method can effectively quantify the interpretability and steerability of state-of-the-art molecule generative models . To measure the ability of generative models in interpreting molecular properties and generating molecules with continuous property control , we design a new evaluation metric named success rate , which evaluates the percentage of successful manipulations with continuous property-changing molecules over manipulations of a group of molecules . To visualize our method and facilitate interactive molecule discovery for scientists , we develop an interactive system with visualization of real-time molecule manipulations and smooth molecular structure/property changes . Our main contributions are summarized as follows : • We formulate molecule manipulation , a new task which measures the interpretability and steerability of molecule generative models via the ability to manipulate the molecular properties of molecules in the latent space . 
• We develop a simple yet effective model-agnostic method named MolSpace Explorer for molecule manipulation , which further analyzes current molecule generative models in terms of their interpretability and steerability.terpretation . • Comprehensive experiments demonstrate the effectiveness of our method in quantifying the interpretability and steerability of various molecule generative models . An interactive system is developed for real-time molecule manipulation . 2 RELATED WORK . Molecule Generation . Recent studies have explored a variety of deep generative models for molecule generation . Specifically , GrammarVAE ( Kusner et al. , 2017 ) designs a variational autoencoder-based model that represents molecules as SMILE strings . With the advancement of graph neural networks ( GNN ) , a surge of GNN-based generative models have been proposed to tackle the problem , by combining GNN with variational autoencoders ( VAEs ) , generative adversarial networks ( GANs ) , normalizing flows , energy-based models ( EBMs ) , and reinforcement learning ( Olivecrona et al. , 2017 ; De Cao & Kipf , 2018 ; Jin et al. , 2018a ; Zhou et al. , 2019 ; Madhawa et al. , 2019 ; Shi et al. , 2020 ; Luo et al. , 2021 ; Liu et al. , 2021 ; Yang et al. , 2021 ) . To be specific , JT-VAE ( Jin et al. , 2018a ) proposes a VAE-based architecture to encode both atomic graphs and structural graphs for efficient molecule generation . MolGAN ( De Cao & Kipf , 2018 ) exploits GANs for molecule generation , where discriminators are used to encourage the model to generate realistic and chemically-valid molecules . MRNN ( Popova et al. , 2019 ) extends the idea of GraphRNN ( You et al. , 2018b ) to formulate molecule generation as an auto-regressive process . GCPN ( You et al. , 2018a ) formulates the molecule generation process as a reinforcement learning problem where it obtains a molecule step by step by connecting atoms and reward is used for controllable generation . 
GraphNVP (Madhawa et al., 2019) first introduces normalizing flows for molecule generation, where the generation process is invertible. Later works improve flow-based models via autoregressive generation (Shi et al., 2020), valency correction (Zang & Wang, 2020), and discrete latent representations (Luo et al., 2021). GraphEBM (Liu et al., 2021) introduces energy-based models that model the density of molecule data. Controllable Molecule Generation. Another key point for molecule generation is controllable generation, where the generated molecules are expected to possess certain properties. Early work (Segler et al., 2018) biases the data distribution and fine-tunes generative models on molecules with known desired properties. Recent works mainly leverage optimization-based (Shi et al., 2020; You et al., 2018a; Hoffman et al., 2020; Winter et al., 2019), reinforcement learning-based (Zang & Wang, 2020; Jin et al., 2018a; Blaschke et al., 2020), and searching-based (Brown et al., 2019; Yang et al., 2020; Kwon et al., 2021) approaches to generate molecules with desired properties. Optimization-based methods are quite flexible and can work either directly on the molecules (Renz et al., 2019; Fu et al., 2020; Xie et al., 2021; Maziarz et al., 2021) or on the learnt latent vectors of the molecules (Gómez-Bombarelli et al., 2018; Jin et al., 2018b; Winter et al., 2019; Griffiths & Hernández-Lobato, 2020; Notin et al., 2021). Reinforcement learning-based methods usually formulate controllable generation as a sequential decision-making problem and require a score function to provide rewards to the agent. Searching-based approaches (Brown et al., 2019; Yang et al., 2020; Kwon et al., 2021) are also capable of searching for molecules with optimized properties. Besides, a few works (Chenthamarakshan et al., 2020; Das et al.
, 2021) leverage the learnt latent space and achieve controllable generation by accepting/rejecting sampled molecules based on a molecular property predictor. Despite their ability to generate molecules with optimized properties, existing methods struggle to interpret the generation process and cannot generate molecules with monotonically and smoothly changing molecular properties. 3 PRELIMINARIES. Molecule Graph. Molecules can be represented as graphs X = (V, E, E, F), where V denotes a set of N vertices (i.e., atoms), E ⊆ V × V denotes a set of edges (i.e., bonds), F ∈ {0, 1}^{N×D} denotes the node features (i.e., atom types), and E ∈ {0, 1}^{N×N×K} denotes the edge features (i.e., bond types). The numbers of atom types and bond types are denoted by D and K, respectively. Deep Molecule Generative Models. In molecule generation, a generative model M encodes a molecular graph X as a latent vector z ∈ R^l, with l being the dimension of the latent space, and is capable of decoding any latent vector back to the molecular space. Specifically, the variational autoencoder (VAE) (Kingma & Welling, 2013) and flow-based models (Rezende & Mohamed, 2015) are the two most commonly used models for molecule generation tasks. Both encode data from the molecular space to a latent space, usually modeled as a Gaussian distribution, and then decode latent codes back to the molecular space. They can be formulated as: z = f(x), x′ = g(z), (1) where x and x′ are the ground-truth and reconstructed/sampled data respectively, and z ∈ Z represents a latent vector in the latent space. 4 PROBLEM FORMULATION OF MOLECULE MANIPULATION. To improve the steerability and interpretability of molecule generative models, we propose a new research task, molecule manipulation, which interprets the generative model and steers the properties of the output molecules.
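The encode/decode interface in Eq. (1) can be illustrated with a toy linear stand-in for a generative model. This is purely illustrative: the linear map and the names `f` and `g` are our assumptions, not any model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a generative model: a random linear encoder W and its
# pseudo-inverse as the decoder, mimicking z = f(x), x' = g(z).
d_data, d_latent = 8, 3
W = rng.standard_normal((d_latent, d_data))
W_pinv = np.linalg.pinv(W)

def f(x):   # encoder: molecular space -> latent space
    return W @ x

def g(z):   # decoder: latent space -> molecular space
    return W_pinv @ z

x = rng.standard_normal(d_data)
z = f(x)          # latent code of x
x_rec = g(z)      # reconstruction x'

# g(f(x)) projects x onto the row space of W, so re-encoding is exact.
assert np.allclose(f(x_rec), z)
```

Any real molecule generative model replaces the linear maps with learned neural encoders/decoders, but the interface manipulated in Section 4 is exactly this pair (f, g).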
To be specific, a deep generative model contains a generator g : Z → X, where Z ⊆ R^l stands for the l-dimensional latent space, commonly assumed to follow a Gaussian distribution (Kingma & Welling, 2013; Rezende & Mohamed, 2015). There also exist property functions f_P which define the property space P via P = f_P(X). Formulation. The input to molecule manipulation is a list of n molecules X = {x_1, x_2, ..., x_n} and a list of m molecular properties P = {p_1, p_2, ..., p_m}. We aim to manipulate one or more molecular properties p of a given molecule over k consecutive steps and output the manipulated molecules with properties p′ = {p^(1), p^(2), ..., p^(k)}. By manipulating the given molecule, we can observe the alignment of Z → X → P: the relationship between Z and X explains the latent space of molecule generative models, and the relationship between X and P reveals the correlations between molecular structures and properties. By traversing the latent space, we can generate molecules with continuous structure/property changes. Evaluation. For the molecule manipulation task, we design two new evaluation metrics, success rate (SR) and soft success rate (SSR), that measure the performance in discovering latent molecular property directions. To be specific, we consider a manipulation to be successful only if it generates molecules with monotonically changing properties over the k consecutive manipulation steps, as follows: φ_success(x, k) = 1[∀ i ∈ [k], s.t., f_p(x^(i)) − f_p(x^(i+1)) ≤ 0], (2) where f_p is a property function which calculates a certain molecular property, and x^(i), x^(i+1) represent molecules generated in two adjacent steps. As monotonicity is rather strict, we propose a more flexible definition of success, namely soft success, as follows: φ_soft_success(x, k) = 1[∀ i ∈ [k], s.t.
, f_p(x^(i)) − f_p(x^(i+1)) ≤ ε], (3) where ε is a predefined tolerance threshold that weakens the monotonicity requirement. To extend the evaluation metric to |P| molecular properties and |X| candidate molecules to manipulate, we calculate the overall SR as: SR(P, X, k) = Σ_{p∈P, x∈X} 1[φ_success(x_p, k) ∧ φ_SD(x_p, k) ∧ φ_DIV(x_p, k)] / (|P| × |X|), (4) where x_p denotes manipulating property p of molecule x, which results in a manipulation path x_p = {x_p^(i) | i ∈ [k]}. Since the molecular space is essentially discrete, we allow the model to generate duplicate molecules during manipulation, but the model has to generate at least one molecule distinct from the base molecule (diversity, or DIV), and the structure difference (SD) enforces monotonically decreasing structure similarity along the manipulation sequence, as follows: φ_DIV(x, k) = 1[∃ i ∈ [k], s.t., x^(i) ≠ x^(1)], (5) φ_SD(x, k) = 1[∀ i ∈ [k], s.t., δ(x^(i), x^(1)) − δ(x^(i+1), x^(1)) ≥ 0], (6) where δ measures the structure similarity between the two input molecules. Similarly, we provide a soft structure difference (SSD) with a predefined tolerance threshold γ as follows: φ_SSD(x, k) = 1[∀ i ∈ [k], s.t., δ(x^(i), x^(1)) − δ(x^(i+1), x^(1)) ≥ γ]. (7) Finally, we calculate the overall SSR as: SSR(P, X, k) = Σ_{p∈P, x∈X} 1[φ_soft_success(x_p, k) ∧ φ_SSD(x_p, k) ∧ φ_DIV(x_p, k)] / (|P| × |X|). (8) | This paper proposes Molecular Space Explorer (MolSpacE), a method to generate molecules with continuously varying properties using a pre-trained latent-variable generative model. It essentially involves 3 steps (although these steps were not clearly spelled out in the paper): 1. Sample many points from the model and evaluate their properties 2. For each property, train an SVM to predict the property given the latent vector 3.
Use the normal to this hyperplane as a "property manipulation" direction in latent space, which can be added to a latent vector to change the property value of the decoded molecule. | SP:820e8fe27c5f729ac8465d12dfb5e47f7379ee50 |
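The three steps in the summary can be sketched as follows. As a minimal stand-in we fit a least-squares linear classifier instead of a full SVM (its weight vector plays the role of the hyperplane normal); the mock property oracle and all variable names are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: sample latent points and label them with a (mock) property
# oracle; here the property is governed by a hidden direction w_true.
d = 16
w_true = rng.standard_normal(d)
w_true /= np.linalg.norm(w_true)
Z = rng.standard_normal((500, d))
labels = np.sign(Z @ w_true)          # e.g., drug-like (+1) vs not (-1)

# Step 2: fit a linear separator (least-squares stand-in for an SVM);
# its weight vector approximates the separating hyperplane's normal.
w, *_ = np.linalg.lstsq(Z, labels, rcond=None)
n = w / np.linalg.norm(w)

# Step 3: move a latent code along the normal to steer the property,
# then decode the shifted code with the pretrained generator.
z = rng.standard_normal(d)
score_before = z @ w_true
score_after = (z + 2.0 * n) @ w_true  # one manipulation step of size 2

assert n @ w_true > 0.9               # recovered direction aligns well
assert score_after > score_before     # property score increases
```

With a real model, `labels` would come from evaluating a property function on decoded molecules, and the shifted latent code would be decoded back to a molecule.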
Interpreting Molecule Generative Models for Interactive Molecule Discovery | 1 INTRODUCTION. Designing molecules with desired properties is a fundamental problem in chemistry, with a variety of applications in drug discovery and material science (Chen et al., 2018). Traditional pipelines require exhaustive human effort and domain knowledge, which are difficult to scale up. Recent studies exploit deep generative models to solve this problem by encoding molecules into a latent space, from which random samples are drawn and decoded to novel molecules (Walters & Barzilay, 2020). It has been widely observed that such deep molecule generative models are able to facilitate the design and development of drugs and materials from many perspectives (Lopez et al., 2020; Sanchez-Lengeling & Aspuru-Guzik, 2018). Despite the promising results of deep generative models for molecule generation, much less effort has been made to interpret the learned representations. Most existing models are based on deep neural networks, which are known to fall short on interpretability (Samek et al., 2019). Outside the molecule generation domain, many attempts have been made to improve the interpretability of deep learning models from various aspects, e.g., the representation space (Zhou et al., 2016), the model space (Guo et al., 2021), and the latent space (Shen et al., 2020; Shen & Zhou, 2021). In the molecule generation domain, interpretability can be studied in two ways: (1) interpretation of the learned latent space, where steering the values of latent vectors leads to smooth and continuous molecular property changes; and (2) interpretation of the molecular space, where adjusting a molecular property reveals smooth structural changes of molecules. In addition, it remains challenging to generate molecules with desired properties.
Previous works mostly rely on optimization-based, reinforcement learning-based, and searching-based methods to achieve property control of the generated molecules (Shi et al., 2020; Jin et al., 2018a). Specifically, reinforcement learning-based algorithms (You et al., 2018a) equip the model with rewards designed to encourage the generation of molecules with specific molecular properties. Optimization-based algorithms take advantage of the latent space learnt by molecule generative models and optimize molecular properties via Bayesian optimization (Liu et al., 2018). Searching-based algorithms instead search the chemical space directly for molecules with optimal properties (Kwon et al., 2021). However, these lines of work are designed for molecule generation with optimized properties and are thus unable to change a property monotonically and smoothly. Besides, current methods are confined to a limited number of molecular properties, which hinders real-world applications in drug discovery and material science. For example, existing works only cover a limited set of molecular properties, such as penalized logP (octanol-water partition coefficient), QED (drug-likeness), DRD2 activity, etc. (Jin et al., 2018a; Shi et al., 2020; Liu et al., 2018; Fu et al., 2020). Consequently, when molecules with new properties are needed, the models must be re-trained with a different optimization goal, which is significantly time-consuming. To tackle the above challenges, we formulate a new task, molecule manipulation, which aims to improve the interpretability and steerability of a given molecule generative model via continuously manipulating molecular properties.
Based on the observation that molecules sharing similar structures/properties tend to cluster in the latent space, we develop MolSpace Explorer, a model-agnostic method to manipulate molecules with continuous changes of molecular properties. Specifically, MolSpace Explorer first identifies the property separation hyperplane, which defines the boundary for a molecular property (e.g., drug-like vs. non-drug-like) in the latent space learned by a given generative model. Based on this hyperplane, we estimate the latent directions that govern molecular properties, which in turn enable continuous change of molecular structures and properties without re-training the given generative model. To the best of our knowledge, this work is one of the earliest attempts to achieve interactive molecule discovery through the steering of pretrained generative models. Our experiments demonstrate that the method can effectively quantify the interpretability and steerability of state-of-the-art molecule generative models. To measure the ability of generative models to interpret molecular properties and generate molecules with continuous property control, we design a new evaluation metric named success rate, which measures the fraction of manipulations over a group of molecules that yield continuously property-changing molecules. To visualize our method and facilitate interactive molecule discovery for scientists, we develop an interactive system with visualization of real-time molecule manipulations and smooth molecular structure/property changes. Our main contributions are summarized as follows: • We formulate molecule manipulation, a new task which measures the interpretability and steerability of molecule generative models via the ability to manipulate the molecular properties of molecules in the latent space.
• We develop a simple yet effective model-agnostic method named MolSpace Explorer for molecule manipulation, which further analyzes current molecule generative models in terms of their interpretability and steerability. • Comprehensive experiments demonstrate the effectiveness of our method in quantifying the interpretability and steerability of various molecule generative models. An interactive system is developed for real-time molecule manipulation. 2 RELATED WORK. Molecule Generation. Recent studies have explored a variety of deep generative models for molecule generation. Specifically, GrammarVAE (Kusner et al., 2017) designs a variational autoencoder-based model that represents molecules as SMILES strings. With the advancement of graph neural networks (GNNs), a surge of GNN-based generative models has been proposed to tackle the problem, combining GNNs with variational autoencoders (VAEs), generative adversarial networks (GANs), normalizing flows, energy-based models (EBMs), and reinforcement learning (Olivecrona et al., 2017; De Cao & Kipf, 2018; Jin et al., 2018a; Zhou et al., 2019; Madhawa et al., 2019; Shi et al., 2020; Luo et al., 2021; Liu et al., 2021; Yang et al., 2021). To be specific, JT-VAE (Jin et al., 2018a) proposes a VAE-based architecture that encodes both atomic graphs and structural graphs for efficient molecule generation. MolGAN (De Cao & Kipf, 2018) exploits GANs for molecule generation, where discriminators encourage the model to generate realistic and chemically valid molecules. MRNN (Popova et al., 2019) extends the idea of GraphRNN (You et al., 2018b) to formulate molecule generation as an auto-regressive process. GCPN (You et al., 2018a) formulates molecule generation as a reinforcement learning problem, building a molecule step by step by connecting atoms, with rewards used for controllable generation.
GraphNVP (Madhawa et al., 2019) first introduces normalizing flows for molecule generation, where the generation process is invertible. Later works improve flow-based models via autoregressive generation (Shi et al., 2020), valency correction (Zang & Wang, 2020), and discrete latent representations (Luo et al., 2021). GraphEBM (Liu et al., 2021) introduces energy-based models that model the density of molecule data. Controllable Molecule Generation. Another key point for molecule generation is controllable generation, where the generated molecules are expected to possess certain properties. Early work (Segler et al., 2018) biases the data distribution and fine-tunes generative models on molecules with known desired properties. Recent works mainly leverage optimization-based (Shi et al., 2020; You et al., 2018a; Hoffman et al., 2020; Winter et al., 2019), reinforcement learning-based (Zang & Wang, 2020; Jin et al., 2018a; Blaschke et al., 2020), and searching-based (Brown et al., 2019; Yang et al., 2020; Kwon et al., 2021) approaches to generate molecules with desired properties. Optimization-based methods are quite flexible and can work either directly on the molecules (Renz et al., 2019; Fu et al., 2020; Xie et al., 2021; Maziarz et al., 2021) or on the learnt latent vectors of the molecules (Gómez-Bombarelli et al., 2018; Jin et al., 2018b; Winter et al., 2019; Griffiths & Hernández-Lobato, 2020; Notin et al., 2021). Reinforcement learning-based methods usually formulate controllable generation as a sequential decision-making problem and require a score function to provide rewards to the agent. Searching-based approaches (Brown et al., 2019; Yang et al., 2020; Kwon et al., 2021) are also capable of searching for molecules with optimized properties. Besides, a few works (Chenthamarakshan et al., 2020; Das et al.
, 2021) leverage the learnt latent space and achieve controllable generation by accepting/rejecting sampled molecules based on a molecular property predictor. Despite their ability to generate molecules with optimized properties, existing methods struggle to interpret the generation process and cannot generate molecules with monotonically and smoothly changing molecular properties. 3 PRELIMINARIES. Molecule Graph. Molecules can be represented as graphs X = (V, E, E, F), where V denotes a set of N vertices (i.e., atoms), E ⊆ V × V denotes a set of edges (i.e., bonds), F ∈ {0, 1}^{N×D} denotes the node features (i.e., atom types), and E ∈ {0, 1}^{N×N×K} denotes the edge features (i.e., bond types). The numbers of atom types and bond types are denoted by D and K, respectively. Deep Molecule Generative Models. In molecule generation, a generative model M encodes a molecular graph X as a latent vector z ∈ R^l, with l being the dimension of the latent space, and is capable of decoding any latent vector back to the molecular space. Specifically, the variational autoencoder (VAE) (Kingma & Welling, 2013) and flow-based models (Rezende & Mohamed, 2015) are the two most commonly used models for molecule generation tasks. Both encode data from the molecular space to a latent space, usually modeled as a Gaussian distribution, and then decode latent codes back to the molecular space. They can be formulated as: z = f(x), x′ = g(z), (1) where x and x′ are the ground-truth and reconstructed/sampled data respectively, and z ∈ Z represents a latent vector in the latent space. 4 PROBLEM FORMULATION OF MOLECULE MANIPULATION. To improve the steerability and interpretability of molecule generative models, we propose a new research task, molecule manipulation, which interprets the generative model and steers the properties of the output molecules.
To be specific, a deep generative model contains a generator g : Z → X, where Z ⊆ R^l stands for the l-dimensional latent space, commonly assumed to follow a Gaussian distribution (Kingma & Welling, 2013; Rezende & Mohamed, 2015). There also exist property functions f_P which define the property space P via P = f_P(X). Formulation. The input to molecule manipulation is a list of n molecules X = {x_1, x_2, ..., x_n} and a list of m molecular properties P = {p_1, p_2, ..., p_m}. We aim to manipulate one or more molecular properties p of a given molecule over k consecutive steps and output the manipulated molecules with properties p′ = {p^(1), p^(2), ..., p^(k)}. By manipulating the given molecule, we can observe the alignment of Z → X → P: the relationship between Z and X explains the latent space of molecule generative models, and the relationship between X and P reveals the correlations between molecular structures and properties. By traversing the latent space, we can generate molecules with continuous structure/property changes. Evaluation. For the molecule manipulation task, we design two new evaluation metrics, success rate (SR) and soft success rate (SSR), that measure the performance in discovering latent molecular property directions. To be specific, we consider a manipulation to be successful only if it generates molecules with monotonically changing properties over the k consecutive manipulation steps, as follows: φ_success(x, k) = 1[∀ i ∈ [k], s.t., f_p(x^(i)) − f_p(x^(i+1)) ≤ 0], (2) where f_p is a property function which calculates a certain molecular property, and x^(i), x^(i+1) represent molecules generated in two adjacent steps. As monotonicity is rather strict, we propose a more flexible definition of success, namely soft success, as follows: φ_soft_success(x, k) = 1[∀ i ∈ [k], s.t.
, f_p(x^(i)) − f_p(x^(i+1)) ≤ ε], (3) where ε is a predefined tolerance threshold that weakens the monotonicity requirement. To extend the evaluation metric to |P| molecular properties and |X| candidate molecules to manipulate, we calculate the overall SR as: SR(P, X, k) = Σ_{p∈P, x∈X} 1[φ_success(x_p, k) ∧ φ_SD(x_p, k) ∧ φ_DIV(x_p, k)] / (|P| × |X|), (4) where x_p denotes manipulating property p of molecule x, which results in a manipulation path x_p = {x_p^(i) | i ∈ [k]}. Since the molecular space is essentially discrete, we allow the model to generate duplicate molecules during manipulation, but the model has to generate at least one molecule distinct from the base molecule (diversity, or DIV), and the structure difference (SD) enforces monotonically decreasing structure similarity along the manipulation sequence, as follows: φ_DIV(x, k) = 1[∃ i ∈ [k], s.t., x^(i) ≠ x^(1)], (5) φ_SD(x, k) = 1[∀ i ∈ [k], s.t., δ(x^(i), x^(1)) − δ(x^(i+1), x^(1)) ≥ 0], (6) where δ measures the structure similarity between the two input molecules. Similarly, we provide a soft structure difference (SSD) with a predefined tolerance threshold γ as follows: φ_SSD(x, k) = 1[∀ i ∈ [k], s.t., δ(x^(i), x^(1)) − δ(x^(i+1), x^(1)) ≥ γ]. (7) Finally, we calculate the overall SSR as: SSR(P, X, k) = Σ_{p∈P, x∈X} 1[φ_soft_success(x_p, k) ∧ φ_SSD(x_p, k) ∧ φ_DIV(x_p, k)] / (|P| × |X|). (8) | This paper presents a method named "MolSpaceExplorer" that explores the latent space of a molecule generative model to continuously optimize molecules toward desired properties. They identify latent directions by constructing a separation boundary (hyperplane) in the latent space and use these directions to modify latent vectors, which are then fed into the generative model to produce new molecules with desired properties.
The authors also developed an interface for interactive molecular discovery. The authors state that the main contributions of this study are: 1) the method is model-agnostic and thus applicable to any molecule generative model; 2) it does not require any retraining of the molecule generative model. | SP:820e8fe27c5f729ac8465d12dfb5e47f7379ee50 |
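Eqs. (2)–(8) above translate directly into code: given per-step property values f_p(x^(i)) and similarities δ(x^(i), x^(1)) along one manipulation path, the indicators reduce to monotonicity checks. A minimal pure-Python sketch (function and variable names are ours, not the paper's code):

```python
def phi_success(props, eps=0.0):
    # Eq. (2)/(3): property never drops by more than eps between steps.
    return all(props[i] - props[i + 1] <= eps for i in range(len(props) - 1))

def phi_sd(sims, gamma=0.0):
    # Eq. (6)/(7): similarity to the base molecule is non-increasing.
    return all(sims[i] - sims[i + 1] >= gamma for i in range(len(sims) - 1))

def phi_div(mols):
    # Eq. (5): at least one molecule differs from the base molecule.
    return any(m != mols[0] for m in mols)

def success_rate(paths, eps=0.0, gamma=0.0):
    # Eq. (4)/(8): fraction of paths passing all three checks.
    # Each path is (molecules, property values, similarities to base).
    ok = sum(phi_success(p, eps) and phi_sd(s, gamma) and phi_div(m)
             for m, p, s in paths)
    return ok / len(paths)

paths = [
    (["C", "CC", "CCO"], [0.1, 0.2, 0.35], [1.0, 0.8, 0.6]),  # success
    (["C", "C", "C"],    [0.1, 0.2, 0.3],  [1.0, 1.0, 1.0]),  # fails DIV
    (["C", "CC", "CN"],  [0.3, 0.2, 0.4],  [1.0, 0.7, 0.5]),  # fails SR
]
print(success_rate(paths))  # -> 0.3333333333333333
```

With eps > 0 and a negative gamma the same functions compute the soft variants SSR/SSD.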
Contractive error feedback for gradient compression | 1 INTRODUCTION. Many modern machine learning tasks can be cast as an optimization problem min_{x∈R^d} f(x) := (1/N) Σ_{i=1}^{N} E_{ξ∼D}[f_i(x, ξ)]. (1) Typically, the possibly nonconvex loss function f_i(x, ξ) is continuously differentiable with respect to (w.r.t.) x. We assume that f* > −∞ is a global minimum of (1). In this work, we focus on training neural networks (NNs) by solving (1) in two important scenarios: i) data parallelism in a multi-GPU setting, and ii) federated learning, where clients could be edge devices such as smartphones. In both cases, N denotes the number of workers (GPUs or devices), and f* is no smaller than 0 given common choices of loss functions such as cross entropy and mean square error. While training NNs is a resource-hungry task, bandwidth and memory prevent us from fully harvesting the merits of parallelism. Limited bandwidth slows down the exchange of information (e.g., gradients) among workers. In multi-GPU settings, it is observed that communication of gradients becomes a bottleneck that significantly drags down the training speed, while for federated learning, the system capacity is usually confined by communication latency between the server and workers. Memory constraints are less studied than bandwidth (Sohoni et al., 2019). However, given the trend of increasing model sizes in both computer vision and natural language processing, e.g., ViT (0.6B) (Dosovitskiy et al., 2020), Megatron-LM (8.3B) (Shoeybi et al., 2019), and T5 (11B) (Raffel et al., 2019), memory is becoming an unavoidable constraint for training large models on current-generation GPUs with 16/32GB memory. For federated learning, it is also necessary to reduce the memory footprint on workers, especially for IoT applications.
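Problem (1) can be made concrete with a toy quadratic local loss per worker. The sketch below (our own illustration, not the paper's setup) runs parallel SGD where each of the N workers contributes a stochastic gradient and the averaged gradient drives the update:

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem (1) with N workers and a simple quadratic local loss
# f_i(x, xi) = 0.5 * ||x - xi||^2 with xi ~ D_i; the minimizer of the
# expected global objective is the average of the worker data means.
N, d = 4, 5
means = rng.standard_normal((N, d))   # each worker's E[xi]
x = np.zeros(d)
eta = 0.5

for _ in range(200):
    grads = []
    for i in range(N):
        xi = means[i] + 0.1 * rng.standard_normal(d)  # stochastic sample
        grads.append(x - xi)                          # grad of f_i(x, xi)
    x = x - eta * np.mean(grads, axis=0)              # averaged SGD step

# x converges to a neighborhood of the minimizer (size set by the noise).
assert np.linalg.norm(x - means.mean(axis=0)) < 0.2
```

Exchanging the `grads` list is exactly the communication step whose cost (and whose compression) the rest of the paper is about.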
In this work, we focus on the parsimonious setting, jointly dealing with limited bandwidth and memory. The priority is still communication efficiency for speeding up training, since otherwise one can simply rely on SGD with almost no additional memory consumption. Many existing works are unable to cope with such a challenging problem because memory is not carefully taken care of. Communication efficiency. A well-documented approach to alleviate the communication bottleneck is to compress the (stochastic) gradients such that training can be made faster with reduced overhead (Seide et al., 2014; Alistarh et al., 2017; Karimireddy et al., 2019). Depending on whether the gradient compressor is biased, these schemes are categorized as follows. Unbiased compressors maintain the unbiasedness of stochastic gradients at the price of enlarged variance. For example, uniform quantization of stochastic gradients is studied in QSGD (Alistarh et al., 2017). A concurrent work (Wen et al., 2017) focuses on a special case of QSGD by quantizing each entry of the gradient into {±1, 0}. Other variants include natural quantization (Horvath et al., 2019), non-uniform quantization (Ramezani-Kebrya et al., 2021), and adaptive quantization (Faghri et al., 2020). Another route to an unbiased compressor is (scaled) gradient sparsification (Wangni et al., 2018) or its generalized form, atomic decomposition (Wang et al., 2018). However, such methods may be practically slow with performance degradation (Vogels et al., 2019). Biased gradient compressors, on the other hand, have been more successful in practice, since not only do they support a more impressive compression ratio, but test accuracy comparable with SGD can also be obtained in many scenarios. Examples of such compressors include top-k or (unscaled) random-k sparsification (Lin et al., 2018; Alistarh et al., 2019), signSGD (Bernstein et al.
, 2018a;b), and PowerSGD (Vogels et al., 2019). Due to the bias, such compressors typically rely on error feedback (EF) schemes to ensure convergence (Stich et al., 2018; Karimireddy et al., 2019). Memory concerns in communication efficient methods. Memory concerns are highly entangled with communication efficiency in distributed training. On the one hand, memory footprint can be alleviated through a smaller batch size (Krause et al., 2016; Spring et al., 2019), at the cost of more iterations and more communication rounds per epoch. Hence, communication efficient methods are particularly useful for accelerating training in such cases. On the other hand, existing communication efficient methods are challenged by limited memory for several reasons. First, a smaller batch size, which enlarges the variance, degrades the applicability of unbiased gradient compressors. As confirmed in our experiments, the exploded variance typically calls for a small step size, which decelerates convergence. Unbiased gradient compressors such as quantization are also hindered by the need for AllGather (Xu et al., 2020), since peak memory proportional to the number of workers is required. This renders them inefficient for the parsimonious setting, especially when the number of workers is large. Memory is also not handled well in methods with biased gradient compressors due to the need for error feedback, where additional memory of the same size as the model has to be consumed to keep track of the accumulated compression error. Note that although no extra memory footprint is introduced in Local SGD (Stich, 2019), it suffers from the variance blow-up problem as well. More importantly, Local SGD cannot be integrated with ZeRO-3 (Rajbhandari et al., 2020), a state-of-the-art method for coping with limited memory, due to optimizer states being partitioned among workers.
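Of the biased compressors listed above, top-k sparsification is the simplest to write down, and it is contractive: keeping the k largest-magnitude entries of a d-vector satisfies ‖x − C(x)‖² ≤ (1 − k/d)‖x‖². A short sketch (names are ours):

```python
import numpy as np

def topk(x, k):
    """Biased top-k sparsifier: keep the k largest-magnitude entries."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

rng = np.random.default_rng(0)
d, k = 100, 10
x = rng.standard_normal(d)
cx = topk(x, k)

# Contraction: dropping all but the largest k of d entries removes at
# most a (1 - k/d) fraction of the energy, deterministically.
err = np.linalg.norm(x - cx) ** 2
assert err <= (1 - k / d) * np.linalg.norm(x) ** 2
assert np.count_nonzero(cx) == k
```

Only the k values and their indices need to be communicated, which is where the bandwidth saving comes from; the bias is what makes error feedback necessary.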
In this work, we focus on EF-based methods because of their capability of adopting a biased gradient compressor, which is typically contractive and thus more robust to gradient variance in the small-batch-size setting. The additional memory consumption of EFSGD over SGD is the local error vector, which is delicate and critical for ensuring convergence. Meanwhile, the mechanism designed for memory efficiency can only come with lightweight and negligible runtime overhead, otherwise it defeats the ultimate goal of communication efficient methods. Contractive error feedback is thus introduced to address these challenges. In a nutshell, our contributions can be summarized as: • To the best of our knowledge, this is the first work that systematically studies memory concerns in communication efficient methods. Contractive error feedback (ConEF) is introduced to effectively manage memory given the preference for an allreducable and biased gradient compressor. • Convergence of ConEF is established. Our theoretical results suggest a tradeoff between memory efficiency and faster convergence, and ConEF finds the sweet spot. • ConEF is capable of saving 80%–90% of the additional memory of EFSGD on various learning problems such as image classification, language modeling, and machine translation. With almost the same runtime as EFSGD, ConEF achieves 1.3x–5x speedup over SGD. Though test performance drops slightly on smaller models, the proposed method is more useful and reliable for larger networks, where improved test accuracy over EFSGD is observed. Notation. Bold lowercase (uppercase) letters denote vectors (matrices); ‖x‖ stands for the ℓ2 norm of x; and 〈x, y〉 denotes the inner product of x and y. In addition, we use [x]_i ([X]_{i,j}) to denote the i-th entry of vector x (the (i,j)-th entry of matrix X). 2 PRELIMINARIES. This section briefly recaps EFSGD (Stich et al., 2018; Karimireddy et al., 2019), listed under Alg. 1.
We use g_t^i to denote the stochastic gradient at x_t on worker i, and assume that the stochastic gradients are mutually independent among different workers per iteration.

Algorithm 1 EFSGD
1: Initialize: x_0 ∈ R^d; e_0^i = 0 ∈ R^d, ∀i; step size η
2: for t = 0, 1, ..., T−1 do
3:   assert x_t = x_t^i for every worker i
4:   for worker i = 1, ..., N in parallel do
5:     p_t^i = η g_t^i + e_t^i
6:     Δ_t^i = Q(p_t^i)
7:     Δ_t = Aggregate(Δ_t^i, ∀i)
8:     x_{t+1} = x_t − Δ_t
9:     e_{t+1}^i = p_t^i − Δ_t^i
10:  end for
11: end for

Algorithm 2 ConEF
Identical to Algorithm 1, except that line 9 becomes e_{t+1}^i = C(p_t^i − Δ_t^i).

In line 5, the scaled stochastic gradient is augmented by the accumulated compression error e_t^i, then compressed and communicated. There is no restriction on the biasedness of the gradient compressor Q; examples of Q include (scaled) sign, random/top-k, and PowerSGD. The "Aggregate" in line 7 refers to Δ_t = (1/N) Σ_i Δ_t^i, and the compression error e_{t+1}^i is recomputed in line 9 after updating the model x_t. More implementation details on the communication of gradients, and the reasons for preferring AllReduce, are deferred to Apdx. A.1 to save space. Besides the widely appreciated allreducable gradient compressors, biased ones are better tailored for settings where the inclination is to rely on a smaller batch size for memory saving, because biased gradient compressors are usually contractive; see more details shortly in Assumption 4. However, due to the need for error feedback to ensure convergence, the additional memory consumption of e_t^i can confine the applicability of biased compressors as well as EFSGD in parsimonious setups, and this issue has been somewhat overlooked by existing works.
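The EFSGD/ConEF loop can be simulated on a single machine. The sketch below uses top-k as the gradient compressor Q and, for simplicity, an unscaled random-k as the error compressor C (the paper prefers an unbiased C; this is our simplification), on a toy quadratic loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def topk(v, k):                       # gradient compressor Q (biased)
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def randk(v, k):                      # error compressor C (unscaled, biased)
    out = np.zeros_like(v)
    idx = rng.choice(v.size, k, replace=False)
    out[idx] = v[idx]
    return out

# Toy setup: N workers minimizing f(x) = 0.5 * ||x||^2, noisy gradients.
N, d, k, eta, T = 4, 50, 5, 0.1, 600
x = rng.standard_normal(d)
e = [np.zeros(d) for _ in range(N)]   # per-worker compressed error state

for t in range(T):
    deltas = []
    for i in range(N):
        g = x + 0.01 * rng.standard_normal(d)  # stochastic gradient g_t^i
        p = eta * g + e[i]                     # line 5: add error feedback
        delta = topk(p, k)                     # line 6: compress the update
        e[i] = randk(p - delta, k)             # line 9 (ConEF): compress error
        deltas.append(delta)
    x = x - np.mean(deltas, axis=0)            # lines 7-8: aggregate, update

assert np.linalg.norm(x) < 0.5                 # far below the initial ~7
```

Storing `e[i]` in a sparse/compressed format rather than as a dense d-vector is exactly the memory saving ConEF targets; here it is kept dense only for readability.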
Other error feedback variants, e.g., (Wu et al., 2018; Basu et al., 2019; Xu et al., 2021; Richtárik et al., 2021), also rely on additional vectors for the compression error, and hence may benefit from the proposed technique as well. 3 MEMORY SAVING VIA CONTRACTIVE ERROR FEEDBACK. To endow EFSGD with memory efficiency, contractive error feedback is introduced in this section. The key idea is to apply another compressor C to the error vector e_t^i so that the memory footprint is reduced. The proposed method is summarized in Alg. 2, where the name contractive error feedback (ConEF) originates from the fact that the error vector is represented in its compressed format. Note that the encoding and decoding steps of compression in Alg. 2 are omitted for notational convenience. Q and C denote the gradient and error compressors, respectively, to highlight their different roles. The gradient compressor Q is general, and a biased one is recommended for small batchsizes, where it can squeeze out more memory. An unbiased error compressor C is preferable since it helps generalization, as we discuss later in Section 4. Moreover, the implementation of C has to be lightweight to avoid slowing down the runtime. We focus on theoretical properties first; practical guidance for the choice of C is deferred to Section 4. | The paper introduces a two-stage error correction procedure for approximated (doubly compressed) gradients. Multiple theorems are derived to prove convergence and scaling properties. The algorithm can be combined with any compression algorithm for the gradient. The algorithm is applied to CIFAR-10 with favorable results. | SP:a3fe920d6dd7db5c3612f958c784868459dea8e8 |
Contractive error feedback for gradient compression | 1 INTRODUCTION. Many modern machine learning tasks can be cast as an optimization problem

min_{x∈R^d} f(x) := (1/N) Σ_{i=1}^{N} E_{ξ∼D}[f_i(x, ξ)].   (1)

Typically, the possibly nonconvex loss function f_i(x, ξ) is continuously differentiable with respect to (w.r.t.) x. We assume that f∗ > −∞ is a global minimum of (1). In this work, we focus on training neural networks (NNs) by solving (1) in two important scenarios: i) data parallelism in a multi-GPU setting, and ii) federated learning, where clients could be edge devices such as smartphones. In both cases, N denotes the number of workers (GPUs or devices), and f∗ is no smaller than 0 given common choices of loss function such as cross entropy and mean square error. While training NNs is a resource-hungry task, bandwidth and memory prevent us from fully harvesting the merits of parallelism. Limited bandwidth slows down the exchange of information (e.g., gradients) among workers. In multi-GPU settings, it is observed that communication of gradients becomes a bottleneck that significantly drags down the training speed, while for federated learning, the system capacity is usually confined by the communication latency between the server and workers. Memory constraints are less studied than bandwidth (Sohoni et al., 2019). However, given the trend of increasing model size in both computer vision and natural language processing, e.g., ViT (0.6B) (Dosovitskiy et al., 2020), Megatron-LM (8.3B) (Shoeybi et al., 2019), and T5 (11B) (Raffel et al., 2019), memory is becoming an unavoidable constraint for training large models on the current GPU generation with 16/32GB of memory. For federated learning, it is also necessary to reduce the memory footprint on workers, especially for IoT applications.
In this work, we focus on the parsimonious setting, jointly dealing with limited bandwidth and memory. The priority is still communication efficiency for speeding up training, since otherwise one could simply rely on SGD with almost no additional memory consumption. Many existing works are unable to cope with such a challenging problem because memory is not carefully accounted for. Communication efficiency. A well-documented approach to alleviating the communication bottleneck is to compress the (stochastic) gradients so that training can be made faster with reduced overhead (Seide et al., 2014; Alistarh et al., 2017; Karimireddy et al., 2019). Depending on whether the gradient compressor is biased, these schemes are categorized as follows. Unbiased compressors maintain the unbiasedness of stochastic gradients at the price of enlarged variance. For example, uniform quantization of stochastic gradients is studied in QSGD (Alistarh et al., 2017). A concurrent work (Wen et al., 2017) focuses on a special case of QSGD that quantizes each entry of the gradient into {±1, 0}. Other variants include natural quantization (Horvath et al., 2019), non-uniform quantization (Ramezani-Kebrya et al., 2021), and adaptive quantization (Faghri et al., 2020). Another route to an unbiased compressor is (scaled) gradient sparsification (Wangni et al., 2018) or its generalized form, atomic decomposition (Wang et al., 2018). However, such methods may be slow in practice and degrade performance (Vogels et al., 2019). Biased gradient compressors, on the other hand, have been more successful in practice: not only do they support a more aggressive compression ratio, but test accuracy comparable with SGD can also be obtained in many scenarios. Examples of such compressors include top-k or (unscaled) random-k sparsification (Lin et al., 2018; Alistarh et al., 2019), signSGD (Bernstein et al.
, 2018a;b), and PowerSGD (Vogels et al., 2019). Due to the bias, such compressors typically rely on error feedback (EF) schemes to ensure convergence (Stich et al., 2018; Karimireddy et al., 2019). Memory concerns in communication-efficient methods. Memory concerns are highly entangled with communication efficiency in distributed training. On the one hand, the memory footprint can be alleviated through a smaller batchsize (Krause et al., 2016; Spring et al., 2019), leading to more iterations and communication rounds per epoch; hence, communication-efficient methods are particularly useful for accelerating training in such cases. On the other hand, existing communication-efficient methods are challenged by limited memory for several reasons. First, a smaller batchsize, which enlarges the variance, degrades the applicability of unbiased gradient compressors. As confirmed in our experiments, the exploded variance typically calls for a small step size, which decelerates convergence. Unbiased gradient compressors such as quantization are also hindered by the need for AllGather (Xu et al., 2020), since peak memory proportional to the number of workers is required. This renders them inefficient for the parsimonious setting, especially when the number of workers is large. Memory is also not handled well in methods with biased gradient compressors due to the need for error feedback, where additional memory equal to the model size has to be consumed to keep track of the accumulated compression error. Note that although Local SGD (Stich, 2019) introduces no additional memory footprint, it suffers from the blown-up variance problem as well. More importantly, Local SGD cannot be integrated with ZeRO 3 (Rajbhandari et al., 2020), the state-of-the-art method for coping with limited memory, because optimizer states are partitioned among workers.
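The unbiased quantizers surveyed above keep the gradient correct in expectation at the price of variance. A quick sketch in the spirit of the {±1, 0} scheme of Wen et al.; the ‖x‖∞ scaling convention here is our assumption, chosen so that unbiasedness holds entrywise:

```python
import numpy as np

def ternary_quantize(x, rng):
    """Unbiased ternary quantization: entry i becomes ||x||_inf * sign(x_i)
    with probability |x_i| / ||x||_inf, and 0 otherwise, so E[Q(x)] = x."""
    s = np.max(np.abs(x))
    prob = np.abs(x) / s
    mask = rng.random(x.shape) < prob
    return s * np.sign(x) * mask

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2, 0.1, 2.0])
# Average many independent compressions: the mean should recover x.
mean_q = np.mean([ternary_quantize(x, rng) for _ in range(20000)], axis=0)
```

Each output needs only one float (the scale) plus two bits per entry, but the per-entry variance is s·|x_i| − x_i², which is exactly the enlarged variance the text warns about in small-batchsize regimes.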
In this work, we focus on EF-based methods because they can adopt a biased gradient compressor, which is typically contractive and thus more robust to the gradient variance in a small-batchsize setting. The additional memory consumption of EFSGD over SGD is the local error vector, which is delicate and critical for ensuring convergence. Meanwhile, any mechanism designed for memory efficiency must incur only lightweight, negligible runtime overhead; otherwise it defeats the ultimate goal of communication-efficient methods. Contractive error feedback is thus introduced to address these challenges. In a nutshell, our contributions can be summarized as: • To the best of our knowledge, this is the first work that systematically studies memory concerns in communication-efficient methods. Contractive error feedback (ConEF) is introduced to effectively manage memory given the preference for an allreducable and biased gradient compressor. • Convergence of ConEF is established. Our theoretical results suggest a tradeoff between memory efficiency and convergence speed, and ConEF finds the sweet spot. • ConEF is capable of saving 80%–90% of the additional memory of EFSGD on various learning problems such as image classification, language modeling, and machine translation. With almost the same runtime as EFSGD, ConEF achieves a 1.3x–5x speedup over SGD. Though test performance drops slightly on smaller models, the proposed method is more useful and reliable for larger networks, where improved test accuracy over EFSGD is observed. Notation. Bold lowercase (uppercase) letters denote vectors (matrices); ‖x‖ stands for the ℓ2 norm of x; and 〈x, y〉 denotes the inner product of x and y. In addition, we use [x]_i ([X]_{i,j}) to denote the i-th entry of vector x (the (i,j)-th entry of matrix X). 2 PRELIMINARIES. This section briefly recaps EFSGD (Stich et al., 2018; Karimireddy et al., 2019), listed as Alg. 1.
We use g_t^i to denote the stochastic gradient at x_t on worker i, and assume that the stochastic gradients are mutually independent across workers in each iteration.

Algorithm 1 EFSGD
1: Initialize: x_0 ∈ R^d; e_0^i = 0 ∈ R^d, ∀i; step size η
2: for t = 0, 1, . . . , T − 1 do
3:   assert x_t = x_t^i for every worker i
4:   for worker i = 1, . . . , N in parallel do
5:     p_t^i = η g_t^i + e_t^i
6:     ∆_t^i = Q(p_t^i)
7:     ∆_t = Aggregate(∆_t^i, ∀i)
8:     x_{t+1} = x_t − ∆_t
9:     e_{t+1}^i = p_t^i − ∆_t^i
10:  end for
11: end for

Algorithm 2 ConEF
1: Initialize: x_0 ∈ R^d; e_0^i = 0 ∈ R^d, ∀i; step size η
2: for t = 0, 1, . . . , T − 1 do
3:   assert x_t = x_t^i for every worker i
4:   for worker i = 1, . . . , N in parallel do
5:     p_t^i = η g_t^i + e_t^i
6:     ∆_t^i = Q(p_t^i)
7:     ∆_t = Aggregate(∆_t^i, ∀i)
8:     x_{t+1} = x_t − ∆_t
9:     e_{t+1}^i = C(p_t^i − ∆_t^i)
10:  end for
11: end for

In line 5, the scaled stochastic gradient is augmented by the accumulated compression error e_t^i, then compressed and communicated. There is no restriction on the biasedness of the gradient compressor Q; examples of Q include (scaled) sign, random-/top-k, and PowerSGD. The "Aggregate" in line 7 refers to ∆_t = (1/N) Σ_i ∆_t^i, and the compression error e_{t+1}^i is recomputed in line 9 after updating the model x_t. More implementation details on the communication of gradients, and the reasons for preferring AllReduce, are deferred to Apdx. A.1 to save space. Beyond the widely appreciated allreducable gradient compressors, the biased ones are better tailored to settings where a smaller batchsize is preferred for memory saving, because biased gradient compressors are usually contractive; see Assumption 4 shortly for more details. However, due to the need for error feedback to ensure convergence, the additional memory consumption of e_t^i can confine the applicability of biased compressors, as well as of EFSGD, in parsimonious setups, and this issue has been largely overlooked by existing works.
Other error feedback variants, e.g., (Wu et al., 2018; Basu et al., 2019; Xu et al., 2021; Richtárik et al., 2021), also rely on additional vectors for the compression error, and hence may benefit from the proposed technique as well. 3 MEMORY SAVING VIA CONTRACTIVE ERROR FEEDBACK. To endow EFSGD with memory efficiency, contractive error feedback is introduced in this section. The key idea is to apply another compressor C to the error vector e_t^i so that the memory footprint is reduced. The proposed method is summarized in Alg. 2, where the name contractive error feedback (ConEF) originates from the fact that the error vector is represented in its compressed format. Note that the encoding and decoding steps of compression in Alg. 2 are omitted for notational convenience. Q and C denote the gradient and error compressors, respectively, to highlight their different roles. The gradient compressor Q is general, and a biased one is recommended for small batchsizes, where it can squeeze out more memory. An unbiased error compressor C is preferable since it helps generalization, as we discuss later in Section 4. Moreover, the implementation of C has to be lightweight to avoid slowing down the runtime. We focus on theoretical properties first; practical guidance for the choice of C is deferred to Section 4. | Motivated by the memory limitations of large-scale training, this paper proposes a new modification to the error-feedback algorithm for parallel optimization. In particular, the authors suggest compressing the error vector with a separate compressor, which results in the ConEF algorithm. Under the assumption that this compressor is unbiased and has compression ratio theta, the authors prove that the convergence rate has some terms of the same order as in error feedback and others get multiplied by a factor of theta.
In addition, the authors propose the iConEF algorithms that introduce an additional error feedback for the extra compressor. This leads to yet another sequence of compressed vectors that need to be maintained. On the other hand, if theta<1 (or delta<1), the rate will become theta (or delta) times better. | SP:a3fe920d6dd7db5c3612f958c784868459dea8e8 |
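The ConEF recursion (Alg. 2 above) can be sketched for a single worker. The concrete compressor choices (top-k for Q, scaled random-k for C, which is unbiased), the quadratic objective, and all constants are our illustrative assumptions:

```python
import numpy as np

def top_k(v, k):
    """Biased gradient compressor Q: keep the k largest-magnitude entries."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def random_k(v, k, rng):
    """Unbiased scaled random-k error compressor C: E[C(v)] = v."""
    d = v.size
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (d / k)   # rescaling keeps the compressor unbiased
    return out

def conef(grad_fn, x0, eta=0.05, kq=2, kc=2, steps=1500, seed=0):
    """ConEF sketch, single worker: line 9 stores C(p - Q(p)), a compressed
    (here: sparse) error vector, instead of the dense residual p - Q(p)."""
    rng = np.random.default_rng(seed)
    x, e = x0.astype(float).copy(), np.zeros_like(x0, dtype=float)
    for _ in range(steps):
        p = eta * grad_fn(x) + e          # line 5
        delta = top_k(p, kq)              # line 6
        x = x - delta                     # line 8 (single worker: no averaging)
        e = random_k(p - delta, kc, rng)  # line 9: compressed error memory
    return x

x_star = np.array([1.0, -2.0, 3.0, 0.5, -1.5])
x_out = conef(lambda x: x - x_star, np.zeros(5))
```

Storing only kc of d error coordinates is where the memory saving comes from; in a real implementation one would keep `e` in a sparse or sketched format rather than a dense array, which this toy sketch does not bother with.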
Contractive error feedback for gradient compression | 1 INTRODUCTION. Many modern machine learning tasks can be cast as an optimization problem

min_{x∈R^d} f(x) := (1/N) Σ_{i=1}^{N} E_{ξ∼D}[f_i(x, ξ)].   (1)

Typically, the possibly nonconvex loss function f_i(x, ξ) is continuously differentiable with respect to (w.r.t.) x. We assume that f∗ > −∞ is a global minimum of (1). In this work, we focus on training neural networks (NNs) by solving (1) in two important scenarios: i) data parallelism in a multi-GPU setting, and ii) federated learning, where clients could be edge devices such as smartphones. In both cases, N denotes the number of workers (GPUs or devices), and f∗ is no smaller than 0 given common choices of loss function such as cross entropy and mean square error. While training NNs is a resource-hungry task, bandwidth and memory prevent us from fully harvesting the merits of parallelism. Limited bandwidth slows down the exchange of information (e.g., gradients) among workers. In multi-GPU settings, it is observed that communication of gradients becomes a bottleneck that significantly drags down the training speed, while for federated learning, the system capacity is usually confined by the communication latency between the server and workers. Memory constraints are less studied than bandwidth (Sohoni et al., 2019). However, given the trend of increasing model size in both computer vision and natural language processing, e.g., ViT (0.6B) (Dosovitskiy et al., 2020), Megatron-LM (8.3B) (Shoeybi et al., 2019), and T5 (11B) (Raffel et al., 2019), memory is becoming an unavoidable constraint for training large models on the current GPU generation with 16/32GB of memory. For federated learning, it is also necessary to reduce the memory footprint on workers, especially for IoT applications.
In this work, we focus on the parsimonious setting, jointly dealing with limited bandwidth and memory. The priority is still communication efficiency for speeding up training, since otherwise one could simply rely on SGD with almost no additional memory consumption. Many existing works are unable to cope with such a challenging problem because memory is not carefully accounted for. Communication efficiency. A well-documented approach to alleviating the communication bottleneck is to compress the (stochastic) gradients so that training can be made faster with reduced overhead (Seide et al., 2014; Alistarh et al., 2017; Karimireddy et al., 2019). Depending on whether the gradient compressor is biased, these schemes are categorized as follows. Unbiased compressors maintain the unbiasedness of stochastic gradients at the price of enlarged variance. For example, uniform quantization of stochastic gradients is studied in QSGD (Alistarh et al., 2017). A concurrent work (Wen et al., 2017) focuses on a special case of QSGD that quantizes each entry of the gradient into {±1, 0}. Other variants include natural quantization (Horvath et al., 2019), non-uniform quantization (Ramezani-Kebrya et al., 2021), and adaptive quantization (Faghri et al., 2020). Another route to an unbiased compressor is (scaled) gradient sparsification (Wangni et al., 2018) or its generalized form, atomic decomposition (Wang et al., 2018). However, such methods may be slow in practice and degrade performance (Vogels et al., 2019). Biased gradient compressors, on the other hand, have been more successful in practice: not only do they support a more aggressive compression ratio, but test accuracy comparable with SGD can also be obtained in many scenarios. Examples of such compressors include top-k or (unscaled) random-k sparsification (Lin et al., 2018; Alistarh et al., 2019), signSGD (Bernstein et al.
, 2018a;b), and PowerSGD (Vogels et al., 2019). Due to the bias, such compressors typically rely on error feedback (EF) schemes to ensure convergence (Stich et al., 2018; Karimireddy et al., 2019). Memory concerns in communication-efficient methods. Memory concerns are highly entangled with communication efficiency in distributed training. On the one hand, the memory footprint can be alleviated through a smaller batchsize (Krause et al., 2016; Spring et al., 2019), leading to more iterations and communication rounds per epoch; hence, communication-efficient methods are particularly useful for accelerating training in such cases. On the other hand, existing communication-efficient methods are challenged by limited memory for several reasons. First, a smaller batchsize, which enlarges the variance, degrades the applicability of unbiased gradient compressors. As confirmed in our experiments, the exploded variance typically calls for a small step size, which decelerates convergence. Unbiased gradient compressors such as quantization are also hindered by the need for AllGather (Xu et al., 2020), since peak memory proportional to the number of workers is required. This renders them inefficient for the parsimonious setting, especially when the number of workers is large. Memory is also not handled well in methods with biased gradient compressors due to the need for error feedback, where additional memory equal to the model size has to be consumed to keep track of the accumulated compression error. Note that although Local SGD (Stich, 2019) introduces no additional memory footprint, it suffers from the blown-up variance problem as well. More importantly, Local SGD cannot be integrated with ZeRO 3 (Rajbhandari et al., 2020), the state-of-the-art method for coping with limited memory, because optimizer states are partitioned among workers.
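The contractiveness of biased compressors mentioned above (and detailed in the paper's Assumption 4) is the property ‖x − C(x)‖² ≤ (1 − δ)‖x‖² for some δ ∈ (0, 1]. Top-k satisfies it with δ = k/d, since the residual keeps only the d − k smallest squared entries. A quick numerical check (the check itself is ours, not from the paper):

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(1)
d, k = 100, 10
violations = 0
for _ in range(1000):
    x = rng.normal(size=d)
    residual = x - top_k(x, k)
    # Contraction bound: ||x - top_k(x)||^2 <= (1 - k/d) ||x||^2
    if residual @ residual > (1 - k / d) * (x @ x) + 1e-12:
        violations += 1
```

The bound never fails because the average of the d − k smallest squared entries cannot exceed the average of all d squared entries.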
In this work, we focus on EF-based methods because they can adopt a biased gradient compressor, which is typically contractive and thus more robust to the gradient variance in a small-batchsize setting. The additional memory consumption of EFSGD over SGD is the local error vector, which is delicate and critical for ensuring convergence. Meanwhile, any mechanism designed for memory efficiency must incur only lightweight, negligible runtime overhead; otherwise it defeats the ultimate goal of communication-efficient methods. Contractive error feedback is thus introduced to address these challenges. In a nutshell, our contributions can be summarized as: • To the best of our knowledge, this is the first work that systematically studies memory concerns in communication-efficient methods. Contractive error feedback (ConEF) is introduced to effectively manage memory given the preference for an allreducable and biased gradient compressor. • Convergence of ConEF is established. Our theoretical results suggest a tradeoff between memory efficiency and convergence speed, and ConEF finds the sweet spot. • ConEF is capable of saving 80%–90% of the additional memory of EFSGD on various learning problems such as image classification, language modeling, and machine translation. With almost the same runtime as EFSGD, ConEF achieves a 1.3x–5x speedup over SGD. Though test performance drops slightly on smaller models, the proposed method is more useful and reliable for larger networks, where improved test accuracy over EFSGD is observed. Notation. Bold lowercase (uppercase) letters denote vectors (matrices); ‖x‖ stands for the ℓ2 norm of x; and 〈x, y〉 denotes the inner product of x and y. In addition, we use [x]_i ([X]_{i,j}) to denote the i-th entry of vector x (the (i,j)-th entry of matrix X). 2 PRELIMINARIES. This section briefly recaps EFSGD (Stich et al., 2018; Karimireddy et al., 2019), listed as Alg. 1.
We use g_t^i to denote the stochastic gradient at x_t on worker i, and assume that the stochastic gradients are mutually independent across workers in each iteration.

Algorithm 1 EFSGD
1: Initialize: x_0 ∈ R^d; e_0^i = 0 ∈ R^d, ∀i; step size η
2: for t = 0, 1, . . . , T − 1 do
3:   assert x_t = x_t^i for every worker i
4:   for worker i = 1, . . . , N in parallel do
5:     p_t^i = η g_t^i + e_t^i
6:     ∆_t^i = Q(p_t^i)
7:     ∆_t = Aggregate(∆_t^i, ∀i)
8:     x_{t+1} = x_t − ∆_t
9:     e_{t+1}^i = p_t^i − ∆_t^i
10:  end for
11: end for

Algorithm 2 ConEF
1: Initialize: x_0 ∈ R^d; e_0^i = 0 ∈ R^d, ∀i; step size η
2: for t = 0, 1, . . . , T − 1 do
3:   assert x_t = x_t^i for every worker i
4:   for worker i = 1, . . . , N in parallel do
5:     p_t^i = η g_t^i + e_t^i
6:     ∆_t^i = Q(p_t^i)
7:     ∆_t = Aggregate(∆_t^i, ∀i)
8:     x_{t+1} = x_t − ∆_t
9:     e_{t+1}^i = C(p_t^i − ∆_t^i)
10:  end for
11: end for

In line 5, the scaled stochastic gradient is augmented by the accumulated compression error e_t^i, then compressed and communicated. There is no restriction on the biasedness of the gradient compressor Q; examples of Q include (scaled) sign, random-/top-k, and PowerSGD. The "Aggregate" in line 7 refers to ∆_t = (1/N) Σ_i ∆_t^i, and the compression error e_{t+1}^i is recomputed in line 9 after updating the model x_t. More implementation details on the communication of gradients, and the reasons for preferring AllReduce, are deferred to Apdx. A.1 to save space. Beyond the widely appreciated allreducable gradient compressors, the biased ones are better tailored to settings where a smaller batchsize is preferred for memory saving, because biased gradient compressors are usually contractive; see Assumption 4 shortly for more details. However, due to the need for error feedback to ensure convergence, the additional memory consumption of e_t^i can confine the applicability of biased compressors, as well as of EFSGD, in parsimonious setups, and this issue has been largely overlooked by existing works.
Other error feedback variants, e.g., (Wu et al., 2018; Basu et al., 2019; Xu et al., 2021; Richtárik et al., 2021), also rely on additional vectors for the compression error, and hence may benefit from the proposed technique as well. 3 MEMORY SAVING VIA CONTRACTIVE ERROR FEEDBACK. To endow EFSGD with memory efficiency, contractive error feedback is introduced in this section. The key idea is to apply another compressor C to the error vector e_t^i so that the memory footprint is reduced. The proposed method is summarized in Alg. 2, where the name contractive error feedback (ConEF) originates from the fact that the error vector is represented in its compressed format. Note that the encoding and decoding steps of compression in Alg. 2 are omitted for notational convenience. Q and C denote the gradient and error compressors, respectively, to highlight their different roles. The gradient compressor Q is general, and a biased one is recommended for small batchsizes, where it can squeeze out more memory. An unbiased error compressor C is preferable since it helps generalization, as we discuss later in Section 4. Moreover, the implementation of C has to be lightweight to avoid slowing down the runtime. We focus on theoretical properties first; practical guidance for the choice of C is deferred to Section 4. | The authors propose to compress the local error in communication-efficient distributed training to reduce the memory to store the local error. The paper also proposes to compress the local error twice, which doubles the memory cost to store the local error but improves the convergence compared with compressing once. Experiments show the training performance of the proposed method with different compression ratios. But the settings are unrealistic and cannot validate the claims. | SP:a3fe920d6dd7db5c3612f958c784868459dea8e8 |
DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations | 1 INTRODUCTION. Reinforcement learning (RL) (Sutton et al., 1998) aims to learn an intelligent behavioral strategy based on reward feedback. Although RL has achieved remarkable success in many challenging domains, its practicality and applicability are still limited in two respects. First, we need to specify the reward function, which may be non-trivial in many real-world problems that require complex decision making. Second, the standard RL setting assumes online interaction with the environment during the intermediate stages of learning, which is infeasible for mission-critical tasks. Imitation learning (IL) (Pomerleau, 1991; Ng & Russell, 2000) addresses the first limitation of RL: the agent is trained to mimic the expert from demonstrations instead of specifying the reward function. It is well known that adopting supervised learning for training the imitating agent, commonly referred to as behavioral cloning (BC), is vulnerable to distribution drift (Ross et al., 2011). Thus, most successful IL algorithms rely on online experiences collected from the environment by executing intermediate policies during training. Recent progress made by adversarial imitation learning (AIL) (Ho & Ermon, 2016; Ke et al., 2019; Kostrikov et al., 2020), achieving state-of-the-art results on challenging imitation tasks, still relies on such an online training paradigm. Unfortunately, in many realistic tasks such as robotic manipulation and autonomous driving, online interactions are either costly or dangerous. Offline RL (Fujimoto et al., 2019; Kumar et al., 2019; 2020; Levine et al., 2020; Wang et al., 2020; Lee et al., 2021; Kostrikov et al., 2021) aims to address these concerns by training the agent from a pre-collected set of experiences without online interactions.
To prevent issues caused by distributional shift, offline RL algorithms mitigate the phenomenon by constraining the shift or by making a conservative evaluation of the policy being learned. In this paper, we are concerned with offline IL problems. Finding an effective algorithm for these problems is tricky. For instance, naively extending offline RL algorithms (which assume a reward function) to the offline IL setting does not work. In practice, expert demonstrations are scarce due to the high cost of obtaining them. Thus, they typically cover only a small fraction of the state and action spaces, which in turn makes the distribution drift issue even more pronounced compared with the standard offline RL setting with a reward function. We mitigate this issue by assuming a large number of supplementary imperfect demonstrations, without requiring any level of optimality for these imperfect demonstrations; they may contain expert or near-expert trajectories (Wu et al., 2019; Wang et al., 2021) as well as non-expert ones all together. This generality covers situations from real-world applications, but at the same time, it poses a significant challenge for the design of a successful offline IL algorithm. In this paper, we propose DemoDICE, a novel model-free algorithm for offline IL from expert and imperfect demonstrations. We formulate an offline IL objective that not only mitigates distribution shift from the demonstration-data distribution but also naturally utilizes imperfect demonstrations. Our new formulation allows us to compute a closed-form solution, which learns a policy in the space of stationary distributions but suffers from instability in practice. We tackle the issue by proposing an alternative objective, which leads to a stable algorithm in practice while keeping the optimal stationary distribution.
Finally, we introduce a method to extract the expert policy from a learned stationary distribution in a simple yet effective way. Our extensive evaluations show that DemoDICE achieves performance competitive with or better than a state-of-the-art off-policy IL algorithm on offline-IL tasks with expert and imperfect demonstrations. 2 PRELIMINARIES. 2.1 MARKOV DECISION PROCESS (MDP). We assume an environment modeled as a Markov Decision Process (MDP), defined by the tuple M = 〈S, A, T, R, p_0, γ〉, where S is the set of states, A is the set of actions, T : S × A → ∆(S) gives the probability p(s_{t+1}|s_t, a_t) of transitioning from state s_t to state s_{t+1} by executing action a_t at timestep t, R : S × A → R is the reward function, p_0 ∈ ∆(S) is the distribution of the initial state s_0, and γ ∈ [0, 1] is the discount factor. A policy π : S → ∆(A) of MDP M is a mapping from states of M to distributions over actions. For a given policy π, the stationary distribution d^π is defined as follows:

d^π(s, a) = (1 − γ) Σ_{t=0}^{∞} γ^t p(s_t = s, a_t = a | s_0 ∼ p_0(·), s_t ∼ T(·|s_{t−1}, a_{t−1}), a_t ∼ π(·|s_t)).

We assume a precollected dataset D^E of (s, a, s′) tuples generated by the expert, and a precollected imperfect dataset D^I (generated with unknown degrees of optimality). More precisely, for the (underlying) expert policy's stationary distribution d^E(s, a), we assume that (s, a, s′) ∈ D^E is sampled as (s, a) ∼ d^E, s′ ∼ T(·|s, a). We define D^U = D^E ∪ D^I, the union of the two datasets, and denote the corresponding stationary distribution of the dataset D^U as d^U. We call trajectories generated by the expert policy expert trajectories, and trajectories generated by non-expert policies non-expert trajectories. The expert demonstrations consist only of expert trajectories, and the imperfect demonstrations consist of a mixture of expert and non-expert trajectories.
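In the tabular case, the stationary (discounted occupancy) distribution defined above can be computed exactly by solving the linear Bellman-flow system; a sketch on a hypothetical 2-state, 2-action MDP (all numbers below are made up for illustration):

```python
import numpy as np

gamma = 0.9
nS, nA = 2, 2
# Hypothetical MDP: T[s, a, s'] = T(s'|s, a), p0 = initial state distribution.
T = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
p0 = np.array([1.0, 0.0])
pi = np.array([[0.7, 0.3], [0.4, 0.6]])   # pi[s, a] = pi(a|s)

# State transition matrix under pi: P[s, s'] = sum_a pi(a|s) T(s'|s, a)
P = np.einsum('sa,sap->sp', pi, T)
# State occupancy mu solves mu = (1 - gamma) p0 + gamma P^T mu.
mu = np.linalg.solve(np.eye(nS) - gamma * P.T, (1 - gamma) * p0)
# State-action occupancy: d^pi(s, a) = mu(s) pi(a|s)
d_pi = mu[:, None] * pi
```

By construction d^π sums to 1, is nonnegative, and satisfies the Bellman flow constraint Σ_a d(s, a) = (1 − γ) p_0(s) + γ Σ_{s̄,ā} T(s|s̄, ā) d(s̄, ā) for every state s.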
In this paper, we assume that the quality of the imperfect demonstrations is unknown. 2.2 IMITATION LEARNING. Behavior cloning (BC) is a classical IL approach, which attempts to find a function that maps s to a via supervised learning. Standard BC finds a policy π by minimizing the negative log-likelihood:

min_π J_BC(π) := min_π − (1/|D|) Σ_{(s,a)∈D} log π(a|s).   (1)

However, it is known to be brittle (Ross et al., 2011) when the interaction with the environment deviates from the scarce trajectories in D^E; in such cases, BC fails to recover expert policies. One notable approach to IL is to formulate the problem as distribution matching (Ho & Ermon, 2016; Ke et al., 2019; Kostrikov et al., 2020). When instantiated with the KL divergence widely used in previous IL works (Ke et al., 2019; Kostrikov et al., 2020), the approach amounts to finding a policy π by optimizing the following objective:

max_π −D_KL(d^π‖d^E) = E_{(s,a)∼d^π}[log(d^E(s, a) / d^π(s, a))].   (2)

Since we cannot directly access the exact values of d^E(s, a) and d^π(s, a), we estimate their ratio using samples from d^E(s, a) and d^π(s, a), as follows:

max_{c : S×A→(0,1)} E_{(s,a)∼d^E}[log c(s, a)] + E_{(s,a)∼d^π}[log(1 − c(s, a))].   (3)

Here, the optimal discriminator c∗ recovers log c∗(s, a) − log(1 − c∗(s, a)) = log(d^E(s, a) / d^π(s, a)). Based on this connection between generative adversarial networks (GANs) and IL, AIL algorithms focus on recovering the expert policy (Ho & Ermon, 2016; Kostrikov et al., 2019). However, the agent obtains samples from d^π through interaction with the environment, which is impossible in the offline setting. Therefore, to tackle offline IL problems, we should derive an alternative estimator that does not require on-policy samples. 3 DEMODICE.
In this section , we present a novel model-free offline IL algorithm named offline imitation learning using additional imperfect Demonstrations via stationary DIstribution Correction Estimation ( DemoDICE ) . Starting from a regularized offline IL objective which accords with offline RL algorithms , we present a formulation that does not require on-policy samples . Such a formulation allows us to construct a nested optimization for offline IL from expert and imperfect demonstrations ( Section 3.1 ) . Then , we derive the closed-form solution to the sub-problem of the aforementioned optimization and obtain a simple convex optimization objective ( Section 3.2 ) . Since this objective is unstable in practice , we transform it into an alternative yet still convex objective ( Section 3.3 ) . Finally , we show how to extract the policy from the learned correction term ( Section 3.4 ) . 3.1 TRANSFORMING CONSTRAINED OPTIMIZATION INTO NESTED OPTIMIZATION . In the context of offline RL , most works use expected return maximization with some regularization to overcome the extrapolation error in offline settings ( Fujimoto et al. , 2019 ; Kumar et al. , 2019 ; Nachum et al. , 2019b ; Kumar et al. , 2020 ; Lee et al. , 2021 ) . In this work , we use KL divergence minimization between dπ and dE with KL-regularization : π∗ := arg max_π −DKL ( dπ‖dE ) − αDKL ( dπ‖dU ) , ( 4 ) where α ≥ 0 is a hyperparameter that controls the balance between minimizing the KL divergence with dE and preventing the deviation of dπ from dU . Many online AIL algorithms estimate the divergence between the expert and the current policy using on-policy samples , which are not available in the offline scenario .
In contrast , to construct a tractable optimization problem in the offline setting , we consider a problem equivalent to Equation 4 in terms of the stationary distribution d : max_d −DKL ( d‖dE ) − αDKL ( d‖dU ) ( 5 ) s.t . ∑_a d ( s , a ) = ( 1 − γ ) p0 ( s ) + γ ∑_{s̄ , ā} T ( s|s̄ , ā ) d ( s̄ , ā ) ∀s , ( 6 ) d ( s , a ) ≥ 0 ∀s , a . ( 7 ) The constraints ( 6-7 ) are called the Bellman flow constraints . The dual problem for the above constrained optimization problem is max_{d≥0} min_ν −DKL ( d‖dE ) − αDKL ( d‖dU ) + ∑_s ν ( s ) ( ( 1 − γ ) p0 ( s ) + γ ( T∗d ) ( s ) − ( B∗d ) ( s ) ) , ( 8 ) where the ν ( s ) are the Lagrange multipliers , ( B∗d ) ( s ) := ∑_a d ( s , a ) is the marginalization operator , and ( T∗d ) ( s ) := ∑_{s̄ , ā} T ( s|s̄ , ā ) d ( s̄ , ā ) is the transposed Bellman operator . We introduce the following derivation for the optimization ( 8 ) to obtain a tractable optimization in the offline setting : −DKL ( d‖dE ) − αDKL ( d‖dU ) + ∑_s ν ( s ) ( ( 1 − γ ) p0 ( s ) + γ ( T∗d ) ( s ) − ( B∗d ) ( s ) ) = ( 1 − γ ) E_{s∼p0} [ ν ( s ) ] + E_{( s , a ) ∼ d} [ γ ( T ν ) ( s , a ) − ν ( s ) − log ( d ( s , a ) / dE ( s , a ) ) − α log ( d ( s , a ) / dU ( s , a ) ) ] ( 9 ) = ( 1 − γ ) E_{p0} [ ν ( s ) ] + E_d [ γ ( T ν ) ( s , a ) − ν ( s ) + log ( dE ( s , a ) / dU ( s , a ) ) − ( 1 + α ) log ( d ( s , a ) / dU ( s , a ) ) ] , ( 10 ) where we define r ( s , a ) := log ( dE ( s , a ) / dU ( s , a ) ) and w ( s , a ) := d ( s , a ) / dU ( s , a ) . The equality in Equation 9 holds from the following properties of the transpose operators : ∑_s ν ( s ) ( B∗d ) ( s ) = ∑_{s , a} d ( s , a ) ( Bν ) ( s , a ) and ∑_s ν ( s ) ( T∗d ) ( s ) = ∑_{s , a} d ( s , a ) ( T ν ) ( s , a ) , where ( Bν ) ( s , a ) = ν ( s ) and ( T ν ) ( s , a ) = ∑_{s′} T ( s′|s , a ) ν ( s′ ) , under the assumption that dE ( s , a ) > 0 whenever d ( s , a ) > 0 ( Nachum et al. , 2019a ) . We introduce the log ratio r ( s , a ) in Equation 10 to avoid using log ( dE ( s , a ) / d ( s , a ) ) , which requires on-policy samples to estimate .
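The Bellman flow constraints (6-7) can be checked numerically: the discounted stationary distribution of any policy must satisfy them. A toy sketch for a two-state, two-action MDP (the specific transition matrix, policy, and function name below are made up for illustration):

```python
import numpy as np

def stationary_distribution(T, p0, pi, gamma, iters=2000):
    """d(s, a) = (1 - gamma) * sum_t gamma^t Pr(s_t = s, a_t = a).

    T:  transitions T(s'|s, a), shape (S, A, S)
    p0: initial state distribution, shape (S,)
    pi: policy pi(a|s), shape (S, A)
    """
    d = np.zeros_like(pi)
    rho = p0.copy()                               # state distribution at step t
    for t in range(iters):
        joint = rho[:, None] * pi                 # Pr(s_t = s, a_t = a)
        d += (1 - gamma) * gamma**t * joint
        rho = np.einsum('sa,sap->p', joint, T)    # propagate one step
    return d

gamma = 0.9
T = np.zeros((2, 2, 2))
T[0, 0] = [0.8, 0.2]; T[0, 1] = [0.1, 0.9]
T[1, 0] = [0.5, 0.5]; T[1, 1] = [0.3, 0.7]
p0 = np.array([1.0, 0.0])
pi = np.array([[0.6, 0.4], [0.2, 0.8]])

d = stationary_distribution(T, p0, pi, gamma)
# Constraint (6): sum_a d(s,a) = (1-gamma) p0(s) + gamma * (T* d)(s)
lhs = d.sum(axis=1)
rhs = (1 - gamma) * p0 + gamma * np.einsum('sap,sa->p', T, d)
print(np.max(np.abs(lhs - rhs)))  # ~0: the flow constraint holds
```

The same `einsum` pattern used for `rhs` is exactly the transposed Bellman operator ( T∗d ) defined in the dual problem above.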
Unlike log ( dE ( s , a ) / d ( s , a ) ) , we can estimate r ( s , a ) in the offline setting using dE and dU , as we will discuss in detail in the next section . We change the distribution used in the expectation of Equation 10 from d to dU by following the standard importance sampling trick as follows : ( 1 − γ ) E_{s∼p0} [ ν ( s ) ] + E_{( s , a ) ∼ d} [ r ( s , a ) + γ ( T ν ) ( s , a ) − ( Bν ) ( s , a ) − ( 1 + α ) log w ( s , a ) ] = ( 1 − γ ) E_{s∼p0} [ ν ( s ) ] + E_{( s , a ) ∼ dU} [ w ( s , a ) ( Aν ( s , a ) − ( 1 + α ) log w ( s , a ) ) ] =: L ( w , ν ; r ) , ( 11 ) where Aν ( s , a ) := r ( s , a ) + γ ( T ν ) ( s , a ) − ( Bν ) ( s , a ) is the advantage using ν . As an alternative , one can convert the expectation over d to dE instead of dU in Equation 10 by a similar application of the importance sampling trick : ( 1 − γ ) E_{s∼p0} [ ν ( s ) ] + E_{( s , a ) ∼ dE} [ exp ( −r ( s , a ) ) w ( s , a ) ( Aν ( s , a ) − ( 1 + α ) log w ( s , a ) ) ] . In practice , due to the small number of demonstrations in dE , there may be a lack of diversity . To alleviate this practical issue , we prefer to use dU instead of dE . In summary , DemoDICE solves the following maximin optimization : max_{w≥0} min_ν L ( w , ν ; r ) , ( 12 ) where r is trained by using the precollected datasets . Note that the solution w∗ of Equation 12 with the ground-truth ratio r is the ratio of two distributions : the stationary distribution dπ∗ of the expert policy π∗ and the stationary distribution dU of the union of expert and imperfect demonstrations . | The paper considers an offline imitation learning (IL) problem with the addition of supplementary imperfect demonstrations. To solve this problem, the paper proposes DemoDICE, which regularizes a distribution-matching objective of IL by a KL divergence between the agent distribution and a mixture of expert and imperfect distributions.
DemoDICE finds an optimal state-action distribution of this regularized objective by using a dual-program technique similar to that of OptiDICE (Lee et al., 2021) for offline RL, with an improvement in terms of stability. Given the optimal state-action distribution, DemoDICE extracts the expert policy by performing weighted behavioral cloning. Empirical evaluations on MuJoCo tasks with D4RL datasets show that DemoDICE can efficiently and effectively solve the offline IL problem. ### Contributions - A new learning problem combining offline IL and IL with imperfect demonstrations. - A new model-free offline IL method based on dual-program optimization. | SP:c393334d6cbca2bf420a2b4ff2f8318961bef188 |
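The objective L(w, ν; r) in Equation 11 of the excerpt above is a plain sample average, so it can be estimated from the offline data alone. A hedged sketch of that estimate (the array shapes, toy numbers, and function name are my own; the actual algorithm parameterizes w and ν with neural networks):

```python
import numpy as np

def demodice_objective(nu_s0, w, A_nu, gamma, alpha):
    """Sample-based estimate of L(w, nu; r) from Equation 11.

    nu_s0: nu evaluated at initial states s ~ p0, shape (n0,)
    w:     w(s, a) at samples (s, a) ~ d^U, shape (n,)
    A_nu:  advantage A_nu(s, a) = r + gamma*(T nu) - nu at the same samples
    """
    initial_term = (1 - gamma) * np.mean(nu_s0)
    data_term = np.mean(w * (A_nu - (1 + alpha) * np.log(w)))
    return initial_term + data_term

# With w = 1 everywhere, log w = 0 and the data term reduces to mean(A_nu):
nu_s0 = np.array([1.0, 3.0])              # mean = 2.0
w = np.ones(4)
A_nu = np.array([0.5, 1.5, -0.5, 0.5])    # mean = 0.5
print(demodice_objective(nu_s0, w, A_nu, gamma=0.9, alpha=0.1))
# (1 - 0.9) * 2.0 + 0.5 = 0.7
```

In the maximin problem (12), this quantity would be maximized over w ≥ 0 and minimized over ν.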
DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations | 1 INTRODUCTION . Reinforcement learning ( RL ) ( Sutton et al. , 1998 ) aims to learn an intelligent behavioral strategy based on reward feedback . Although RL has achieved remarkable success in many challenging domains , its practicality and applicability are still limited in two respects . First , we need to specify the reward function , which may be non-trivial in many real-world problems that require complex decision making . Second , the standard RL setting assumes online interaction with the environment during the intermediate stages of learning , which is infeasible for mission-critical tasks . Imitation learning ( IL ) ( Pomerleau , 1991 ; Ng & Russell , 2000 ) addresses the first limitation of RL : the agent is trained to mimic the expert from demonstrations instead of specifying the reward function . It is well known that adopting supervised learning for training the imitating agent , commonly referred to as behavioral cloning ( BC ) , is vulnerable to distribution drift ( Ross et al. , 2011 ) . Thus , most successful IL algorithms rely on online experiences collected from the environment by executing intermediate policies during training . Recent progress made by adversarial imitation learning ( AIL ) ( Ho & Ermon , 2016 ; Ke et al. , 2019 ; Kostrikov et al. , 2020 ) , achieving state-of-the-art results on challenging imitation tasks , still relies on such an online training paradigm . Unfortunately , in many realistic tasks such as robotic manipulation and autonomous driving , online interactions are either costly or dangerous . Offline RL ( Fujimoto et al. , 2019 ; Kumar et al. , 2019 ; 2020 ; Levine et al. , 2020 ; Wang et al. , 2020 ; Lee et al. , 2021 ; Kostrikov et al. , 2021 ) aims to address these concerns by training the agent from a pre-collected set of experiences without online interactions .
To prevent issues caused by the distributional shift , offline RL algorithms mitigate the phenomenon by constraining the shift or making a conservative evaluation of the policy being learned . In this paper , we are concerned with offline IL problems . Finding an effective algorithm for these problems is tricky . For instance , naively extending offline RL algorithms ( which assume a reward function ) to the offline IL setting does not work . In practice , expert demonstrations are scarce due to the high cost of obtaining them . Thus , they typically cover only a small fraction of the state and action spaces , which in turn makes the distribution drift issue even more pronounced compared with the standard offline RL setting with a reward function . We mitigate this issue by assuming a large number of supplementary imperfect demonstrations , without requiring any level of optimality for these imperfect demonstrations ; they may contain expert or near-expert trajectories ( Wu et al. , 2019 ; Wang et al. , 2021 ) as well as non-expert ones all together . This generality covers situations arising in real-world applications , but at the same time , it poses a significant challenge for the design of a successful offline IL algorithm . In this paper , we propose DemoDICE , a novel model-free algorithm for offline IL from expert and imperfect demonstrations . We formulate an offline IL objective which not only mitigates distribution shift from the demonstration-data distribution but also naturally utilizes imperfect demonstrations . Our new formulation allows us to compute a closed-form solution , which learns a policy in the space of stationary distributions but suffers from an instability issue in practice . We tackle the issue by proposing an alternative objective , which leads to a stable algorithm in practice while keeping the optimal stationary distribution .
Finally , we introduce a method to extract the expert policy from a learned stationary distribution in a simple yet effective way . Our extensive evaluations show that DemoDICE achieves performance competitive to or better than a state-of-the-art off-policy IL algorithm in the offline-IL tasks with expert and imperfect demonstrations . 2 PRELIMINARIES . 2.1 MARKOV DECISION PROCESS ( MDP ) . We assume an environment modeled as a Markov Decision Process ( MDP ) , defined by tuple M = 〈S , A , T , R , p0 , γ〉 , where S is the set of states , A is the set of actions , T : S × A → ∆ ( S ) gives the probability p ( s_{t+1}|s_t , a_t ) of making a transition from state s_t to state s_{t+1} by executing action a_t at timestep t , R : S × A → R is the reward function , p0 ∈ ∆ ( S ) is the distribution of the initial state s0 , and γ ∈ [ 0 , 1 ] is the discount factor . A policy π : S → ∆ ( A ) of MDP M is a mapping from states of M to distributions over actions . For a given policy π , the stationary distribution dπ is defined as follows : dπ ( s , a ) = ( 1 − γ ) ∑_{t=0}^{∞} γ^t p ( s_t = s , a_t = a | s0 ∼ p0 ( · ) , s_t ∼ T ( ·|s_{t−1} , a_{t−1} ) , a_t ∼ π ( ·|s_t ) ) . We assume a precollected dataset DE of ( s , a , s′ ) tuples generated by the expert , and a precollected imperfect dataset DI ( generated by policies with unknown degrees of optimality ) . More precisely , for the ( underlying ) expert policy 's stationary distribution dE ( s , a ) , we assume that ( s , a , s′ ) ∈ DE is sampled as ( s , a ) ∼ dE , s′ ∼ T ( ·|s , a ) . We define DU = DE ∪ DI , the union of the two datasets , and denote the corresponding stationary distribution of the dataset DU as dU . We denote the trajectories generated by the expert policy as expert trajectories , and trajectories generated by non-expert policies as non-expert trajectories . The expert demonstrations consist only of expert trajectories , and the imperfect demonstrations consist of a mixture of expert and non-expert trajectories .
In this paper , we assume that the quality of the imperfect demonstrations is unknown . 2.2 IMITATION LEARNING . Behavior cloning ( BC ) is a classical IL approach , which attempts to find a function that maps s to a via supervised learning . The standard BC finds a policy π by minimizing the negative log-likelihood : min_π JBC ( π ) := − ( 1 / |D| ) ∑_{( s , a ) ∈ D} log π ( a|s ) . ( 1 ) However , it is known to be brittle ( Ross et al. , 2011 ) when the interaction with the environment deviates from the scarce trajectories in DE . In such cases , BC fails to recover expert policies . One of the notable approaches for IL is to formulate the problem as distribution matching ( Ho & Ermon , 2016 ; Ke et al. , 2019 ; Kostrikov et al. , 2020 ) . When instantiated with the KL divergence widely used in previous IL works ( Ke et al. , 2019 ; Kostrikov et al. , 2020 ) , the approach amounts to finding a policy π by optimizing the following objective : max_π −DKL ( dπ‖dE ) = max_π E_{( s , a ) ∼ dπ} [ log ( dE ( s , a ) / dπ ( s , a ) ) ] . ( 2 ) Since we cannot directly access the exact values of dE ( s , a ) and dπ ( s , a ) , we estimate their ratio using samples from dE ( s , a ) and dπ ( s , a ) , given as follows : max_{c : S×A → ( 0,1 )} E_{( s , a ) ∼ dE} [ log c ( s , a ) ] + E_{( s , a ) ∼ dπ} [ log ( 1 − c ( s , a ) ) ] . ( 3 ) Here , the optimal discriminator c∗ recovers log c∗ ( s , a ) − log ( 1 − c∗ ( s , a ) ) = log ( dE ( s , a ) / dπ ( s , a ) ) . Based on this connection between generative adversarial networks ( GANs ) and IL , AIL algorithms focus on recovering the expert policy ( Ho & Ermon , 2016 ; Kostrikov et al. , 2019 ) . However , the agent obtains samples from dπ through interaction with the environment , which is impossible in the offline setting . Therefore , to tackle offline IL problems , we should derive an alternative estimation without on-policy samples . 3 DEMODICE .
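The claim after Equation 3 — that the optimal discriminator's logit recovers the log density ratio — follows from c∗ = dE / (dE + dπ), and is easy to check numerically. A small sketch with made-up tabular densities:

```python
import numpy as np

# Two made-up densities over 4 discrete (s, a) pairs.
d_E  = np.array([0.4, 0.3, 0.2, 0.1])
d_pi = np.array([0.1, 0.2, 0.3, 0.4])

# Optimal discriminator of Eq. 3: c*(s, a) = d_E / (d_E + d_pi).
c_star = d_E / (d_E + d_pi)

# Its logit equals the log density ratio log(d_E / d_pi) exactly.
logit = np.log(c_star) - np.log(1 - c_star)
print(np.max(np.abs(logit - np.log(d_E / d_pi))))  # ~0
```

This is the mechanism AIL methods use to turn a binary classifier into a reward signal; the offline obstacle is only that sampling from dπ requires environment interaction.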
In this section , we present a novel model-free offline IL algorithm named offline imitation learning using additional imperfect Demonstrations via stationary DIstribution Correction Estimation ( DemoDICE ) . Starting from a regularized offline IL objective which accords with offline RL algorithms , we present a formulation that does not require on-policy samples . Such a formulation allows us to construct a nested optimization for offline IL from expert and imperfect demonstrations ( Section 3.1 ) . Then , we derive the closed-form solution to the sub-problem of the aforementioned optimization and obtain a simple convex optimization objective ( Section 3.2 ) . Since this objective is unstable in practice , we transform it into an alternative yet still convex objective ( Section 3.3 ) . Finally , we show how to extract the policy from the learned correction term ( Section 3.4 ) . 3.1 TRANSFORMING CONSTRAINED OPTIMIZATION INTO NESTED OPTIMIZATION . In the context of offline RL , most works use expected return maximization with some regularization to overcome the extrapolation error in offline settings ( Fujimoto et al. , 2019 ; Kumar et al. , 2019 ; Nachum et al. , 2019b ; Kumar et al. , 2020 ; Lee et al. , 2021 ) . In this work , we use KL divergence minimization between dπ and dE with KL-regularization : π∗ := arg max_π −DKL ( dπ‖dE ) − αDKL ( dπ‖dU ) , ( 4 ) where α ≥ 0 is a hyperparameter that controls the balance between minimizing the KL divergence with dE and preventing the deviation of dπ from dU . Many online AIL algorithms estimate the divergence between the expert and the current policy using on-policy samples , which are not available in the offline scenario .
In contrast , to construct a tractable optimization problem in the offline setting , we consider a problem equivalent to Equation 4 in terms of the stationary distribution d : max_d −DKL ( d‖dE ) − αDKL ( d‖dU ) ( 5 ) s.t . ∑_a d ( s , a ) = ( 1 − γ ) p0 ( s ) + γ ∑_{s̄ , ā} T ( s|s̄ , ā ) d ( s̄ , ā ) ∀s , ( 6 ) d ( s , a ) ≥ 0 ∀s , a . ( 7 ) The constraints ( 6-7 ) are called the Bellman flow constraints . The dual problem for the above constrained optimization problem is max_{d≥0} min_ν −DKL ( d‖dE ) − αDKL ( d‖dU ) + ∑_s ν ( s ) ( ( 1 − γ ) p0 ( s ) + γ ( T∗d ) ( s ) − ( B∗d ) ( s ) ) , ( 8 ) where the ν ( s ) are the Lagrange multipliers , ( B∗d ) ( s ) := ∑_a d ( s , a ) is the marginalization operator , and ( T∗d ) ( s ) := ∑_{s̄ , ā} T ( s|s̄ , ā ) d ( s̄ , ā ) is the transposed Bellman operator . We introduce the following derivation for the optimization ( 8 ) to obtain a tractable optimization in the offline setting : −DKL ( d‖dE ) − αDKL ( d‖dU ) + ∑_s ν ( s ) ( ( 1 − γ ) p0 ( s ) + γ ( T∗d ) ( s ) − ( B∗d ) ( s ) ) = ( 1 − γ ) E_{s∼p0} [ ν ( s ) ] + E_{( s , a ) ∼ d} [ γ ( T ν ) ( s , a ) − ν ( s ) − log ( d ( s , a ) / dE ( s , a ) ) − α log ( d ( s , a ) / dU ( s , a ) ) ] ( 9 ) = ( 1 − γ ) E_{p0} [ ν ( s ) ] + E_d [ γ ( T ν ) ( s , a ) − ν ( s ) + log ( dE ( s , a ) / dU ( s , a ) ) − ( 1 + α ) log ( d ( s , a ) / dU ( s , a ) ) ] , ( 10 ) where we define r ( s , a ) := log ( dE ( s , a ) / dU ( s , a ) ) and w ( s , a ) := d ( s , a ) / dU ( s , a ) . The equality in Equation 9 holds from the following properties of the transpose operators : ∑_s ν ( s ) ( B∗d ) ( s ) = ∑_{s , a} d ( s , a ) ( Bν ) ( s , a ) and ∑_s ν ( s ) ( T∗d ) ( s ) = ∑_{s , a} d ( s , a ) ( T ν ) ( s , a ) , where ( Bν ) ( s , a ) = ν ( s ) and ( T ν ) ( s , a ) = ∑_{s′} T ( s′|s , a ) ν ( s′ ) , under the assumption that dE ( s , a ) > 0 whenever d ( s , a ) > 0 ( Nachum et al. , 2019a ) . We introduce the log ratio r ( s , a ) in Equation 10 to avoid using log ( dE ( s , a ) / d ( s , a ) ) , which requires on-policy samples to estimate .
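The adjoint identities used to justify Equation 9 — ∑_s ν(s)(T∗d)(s) = ∑_{s,a} d(s,a)(Tν)(s,a) and ∑_s ν(s)(B∗d)(s) = ∑_{s,a} d(s,a)(Bν)(s,a) — hold by linearity and can be verified on a random MDP. A sketch (everything below is synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 3, 2
T = rng.random((S, A, S)); T /= T.sum(axis=2, keepdims=True)  # T(s'|s,a)
d = rng.random((S, A)); d /= d.sum()                          # any d(s,a)
nu = rng.random(S)

# (T* d)(s') = sum_{s,a} T(s'|s,a) d(s,a);  (T nu)(s,a) = sum_{s'} T(s'|s,a) nu(s')
T_star_d = np.einsum('sap,sa->p', T, d)
T_nu = np.einsum('sap,p->sa', T, nu)
assert np.isclose(nu @ T_star_d, np.sum(d * T_nu))

# (B* d)(s) = sum_a d(s,a);  (B nu)(s,a) = nu(s)
B_star_d = d.sum(axis=1)
assert np.isclose(nu @ B_star_d, np.sum(d * nu[:, None]))
print("adjoint identities hold")
```

These identities are what move ν inside the expectation over d, after which the two KL terms combine into the r(s, a) and w(s, a) quantities of Equation 10.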
Unlike log ( dE ( s , a ) / d ( s , a ) ) , we can estimate r ( s , a ) in the offline setting using dE and dU , as we will discuss in detail in the next section . We change the distribution used in the expectation of Equation 10 from d to dU by following the standard importance sampling trick as follows : ( 1 − γ ) E_{s∼p0} [ ν ( s ) ] + E_{( s , a ) ∼ d} [ r ( s , a ) + γ ( T ν ) ( s , a ) − ( Bν ) ( s , a ) − ( 1 + α ) log w ( s , a ) ] = ( 1 − γ ) E_{s∼p0} [ ν ( s ) ] + E_{( s , a ) ∼ dU} [ w ( s , a ) ( Aν ( s , a ) − ( 1 + α ) log w ( s , a ) ) ] =: L ( w , ν ; r ) , ( 11 ) where Aν ( s , a ) := r ( s , a ) + γ ( T ν ) ( s , a ) − ( Bν ) ( s , a ) is the advantage using ν . As an alternative , one can convert the expectation over d to dE instead of dU in Equation 10 by a similar application of the importance sampling trick : ( 1 − γ ) E_{s∼p0} [ ν ( s ) ] + E_{( s , a ) ∼ dE} [ exp ( −r ( s , a ) ) w ( s , a ) ( Aν ( s , a ) − ( 1 + α ) log w ( s , a ) ) ] . In practice , due to the small number of demonstrations in dE , there may be a lack of diversity . To alleviate this practical issue , we prefer to use dU instead of dE . In summary , DemoDICE solves the following maximin optimization : max_{w≥0} min_ν L ( w , ν ; r ) , ( 12 ) where r is trained by using the precollected datasets . Note that the solution w∗ of Equation 12 with the ground-truth ratio r is the ratio of two distributions : the stationary distribution dπ∗ of the expert policy π∗ and the stationary distribution dU of the union of expert and imperfect demonstrations . | This work presents an approach that learns a policy from expert demonstration data in an offline setting. It tackles an important problem in imitation learning: the expert demonstrations may not cover the state/action space well, and many existing algorithms require further interaction with the environment. To address the distribution shift that arises when expert data is not diverse enough, the method proposes to introduce a large number of supplementary demonstrations of various qualities.
In this way, the proposed algorithm DemoDICE does not require any on-policy samples. The method adapts OptiDICE for its own purpose by using a transformed objective function. The theoretical analysis and empirical experiments demonstrate the improvement of the method over BC and ValueDICE. | SP:c393334d6cbca2bf420a2b4ff2f8318961bef188 |
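Section 3.2 of the paper derives a closed-form solution to the inner sub-problem. For the pointwise term w (Aν − (1+α) log w) appearing in Equation 11 of the excerpt, setting the derivative with respect to w to zero gives w∗ = exp(Aν/(1+α) − 1). This is my own derivation of the first-order condition, shown as a sketch; the paper's exact closed form (e.g. its normalization) may differ:

```python
import numpy as np

def w_star(A, alpha):
    # First-order condition of f(w) = w * (A - (1 + alpha) * log(w)):
    # f'(w) = A - (1 + alpha) * log(w) - (1 + alpha) = 0
    #   =>  w = exp(A / (1 + alpha) - 1)
    return np.exp(A / (1 + alpha) - 1)

A, alpha = 1.3, 0.5
ws = np.linspace(1e-3, 5.0, 200_000)          # brute-force grid over w > 0
f = ws * (A - (1 + alpha) * np.log(ws))
numeric_argmax = ws[np.argmax(f)]
print(numeric_argmax, w_star(A, alpha))        # should nearly coincide
```

Since f''(w) = −(1+α)/w < 0, the term is strictly concave in w, so this stationary point is the unique maximizer.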
DemoDICE: Offline Imitation Learning with Supplementary Imperfect Demonstrations | 1 INTRODUCTION . Reinforcement learning ( RL ) ( Sutton et al. , 1998 ) aims to learn an intelligent behavioral strategy based on reward feedback . Although RL has achieved remarkable success in many challenging domains , its practicality and applicability are still limited in two respects . First , we need to specify the reward function , which may be non-trivial in many real-world problems that require complex decision making . Second , the standard RL setting assumes online interaction with the environment during the intermediate stages of learning , which is infeasible for mission-critical tasks . Imitation learning ( IL ) ( Pomerleau , 1991 ; Ng & Russell , 2000 ) addresses the first limitation of RL : the agent is trained to mimic the expert from demonstrations instead of specifying the reward function . It is well known that adopting supervised learning for training the imitating agent , commonly referred to as behavioral cloning ( BC ) , is vulnerable to distribution drift ( Ross et al. , 2011 ) . Thus , most successful IL algorithms rely on online experiences collected from the environment by executing intermediate policies during training . Recent progress made by adversarial imitation learning ( AIL ) ( Ho & Ermon , 2016 ; Ke et al. , 2019 ; Kostrikov et al. , 2020 ) , achieving state-of-the-art results on challenging imitation tasks , still relies on such an online training paradigm . Unfortunately , in many realistic tasks such as robotic manipulation and autonomous driving , online interactions are either costly or dangerous . Offline RL ( Fujimoto et al. , 2019 ; Kumar et al. , 2019 ; 2020 ; Levine et al. , 2020 ; Wang et al. , 2020 ; Lee et al. , 2021 ; Kostrikov et al. , 2021 ) aims to address these concerns by training the agent from a pre-collected set of experiences without online interactions .
To prevent issues caused by the distributional shift , offline RL algorithms mitigate the phenomenon by constraining the shift or making a conservative evaluation of the policy being learned . In this paper , we are concerned with offline IL problems . Finding an effective algorithm for these problems is tricky . For instance , naively extending offline RL algorithms ( which assume a reward function ) to the offline IL setting does not work . In practice , expert demonstrations are scarce due to the high cost of obtaining them . Thus , they typically cover only a small fraction of the state and action spaces , which in turn makes the distribution drift issue even more pronounced compared with the standard offline RL setting with a reward function . We mitigate this issue by assuming a large number of supplementary imperfect demonstrations , without requiring any level of optimality for these imperfect demonstrations ; they may contain expert or near-expert trajectories ( Wu et al. , 2019 ; Wang et al. , 2021 ) as well as non-expert ones all together . This generality covers situations arising in real-world applications , but at the same time , it poses a significant challenge for the design of a successful offline IL algorithm . In this paper , we propose DemoDICE , a novel model-free algorithm for offline IL from expert and imperfect demonstrations . We formulate an offline IL objective which not only mitigates distribution shift from the demonstration-data distribution but also naturally utilizes imperfect demonstrations . Our new formulation allows us to compute a closed-form solution , which learns a policy in the space of stationary distributions but suffers from an instability issue in practice . We tackle the issue by proposing an alternative objective , which leads to a stable algorithm in practice while keeping the optimal stationary distribution .
Finally , we introduce a method to extract the expert policy from a learned stationary distribution in a simple yet effective way . Our extensive evaluations show that DemoDICE achieves performance competitive to or better than a state-of-the-art off-policy IL algorithm in the offline-IL tasks with expert and imperfect demonstrations . 2 PRELIMINARIES . 2.1 MARKOV DECISION PROCESS ( MDP ) . We assume an environment modeled as a Markov Decision Process ( MDP ) , defined by tuple M = 〈S , A , T , R , p0 , γ〉 , where S is the set of states , A is the set of actions , T : S × A → ∆ ( S ) gives the probability p ( s_{t+1}|s_t , a_t ) of making a transition from state s_t to state s_{t+1} by executing action a_t at timestep t , R : S × A → R is the reward function , p0 ∈ ∆ ( S ) is the distribution of the initial state s0 , and γ ∈ [ 0 , 1 ] is the discount factor . A policy π : S → ∆ ( A ) of MDP M is a mapping from states of M to distributions over actions . For a given policy π , the stationary distribution dπ is defined as follows : dπ ( s , a ) = ( 1 − γ ) ∑_{t=0}^{∞} γ^t p ( s_t = s , a_t = a | s0 ∼ p0 ( · ) , s_t ∼ T ( ·|s_{t−1} , a_{t−1} ) , a_t ∼ π ( ·|s_t ) ) . We assume a precollected dataset DE of ( s , a , s′ ) tuples generated by the expert , and a precollected imperfect dataset DI ( generated by policies with unknown degrees of optimality ) . More precisely , for the ( underlying ) expert policy 's stationary distribution dE ( s , a ) , we assume that ( s , a , s′ ) ∈ DE is sampled as ( s , a ) ∼ dE , s′ ∼ T ( ·|s , a ) . We define DU = DE ∪ DI , the union of the two datasets , and denote the corresponding stationary distribution of the dataset DU as dU . We denote the trajectories generated by the expert policy as expert trajectories , and trajectories generated by non-expert policies as non-expert trajectories . The expert demonstrations consist only of expert trajectories , and the imperfect demonstrations consist of a mixture of expert and non-expert trajectories .
In this paper , we assume that the quality of the imperfect demonstrations is unknown . 2.2 IMITATION LEARNING . Behavior cloning ( BC ) is a classical IL approach , which attempts to find a function that maps s to a via supervised learning . The standard BC finds a policy π by minimizing the negative log-likelihood : min_π JBC ( π ) := − ( 1 / |D| ) ∑_{( s , a ) ∈ D} log π ( a|s ) . ( 1 ) However , it is known to be brittle ( Ross et al. , 2011 ) when the interaction with the environment deviates from the scarce trajectories in DE . In such cases , BC fails to recover expert policies . One of the notable approaches for IL is to formulate the problem as distribution matching ( Ho & Ermon , 2016 ; Ke et al. , 2019 ; Kostrikov et al. , 2020 ) . When instantiated with the KL divergence widely used in previous IL works ( Ke et al. , 2019 ; Kostrikov et al. , 2020 ) , the approach amounts to finding a policy π by optimizing the following objective : max_π −DKL ( dπ‖dE ) = max_π E_{( s , a ) ∼ dπ} [ log ( dE ( s , a ) / dπ ( s , a ) ) ] . ( 2 ) Since we cannot directly access the exact values of dE ( s , a ) and dπ ( s , a ) , we estimate their ratio using samples from dE ( s , a ) and dπ ( s , a ) , given as follows : max_{c : S×A → ( 0,1 )} E_{( s , a ) ∼ dE} [ log c ( s , a ) ] + E_{( s , a ) ∼ dπ} [ log ( 1 − c ( s , a ) ) ] . ( 3 ) Here , the optimal discriminator c∗ recovers log c∗ ( s , a ) − log ( 1 − c∗ ( s , a ) ) = log ( dE ( s , a ) / dπ ( s , a ) ) . Based on this connection between generative adversarial networks ( GANs ) and IL , AIL algorithms focus on recovering the expert policy ( Ho & Ermon , 2016 ; Kostrikov et al. , 2019 ) . However , the agent obtains samples from dπ through interaction with the environment , which is impossible in the offline setting . Therefore , to tackle offline IL problems , we should derive an alternative estimation without on-policy samples . 3 DEMODICE .
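For tabular distributions, the distribution-matching objective in Equation 2 can be computed exactly and equals the negative KL divergence, which makes the sign and the optimum easy to see. A quick sketch with made-up distributions:

```python
import numpy as np

d_pi = np.array([0.25, 0.25, 0.25, 0.25])  # current policy's distribution
d_E  = np.array([0.40, 0.30, 0.20, 0.10])  # expert's distribution

# Objective of Eq. 2: E_{(s,a) ~ d_pi}[ log(d_E / d_pi) ] = -KL(d_pi || d_E)
objective = np.sum(d_pi * np.log(d_E / d_pi))
kl = np.sum(d_pi * np.log(d_pi / d_E))
print(objective, -kl)  # identical up to sign; non-positive, zero iff d_pi = d_E
```

Since KL(dπ ‖ dE) ≥ 0 with equality only at dπ = dE, maximizing this objective drives the policy's stationary distribution onto the expert's.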
In this section , we present a novel model-free offline IL algorithm named offline imitation learning using additional imperfect Demonstrations via stationary DIstribution Correction Estimation ( DemoDICE ) . Starting from a regularized offline IL objective which accords with offline RL algorithms , we present a formulation that does not require on-policy samples . Such a formulation allows us to construct a nested optimization for offline IL from expert and imperfect demonstrations ( Section 3.1 ) . Then , we derive the closed-form solution to the sub-problem of the aforementioned optimization and obtain a simple convex optimization objective ( Section 3.2 ) . Since this objective is unstable in practice , we transform it into an alternative yet still convex objective ( Section 3.3 ) . Finally , we show how to extract the policy from the learned correction term ( Section 3.4 ) . 3.1 TRANSFORMING CONSTRAINED OPTIMIZATION INTO NESTED OPTIMIZATION . In the context of offline RL , most works use expected return maximization with some regularization to overcome the extrapolation error in offline settings ( Fujimoto et al. , 2019 ; Kumar et al. , 2019 ; Nachum et al. , 2019b ; Kumar et al. , 2020 ; Lee et al. , 2021 ) . In this work , we use KL divergence minimization between dπ and dE with KL-regularization : π∗ := arg max_π −DKL ( dπ‖dE ) − αDKL ( dπ‖dU ) , ( 4 ) where α ≥ 0 is a hyperparameter that controls the balance between minimizing the KL divergence with dE and preventing the deviation of dπ from dU . Many online AIL algorithms estimate the divergence between the expert and the current policy using on-policy samples , which are not available in the offline scenario .
In contrast , to construct a tractable optimization problem in the offline setting , we consider a problem equivalent to Equation 4 in terms of the stationary distribution d : max_d −DKL ( d‖dE ) − αDKL ( d‖dU ) ( 5 ) s.t . ∑_a d ( s , a ) = ( 1 − γ ) p0 ( s ) + γ ∑_{s̄ , ā} T ( s|s̄ , ā ) d ( s̄ , ā ) ∀s , ( 6 ) d ( s , a ) ≥ 0 ∀s , a . ( 7 ) The constraints ( 6-7 ) are called the Bellman flow constraints . The dual problem for the above constrained optimization problem is max_{d≥0} min_ν −DKL ( d‖dE ) − αDKL ( d‖dU ) + ∑_s ν ( s ) ( ( 1 − γ ) p0 ( s ) + γ ( T∗d ) ( s ) − ( B∗d ) ( s ) ) , ( 8 ) where the ν ( s ) are the Lagrange multipliers , ( B∗d ) ( s ) := ∑_a d ( s , a ) is the marginalization operator , and ( T∗d ) ( s ) := ∑_{s̄ , ā} T ( s|s̄ , ā ) d ( s̄ , ā ) is the transposed Bellman operator . We introduce the following derivation for the optimization ( 8 ) to obtain a tractable optimization in the offline setting : −DKL ( d‖dE ) − αDKL ( d‖dU ) + ∑_s ν ( s ) ( ( 1 − γ ) p0 ( s ) + γ ( T∗d ) ( s ) − ( B∗d ) ( s ) ) = ( 1 − γ ) E_{s∼p0} [ ν ( s ) ] + E_{( s , a ) ∼ d} [ γ ( T ν ) ( s , a ) − ν ( s ) − log ( d ( s , a ) / dE ( s , a ) ) − α log ( d ( s , a ) / dU ( s , a ) ) ] ( 9 ) = ( 1 − γ ) E_{p0} [ ν ( s ) ] + E_d [ γ ( T ν ) ( s , a ) − ν ( s ) + log ( dE ( s , a ) / dU ( s , a ) ) − ( 1 + α ) log ( d ( s , a ) / dU ( s , a ) ) ] , ( 10 ) where we define r ( s , a ) := log ( dE ( s , a ) / dU ( s , a ) ) and w ( s , a ) := d ( s , a ) / dU ( s , a ) . The equality in Equation 9 holds from the following properties of the transpose operators : ∑_s ν ( s ) ( B∗d ) ( s ) = ∑_{s , a} d ( s , a ) ( Bν ) ( s , a ) and ∑_s ν ( s ) ( T∗d ) ( s ) = ∑_{s , a} d ( s , a ) ( T ν ) ( s , a ) , where ( Bν ) ( s , a ) = ν ( s ) and ( T ν ) ( s , a ) = ∑_{s′} T ( s′|s , a ) ν ( s′ ) , under the assumption that dE ( s , a ) > 0 whenever d ( s , a ) > 0 ( Nachum et al. , 2019a ) . We introduce the log ratio r ( s , a ) in Equation 10 to avoid using log ( dE ( s , a ) / d ( s , a ) ) , which requires on-policy samples to estimate .
Unlike log ( dE ( s , a ) / d ( s , a ) ) , we can estimate r ( s , a ) in the offline setting using dE and dU , as we will discuss in detail in the next section . We change the distribution used in the expectation of Equation 10 from d to dU by following the standard importance sampling trick as follows : ( 1 − γ ) E_{s∼p0} [ ν ( s ) ] + E_{( s , a ) ∼ d} [ r ( s , a ) + γ ( T ν ) ( s , a ) − ( Bν ) ( s , a ) − ( 1 + α ) log w ( s , a ) ] = ( 1 − γ ) E_{s∼p0} [ ν ( s ) ] + E_{( s , a ) ∼ dU} [ w ( s , a ) ( Aν ( s , a ) − ( 1 + α ) log w ( s , a ) ) ] =: L ( w , ν ; r ) , ( 11 ) where Aν ( s , a ) := r ( s , a ) + γ ( T ν ) ( s , a ) − ( Bν ) ( s , a ) is the advantage using ν . As an alternative , one can convert the expectation over d to dE instead of dU in Equation 10 by a similar application of the importance sampling trick : ( 1 − γ ) E_{s∼p0} [ ν ( s ) ] + E_{( s , a ) ∼ dE} [ exp ( −r ( s , a ) ) w ( s , a ) ( Aν ( s , a ) − ( 1 + α ) log w ( s , a ) ) ] . In practice , due to the small number of demonstrations in dE , there may be a lack of diversity . To alleviate this practical issue , we prefer to use dU instead of dE . In summary , DemoDICE solves the following maximin optimization : max_{w≥0} min_ν L ( w , ν ; r ) , ( 12 ) where r is trained by using the precollected datasets . Note that the solution w∗ of Equation 12 with the ground-truth ratio r is the ratio of two distributions : the stationary distribution dπ∗ of the expert policy π∗ and the stationary distribution dU of the union of expert and imperfect demonstrations . | The paper considers the problem of imitation learning, and specifically the setting where a small number of expert demonstrations is paired with some amount of suboptimal data. To tackle this setting, the paper proposes to solve a convex constrained optimization problem, with variables denoting d^pi, linear constraints expressing the MDP transitions, and an objective composed of a linear combination of the KL from the expert and the KL from the suboptimal data.
The paper presents derivations transforming this convex constrained optimization into a practical, unconstrained objective. The final algorithm is composed of three stages: (1) train a discriminator to get a density-ratio estimator; (2) use the density ratios as a reward in a separate log-expected-exp objective (similar to the REPS or ValueDICE objectives) to learn a different density ratio w; (3) extract a policy from w using weighted maximum-likelihood training on the suboptimal data. The algorithm is evaluated on a variety of simulated robotics (MuJoCo) environments, and shows favorable results compared to baselines. | SP:c393334d6cbca2bf420a2b4ff2f8318961bef188 |
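The sample-based objective in Equation 11 can be sketched numerically. Below is a minimal numpy illustration of evaluating L(w, ν; r) in a small tabular MDP. This is our own toy construction, not the paper's code: the function name `demodice_objective`, the tabular shapes, and the uniform average standing in for the expectation under d^U are all illustrative assumptions.

```python
import numpy as np

# Toy sketch of the sample-based DemoDICE objective L(w, nu; r), Equation (11):
#   L = (1 - gamma) * E_{s~p0}[nu(s)]
#       + E_{(s,a)~dU}[ w(s,a) * (A_nu(s,a) - (1 + alpha) * log w(s,a)) ],
# with advantage A_nu(s,a) = r(s,a) + gamma * sum_{s'} T(s'|s,a) nu(s') - nu(s).

def demodice_objective(nu, w, r, T, p0, gamma, alpha):
    """nu: (S,) dual variable; w: (S,A) positive ratios; r: (S,A) log d^E/d^U;
    T: (S,A,S) transition kernel; p0: (S,) initial state distribution."""
    adv = r + gamma * (T @ nu) - nu[:, None]         # A_nu(s,a), shape (S,A)
    inner = w * (adv - (1.0 + alpha) * np.log(w))    # integrand under d^U
    return (1.0 - gamma) * (p0 @ nu) + inner.mean()  # uniform average ~ E_{dU}

S, A = 3, 2
rng = np.random.default_rng(0)
nu = rng.normal(size=S)
w = rng.uniform(0.5, 1.5, size=(S, A))
r = rng.normal(size=(S, A))
T = rng.dirichlet(np.ones(S), size=(S, A))           # each (s,a) row sums to 1
p0 = np.ones(S) / S
val = demodice_objective(nu, w, r, T, p0, gamma=0.99, alpha=0.1)
print(float(val))
```

In DemoDICE itself, ν, w, and r are parameterized by neural networks and trained by stochastic gradients over the offline datasets; this tabular version only illustrates the algebra of Equation 11.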
A Johnson-Lindenstrauss Framework for Randomly Initialized CNNs | 1 INTRODUCTION. Neural networks have become a standard tool in multiple scientific fields, due to their success in classification (and estimation) tasks. Conceptually, this success is achieved because each subsequent layer allows a better representation, until linear separability is achieved at the last (linear) layer. Indeed, in many disciplines involving real-world tasks, such as computer vision and natural language processing, the training process is biased toward these favorable representations. This bias is a product of several factors, with the neural-network initialization playing a pivotal role (Sutskever et al., 2013). Therefore, we concentrate in this work on studying the initialization of neural networks, with the following question guiding this work: How does the geometric representation of a dataset change after the application of each randomly initialized layer of a neural network? To answer this, we study how the following two geometrical quantities change after each layer: ⟨x, y⟩, the scalar inner product between vectors x and y;¹ and ρ := ⟨x, y⟩/(‖x‖‖y‖), the cosine similarity (or simply similarity) between vectors x and y. The similarity ρ ∈ [−1, 1] between x and y equals ρ = cos(θ), where θ is the angle between them. Consider first one layer of a fully connected neural network (FNN) with an identity activation (linear FNN) that is initialized by independent identically distributed (i.i.d.) Gaussian weights with mean zero and variance 1/N, where N is the number of neurons in that layer.
This random linear FNN induces an isometric embedding of the dataset; namely, the similarity ρ_in between any two inputs, x_in and y_in, is preserved together with their norm: ρ_out ≈ ρ̄_out = ρ_in, where ρ_out is the similarity between the resulting random (due to the multiplication by the random weights) outputs, X_out and Y_out (respectively), and ρ̄_out is the mean output similarity defined by
$$\rho_{\mathrm{out}} := \frac{\langle X_{\mathrm{out}}, Y_{\mathrm{out}}\rangle}{\|X_{\mathrm{out}}\|\,\|Y_{\mathrm{out}}\|}, \qquad \bar{\rho}_{\mathrm{out}} := \frac{\mathbb{E}\big[\langle X_{\mathrm{out}}, Y_{\mathrm{out}}\rangle\big]}{\sqrt{\mathbb{E}\big[\|X_{\mathrm{out}}\|^{2}\big]\,\mathbb{E}\big[\|Y_{\mathrm{out}}\|^{2}\big]}}. \tag{1}$$
(¹We mean here a vector in a wider sense: x and y may be matrices and tensors of the same dimensions. In this situation, the standard inner product is equal to that of the vectorizations: ⟨x, y⟩ = ⟨vec(x), vec(y)⟩.)
The proof of this isometric relation between the input and output similarities follows from the celebrated Johnson–Lindenstrauss lemma (Johnson and Lindenstrauss, 1984; Dasgupta and Gupta, 1999). This lemma states that a random linear map of dimension N preserves the distance between any two points up to an ε > 0 contraction/expansion with probability at least 1 − δ for all N > c log(1/δ)/ε², for an absolute constant c. In the context of randomly initialized linear FNNs, this result means that, for a number of neurons N that satisfies N > c log(1/δ)/ε²,
$$P\big(|\rho_{\mathrm{out}} - \rho_{\mathrm{in}}| < \varepsilon\big) \ge 1 - \delta.$$
So, conceptually, the Johnson–Lindenstrauss lemma studies how inner products (or geometry) change, in expectation, after applying a random transformation, and how well an average of these random transformations is concentrated around the expectation. This is the exact setting of randomly initialized neural networks. The random transformations consist of a random projection (multiplying the dataset by a random matrix) which is followed by a non-linearity. Naturally, adding a non-linearity complicates the picture. Let us focus on the case where the activation function is a rectified linear unit (ReLU).
That is, consider a random fully-connected layer with ReLU activation initialized with i.i.d. zero-mean Gaussian weights, and two different inputs. For this case, Cho and Saul (2009), Giryes et al. (2016), and Daniely et al. (2016) proved that²
$$\rho_{\mathrm{out}} \approx \bar{\rho}_{\mathrm{out}} = \frac{\sqrt{1-\rho_{\mathrm{in}}^{2}} + \big(\pi - \cos^{-1}(\rho_{\mathrm{in}})\big)\,\rho_{\mathrm{in}}}{\pi}. \tag{2}$$
Following Daniely et al. (2016), we refer to the function (2) mapping ρ_in to ρ̄_out as the dual activation of the ReLU; this function is represented in figure 1 by the yellow curve. One may easily verify that the dual activation (2) of the ReLU satisfies ρ̄_out > ρ_in for ρ_in ≠ 1, meaning that it is a contraction. Consequently, for deep FNNs, which comprise multiple layers, random initialization results in a collapse of all inputs with the same norm (a sphere) to a single point at the output of the FNN (equivalently, the entire dataset collapses to a single straight line). Intuitively, this collapse is an unfavorable starting point for optimization. To see why, consider the gradient ∇_{w_j^{(i)}} of the weight w_j^{(i)} of some neuron j in a deep layer i in a randomly initialized ReLU FNN. By the chain rule (backpropagation), this gradient is proportional to the output of the previous layer a^{(i−1)} for the corresponding input, i.e., it holds that ∇_{w_j^{(i)}} ∝ a^{(i−1)}. If the collapse is already present at layer i, this output is essentially proportional to a fixed vector a*. But this implies that, in the gradient update, the weights of the deep layer will move roughly along a straight line, which would impede, in turn, the process of achieving linear separability. Indeed, it is considered that for an FNN to train well, its input–output Jacobian needs to exhibit dynamical isometry upon initialization (Saxe et al., 2014). Namely, the singular values of the Jacobian ∂x_out/∂x_in must be concentrated around 1, where x_in and x_out denote the input and output of the FNN, respectively.
If the dataset collapses to a line, x_out is essentially invariant to x_in (up to a change in its norm), suggesting that the singular values of ∂x_out/∂x_in are close to zero. Therefore, randomly initialized FNNs exhibit the opposite behavior from dynamical isometry and hence do not train well. 1.1 OUR CONTRIBUTION. Our main interest lies in the following question: Does the contraction observed in randomly initialized ReLU FNNs carry over to convolutional neural networks (CNNs)? As we will show, qualitatively, the answer is yes. However, quantitatively, the answer is more subtle, as illustrated in figure 1. (²The results in (Cho and Saul, 2009) and (Daniely et al., 2016) were derived assuming unit-norm vectors ‖x_in‖ = ‖y_in‖ = 1. The result here follows by the homogeneity of the ReLU activation function, R(αx) = αR(x) for α ≥ 0, and ergodicity, assuming multiple filters are applied.) In this figure, the similarity between pairs of inputs sampled at random from standard natural image datasets—Fashion MNIST (F-MNIST), CIFAR-10, and ImageNet—is displayed against the corresponding output similarity of a randomly initialized CNN layer. For these datasets, clearly ρ_out ≈ ρ_in, meaning that the ReLU-FNN relation (2)—represented in figure 1 by the yellow curve—breaks down. That said, for inputs consisting of i.i.d. zero-mean Gaussians (and filters comprising i.i.d. zero-mean Gaussian weights as before) with a Pearson correlation coefficient ρ between corresponding entries (and independent otherwise), the relation (2) between ρ̄_out and ρ_in of ReLU FNNs does hold for ReLU CNNs as well, as illustrated in figure 1a. This dataset-dependent behavior, observed in figure 1, suggests that, in contrast to randomly initialized FNNs, which behave according to (2), randomly initialized CNNs exhibit a richer behavior: ρ̄_out does not depend only on ρ_in but on the inputs x_in and y_in themselves.
Therefore, in this work, we characterize the behavior of ρ̄_out after applying one layer of a randomly initialized CNN. We start by considering randomly initialized CNNs with general activation functions. We show in theorem 1 that the expected (over the filters) inner product E[⟨X_out, Y_out⟩] and the mean similarity ρ̄_out depend on x_in and y_in (and not just on ⟨x_in, y_in⟩) by extending the dual-activation notion of Daniely et al. (2016). In theorem 2, we further prove that, by taking multiple independent filters, ⟨X_out, Y_out⟩ and ρ_out of (1) concentrate around E[⟨X_out, Y_out⟩] and ρ̄_out, respectively. We then specialize these results to linear CNNs (with identity activation) and derive a convolution-based variant of the Johnson–Lindenstrauss lemma showing that ⟨X_out, Y_out⟩ ≈ ⟨x_in, y_in⟩ and ρ_out ≈ ρ_in for linear CNNs, both in expectation (ρ̄_out = ρ_in for the latter) and with high probability. For randomly initialized ReLU CNNs, we derive the following tight upper and lower bounds for ρ̄_out in terms of ρ_in in theorem 3:
$$\max\{\rho_{\mathrm{in}}, 0\} \;\le\; \bar{\rho}_{\mathrm{out}} \;\le\; \frac{1+\rho_{\mathrm{in}}}{2}. \tag{3}$$
These bounds imply, in turn, that for ρ_in ≠ 1 each ReLU CNN layer is contracting. In theorem 4 we prove that ρ̄_out for random Gaussian data satisfies the relation (2), in accordance with figure 1a. To explain the (almost) isometric behavior of CNNs on natural images (figure 1), we note that many natural images consist of approximately monochromatic patches that are large relative to the filter size. This observation leads to a simple model of black-and-white (binary) images with “large patches”. To describe this model mathematically, we define a notion of a shared boundary between two images in definition 2, and model large patches by bodies whose area is large compared to the shared boundary. We prove that ρ̄_out ≈ ρ_in for this model, meaning that the lower bound in (3) is in fact tight. 1.2 RELATED WORK.
In this paper, we study how various inputs are embedded by randomly initialized convolutional neural networks. The neural tangent kernel (NTK) (Jacot et al., 2018) is a related line of work. That setting studies the infinite-width limit (among other assumptions), in which one can view neural-network training as regression over a fixed kernel; this kernel is the NTK. Two factors affect the calculation of the NTK: the embedding of the input space at initialization and the gradients at initialization. In this paper we study the first one. Arora et al. (2019) and Bietti and Mairal (2019) give expressions for the NTK and the convolutional NTK (CNTK); theorem 1 may be implicitly deduced from those expressions. Arora et al. (2019) provide concentration bounds for the NTK of fully connected networks with finite width. Bietti and Mairal (2019) derive smoothness properties for NTKs, e.g., upper bounds on the deformation induced by the NTK in terms of the initial Euclidean distance between the inputs. A related approach to the NTK is taken in (Bietti, 2021), where convolutional kernel networks (CKNs) are used. Standard initialization techniques use the Glorot initialization (Glorot and Bengio, 2010) and the He initialization (He et al., 2015). Both were introduced to prevent the gradients from exploding/vanishing. On a similar note, Hanin and Rolnick (2018) discuss how to prevent exploding/vanishing mean activation length—which corresponds to the gradients to some degree—in FNNs with and without skip connections. For a comprehensive review of other techniques see (Narkhede et al., 2021). Schoenholz et al. (2017), Poole et al. (2016), and Yang and Schoenholz (2017) study the initialization of FNNs using mean-field theory, in a setting where the width of the network is infinite.
They demonstrate that for some activation functions there exists an initialization variance such that the network does not suffer from vanishing/exploding gradients. In this case, the network is said to be initialized at the edge of chaos. As mentioned earlier, Saxe et al. (2014) introduced a stronger requirement than that of moderate gradients at initialization, in the form of dynamical isometry. For linear FNNs, they showed that orthogonal initialization achieves dynamical isometry whereas Gaussian i.i.d. initialization does not. For non-linear FNNs, Pennington et al. (2017) show that the hyperbolic tangent (tanh) activation can achieve dynamical isometry while ReLU FNNs with Gaussian i.i.d. initialization or orthogonal initialization cannot. In contrast, Burkholz and Dubatovka (2019) show that ReLU FNNs achieve dynamical isometry by moving away from i.i.d. initialization. And lastly, Tarnowski et al. (2019) show that residual networks achieve dynamical isometry over a broad spectrum of activations (including ReLU) and initializations. Xiao et al. (2018) trained tanh (and not ReLU) CNNs with 10000 layers that achieve dynamical isometry using the delta-orthogonal initialization. Similarly, Zhang et al. (2019) trained residual CNNs with 10000 layers without batch normalization using fixup initialization, which prevents exploding/vanishing gradients. | This paper studies randomly initialized CNNs and analyzes their geometry preservation. For linear CNNs, the authors show that JL-lemma-type results hold. For CNN + ReLU, the output contracts, and the level of contraction depends on the inputs. In numerical experiments, the authors verify that the geometry of natural images is preserved, while for random correlated Gaussian inputs the contraction behavior appears. | SP:a9e9e9dc2db9691f3467630453a9a30711f04f4d |
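The dual activation in Equation 2 is easy to verify empirically. The following short numpy sketch (our own check, not taken from the paper) draws one random fully-connected ReLU layer and compares the empirical output similarity against the formula; the dimensions, seed, and explicit unit-vector construction are arbitrary illustrative choices.

```python
import numpy as np

# Monte-Carlo sanity check of the ReLU dual activation, Equation (2):
#   rho_bar_out = ( sqrt(1 - rho_in^2) + (pi - arccos(rho_in)) * rho_in ) / pi.

def dual_relu(rho):
    return (np.sqrt(1.0 - rho**2) + (np.pi - np.arccos(rho)) * rho) / np.pi

rng = np.random.default_rng(1)
d, N = 64, 200_000                       # input dimension, number of ReLU units

rho_in = 0.3
x = np.zeros(d); x[0] = 1.0              # unit vectors with <x, y> = rho_in
y = np.zeros(d); y[0] = rho_in; y[1] = np.sqrt(1.0 - rho_in**2)

W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, d))
X = np.maximum(W @ x, 0.0)               # one random fully-connected ReLU layer
Y = np.maximum(W @ y, 0.0)
rho_out = (X @ Y) / (np.linalg.norm(X) * np.linalg.norm(Y))

print(rho_out, dual_relu(rho_in))        # empirical vs. Equation (2)
```

For ρ_in = 0.3 the dual activation gives ρ̄_out ≈ 0.48 > ρ_in, illustrating the per-layer pull of similarities toward 1 that drives the collapse discussed above.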
A Johnson-Lindenstrauss Framework for Randomly Initialized CNNs | (paper text identical to the previous row) | The authors consider angle propagation in randomly-initialized convolutional neural network layers -- given two input images that have cosine angle ρ, what is the cosine angle of their feature embeddings when propagated through the random layer? They show that the behavior is very different from the standard feedforward case that has been discussed at length following the work of Daniely et al. (2016): for filters of small spatial support and e.g.
ReLU activations, the output cosine angle can be as small as the ReLU of the input cosine angle, or as large as a linear function of the input cosine angle, depending on the patch structure of the input image. This has implications for understanding signal propagation and issues like dynamical isometry in convolutional networks. The authors give some measure-concentration results, in the spirit of Daniely et al.'s results for feedforward networks, that specify the behavior in the convolutional case, and present examples of different types of images (unstructured/Gaussian; cartoons; CIFAR-10 data) where different angle-propagation behaviors are observed. | SP:a9e9e9dc2db9691f3467630453a9a30711f04f4d |
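The dataset-dependent behavior described above — near-isometry for "large-patch" inputs versus the dual-activation relation (2) for Gaussian inputs — can be illustrated with a toy 1-D random convolutional ReLU layer. This is a hedged sketch under our own assumptions (1-D signals, filter size 3, uniform averaging over positions and filters), not the paper's experiment.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Contrast a random convolutional ReLU layer on (a) "large-patch" binary
# signals and (b) entrywise-correlated i.i.d. Gaussian signals.
rng = rng = np.random.default_rng(2)
L, k, C = 4096, 3, 512                   # length, filter size, number of filters
F = rng.normal(size=(C, k))              # random conv filters

def conv_relu_rho(x, y):
    Xp = sliding_window_view(x, k)       # (L - k + 1, k) sliding patches
    Yp = sliding_window_view(y, k)
    X = np.maximum(Xp @ F.T, 0.0).ravel()   # ReLU conv output, all filters
    Y = np.maximum(Yp @ F.T, 0.0).ravel()
    return (X @ Y) / (np.linalg.norm(X) * np.linalg.norm(Y))

# (a) binary signals: agree (value 1) on the left half, disagree on the right
x_bin = np.ones(L)
y_bin = np.concatenate([np.ones(L // 2), np.zeros(L // 2)])
rho_in_bin = (x_bin @ y_bin) / (np.linalg.norm(x_bin) * np.linalg.norm(y_bin))
rho_bin = conv_relu_rho(x_bin, y_bin)

# (b) i.i.d. Gaussian signals with entrywise Pearson correlation rho
rho = 0.5
g1, g2 = rng.normal(size=L), rng.normal(size=L)
rho_gauss = conv_relu_rho(g1, rho * g1 + np.sqrt(1.0 - rho**2) * g2)

dual = lambda t: (np.sqrt(1.0 - t**2) + (np.pi - np.arccos(t)) * t) / np.pi
print(rho_in_bin, rho_bin)               # (a): output similarity stays near rho_in
print(rho, rho_gauss, dual(rho))         # (b): output similarity follows Eq. (2)
```

In case (a) the lower bound of (3) is essentially attained (the layer is nearly isometric), while in case (b) the output similarity tracks the FNN dual activation, matching the two regimes the paper analyzes.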
A Johnson-Lindenstrauss Framework for Randomly Initialized CNNs | 1 INTRODUCTION . Neural networks have become a standard tool in multiple scientific fields , due to their success in classification ( and estimation ) tasks . Conceptually , this success is achieved since better representation is allowed by each subsequent layer until linear separability is achieved at the last ( linear ) layer . Indeed , in many disciplines involving real-world tasks , such as computer vision and natural language processing , the training process is biased toward these favorable representations . This bias is a product of several factors , with the neural-network initialization playing a pivotal role ( Sutskever et al. , 2013 ) . Therefore , we concentrate in this work on studying the initialization of neural networks , with the following question guiding this work . How does the geometric representation of a dataset change after the application of each randomly initialized layer of a neural network ? To answer this , we study how the following two geometrical quantities change after each layer . 〈x , y〉 The scalar inner product between vectors x and y.1 ρ : = 〈x , y〉‖x‖‖y‖ The cosine similarity ( or simply similarity ) between vectors x and y . The similarity ρ ∈ [ −1 , 1 ] between x and y equals ρ = cos ( θ ) , where θ is the angle between them . Consider first one layer of a fully connected neural network ( FNN ) with an identity activation ( linear FNN ) that is initialized by independent identically distributed ( i.i.d . ) Gaussian weights with mean zero and variance 1/N , where N is the number of neurons in that layer . 
This random linear FNN induces an isometric embedding of the dataset , namely , the similarity ρin between any two inputs , xin and yin , is preserved together with their norm : ρout ≈ ρ̄out = ρin , where ρout is the similarity between the resulting random ( due to the multiplication by the random weights ) outputs , Xout and Yout ( respectively ) , and ρ̄out is the mean output similarity defined by 1We mean here a vector in a wider sense : x and y may be matrices and tensors ( of the same dimensions ) . In this situation , the standard inner product is equal to the vectorization thereof : 〈x , y〉 = 〈vec ( x ) , vec ( y ) 〉 . ρout : = 〈Xout , Yout〉 ‖Xout‖ ‖Yout‖ , and ρ̄out : = E [ 〈Xout , Yout〉 ] √ E [ ‖Xout‖2 ] E [ ‖Yout‖2 ] . ( 1 ) The proof of this isometric relation between the input and output similarities follows from the celebrated Johnson–Lindenstrauss lemma ( Johnson and Lindenstrauss , 1984 ; Dasgupta and Gupta , 1999 ) . This lemma states that a random linear map of dimension N preserves the distance between any two points up to an ǫ > 0 contraction/expansion with probability at least 1 − δ for all N > c log ( 1/δ ) /ǫ2 for an absolute constant c. In the context of randomly initialized linear FNNs , this result means that , for a number of neurons N that satisfies N > c log ( 1/δ ) /ǫ2 , P ( |ρout − ρin| < ǫ ) ≥ 1− δ So , conceptually , the Johnson–Lindenstrauss lemma studies how inner products ( or geometry ) change , in expectation , after applying a random transformation and how well an average of these random transformations is concentrated around the expectation . This is the exact setting of randomly initialized neural networks . The random transformations consist of a random projection ( multiplying the dataset by a random matrix ) which is followed by a non-linearity . Naturally , adding a non-linearity complicates the picture . Let us focus on the case where the activation function is a rectified linear unit ( ReLU ) . 
That is , consider a random fully-connected layer with ReLU activation initialized with i.i.d . zero-mean Gaussian weights and two different inputs . For this case , Cho and Saul ( 2009 ) , Giryes et al . ( 2016 ) , and Daniely et al . ( 2016 ) proved that2 ρout ≈ ρ̄out = √ 1− ρ2in + ( π − cos−1 ( ρin ) ) ρin π . ( 2 ) Following Daniely et al . ( 2016 ) , we refer to the resulting function in ( 2 ) of ρ̄out in ρin as the dual activation of the ReLU ; this function is represented in figure 1 by the yellow curve . One may easily verify that the dual activation ( 2 ) of the ReLU satisfies ρout > ρin for ρin 6= 1 , meaning that it is a contraction . Consequently , for deep FNNs , which comprise multiple layers , random initialization results in a collapse of all inputs with the same norm ( a sphere ) to a single point at the output of the FNN ( equivalently , the entire dataset collapses to a single straight line ) . Intuitively , this collapse is an unfavorable starting point for optimization . To see why , consider the gradient ∇ w ( i ) j of the weight w ( i ) j of some neuron j in a deep layer i in a randomly initialized ReLU FNN . By the chain rule ( backpropagation ) , this gradient is proportional to the output of the previous layer a ( i−1 ) for the corresponding input , i.e. , it holds that ∇ w ( i ) j ∝ a ( i−1 ) . If the collapse is present already at layer i , this output is essentially proportional to a fixed vector a∗ . But this implies that , in the gradient update , the weights of the deep layer will move roughly along a straight line which would impede , in turn , the process of achieving linear separability . Indeed , it is considered that for a FNN to train well , its input–output Jacobian needs to exhibit dynamical isometry upon initialization ( Saxe et al. , 2014 ) . Namely , the singular values of the Jacobian ∂xout/∂xin must be concentrated around 1 , where xout and xin denote the input and output of the FNN , respectively . 
If the dataset collapses to a line, xout is essentially invariant to xin (up to a change in its norm), suggesting that the singular values of ∂xout/∂xin are close to zero. Therefore, randomly initialized FNNs exhibit the opposite behavior from dynamical isometry and hence do not train well. 1.1 OUR CONTRIBUTION. Our main interest lies in the following question. Does the contraction observed in randomly initialized ReLU FNNs carry over to convolutional neural networks (CNNs)? As we will show, qualitatively, the answer is yes. However, quantitatively, the answer is more subtle, as is illustrated in figure 1. In this figure, the similarity between pairs of inputs sampled at random from standard natural image datasets—Fashion MNIST (F-MNIST), CIFAR-10, and ImageNet—is displayed against the corresponding output similarity of a randomly initialized CNN layer. (Footnote 2: The results in (Cho and Saul, 2009) and (Daniely et al., 2016) were derived assuming unit-norm vectors ‖xin‖ = ‖yin‖ = 1. The result here follows by the homogeneity of the ReLU activation function, R(αx) = αR(x) for α ≥ 0, and ergodicity, assuming multiple filters are applied.) For these datasets, clearly ρout ≈ ρin, meaning that the relation (2) of ReLU FNNs—represented in figure 1 by the yellow curve—breaks down. That said, for inputs consisting of i.i.d. zero-mean Gaussians (and filters comprising i.i.d. zero-mean Gaussian weights, as before) with a Pearson correlation coefficient ρ between corresponding entries (and independent otherwise), the relation in (2) between ρ̄out and ρin of ReLU FNNs does hold for ReLU CNNs as well, as illustrated in figure 1a. This dataset-dependent behavior, observed in figure 1, suggests that, in contrast to randomly-initialized FNNs, which behave according to (2), randomly-initialized CNNs exhibit a richer behavior: ρ̄out depends not only on ρin but on the inputs xin and yin themselves.
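For reference, the FNN relation (2) against which these measurements are compared can be reproduced and iterated numerically. The sketch below (ours, not the authors' code; the correlation 0.3, depth 50, and sample size are arbitrary) first verifies (2) by Monte Carlo, using the fact that for unit-norm inputs and i.i.d. Gaussian weights the pre-activation pair is jointly Gaussian with correlation ρin, and then iterates (2) to reproduce the depth collapse discussed above:

```python
import numpy as np

def dual_relu(rho):
    """ReLU dual activation, equation (2)."""
    return (np.sqrt(1.0 - rho**2) + (np.pi - np.arccos(rho)) * rho) / np.pi

# Part 1: Monte Carlo check of (2). For unit-norm inputs and i.i.d.
# Gaussian weights, the pre-activations (w.x, w.y) are jointly Gaussian
# with correlation rho_in, so we sample such pairs directly.
rng = np.random.default_rng(1)
rho_in, n = 0.3, 500_000
u = rng.normal(size=n)
v = rho_in * u + np.sqrt(1.0 - rho_in**2) * rng.normal(size=n)
ru, rv = np.maximum(u, 0.0), np.maximum(v, 0.0)
rho_out_hat = np.mean(ru * rv) / np.sqrt(np.mean(ru**2) * np.mean(rv**2))
print(rho_out_hat, dual_relu(rho_in))  # the two agree closely

# Part 2: iterating (2) layer by layer drives any similarity toward 1,
# reproducing the collapse described for deep randomly initialized FNNs.
rho = -0.9
for _ in range(50):
    rho = dual_relu(rho)
print(rho)  # close to 1 after 50 layers
```

Even strongly dissimilar inputs (ρin = −0.9 here) end up nearly identical after a few dozen layers, which is the collapse that motivates this work.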
Therefore, in this work, we characterize the behavior of ρ̄out after applying one layer of a randomly initialized CNN. We start by considering randomly initialized CNNs with general activation functions. We show in theorem 1 that the expected (over the filters) inner product E[⟨Xout, Yout⟩] and the mean similarity ρ̄out depend on xin and yin (and not just on ⟨xin, yin⟩) by extending the dual-activation notion of Daniely et al. (2016). In theorem 2, we further prove that, by taking multiple independent filters, ⟨Xout, Yout⟩ and ρout of (1) concentrate around E[⟨Xout, Yout⟩] and ρ̄out, respectively. We then specialize these results to linear CNNs (with identity activation) and derive a convolution-based variant of the Johnson–Lindenstrauss lemma showing that ⟨Xout, Yout⟩ ≈ ⟨xin, yin⟩ and ρout ≈ ρin for linear CNNs, both in expectation (ρ̄out = ρin for the latter) and with high probability. For randomly initialized ReLU CNNs, we derive the following tight upper and lower bounds for ρ̄out in terms of ρin in theorem 3:
$$\max\{\rho_{\mathrm{in}}, 0\} \le \bar\rho_{\mathrm{out}} \le \frac{1 + \rho_{\mathrm{in}}}{2}. \qquad (3)$$
These bounds imply, in turn, that for ρin ≠ 1 each ReLU CNN layer is contracting. In theorem 4 we prove that ρ̄out for random Gaussian data satisfies the relation (2), in accordance with figure 1a. To explain the (almost) isometric behavior of CNNs for natural images (figure 1), we note that many natural images consist of large (relative to the filter size), approximately monochromatic patches. This observation leads to a simple model of black-and-white (binary) images with "large patches". To describe this model mathematically, we define a notion of a shared boundary between two images in definition 2, and model large patches by bodies whose area is large compared to the shared boundary. We prove that ρ̄out ≈ ρin for this model, meaning that the lower bound in (3) is in fact tight. 1.2 RELATED WORK.
In this paper, we study how various inputs are embedded by randomly initialized convolutional neural networks. The neural tangent kernel (NTK) (Jacot et al., 2018) is a related line of work. That setting studies the infinite-width limit (among other assumptions), in which neural network training can be viewed as regression over a fixed kernel; this kernel is the NTK. Two factors affect the calculation of the NTK: the embedding of the input space at initialization, and the gradients at initialization. In this paper we study the first one. Arora et al. (2019) and Bietti and Mairal (2019) give expressions for the NTK and the convolutional NTK (CNTK). Theorem 1 may be implicitly deduced from those expressions. Arora et al. (2019) provide concentration bounds for the NTK of fully connected networks with finite width. Bietti and Mairal (2019) derive smoothness properties for NTKs, e.g., upper bounds on the deformation induced by the NTK in terms of the initial Euclidean distance between the inputs. A related approach to the NTK is taken in (Bietti, 2021), where convolutional kernel networks (CKNs) are used. Standard initialization techniques use Glorot initialization (Glorot and Bengio, 2010) and He initialization (He et al., 2015). Both were introduced to prevent the gradients from exploding/vanishing. On a similar note, Hanin and Rolnick (2018) discuss how to prevent exploding/vanishing mean activation length—which corresponds to the gradients to some degree—in FNNs with and without skip connections. For a comprehensive review of other techniques see (Narkhede et al., 2021). Schoenholz et al. (2017), Poole et al. (2016), and Yang and Schoenholz (2017) study the initialization of FNNs using mean-field theory, in a setting where the width of the network is infinite.
They demonstrate that for some activation functions there exists an initialization variance such that the network does not suffer from vanishing/exploding gradients. In this case, the network is said to be initialized at the edge of chaos. As mentioned earlier, Saxe et al. (2014) introduced a stronger requirement than that of moderate gradients at initialization, in the form of dynamical isometry. For linear FNNs, they showed that orthogonal initialization achieves dynamical isometry whereas Gaussian i.i.d. initialization does not. For non-linear FNNs, Pennington et al. (2017) show that the hyperbolic tangent (tanh) activation can achieve dynamical isometry while ReLU FNNs with Gaussian i.i.d. initialization or orthogonal initialization cannot. In contrast, Burkholz and Dubatovka (2019) show that ReLU FNNs can achieve dynamical isometry by moving away from i.i.d. initialization. Lastly, Tarnowski et al. (2019) show that residual networks achieve dynamical isometry over a broad spectrum of activations (including ReLU) and initializations. Xiao et al. (2018) trained tanh (not ReLU) CNNs with 10000 layers that achieve dynamical isometry using the delta-orthogonal initialization. Similarly, Zhang et al. (2019) trained residual CNNs with 10000 layers without batch normalization using Fixup initialization, which prevents exploding/vanishing gradients. | The paper shows (as clearly described in the introduction) two things: random linear filters have the distance-preservation property, while random convolution + ReLU activation is contracting (which is also expected, since ReLU throws away half of the entries). However, for images the authors try to explain why CNN+ReLU is isometric by considering images as objects with large monochromatic patches. The main motivation for such research in practice is initialization. A deep fully-connected network with ReLU may lead to collapse, whereas a CNN does not.
The paper argues that the randomized theory presented here is a justification of this fact for images. | SP:a9e9e9dc2db9691f3467630453a9a30711f04f4d |
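As a small numerical footnote (ours, not part of the paper or its review): the Gaussian-data relation (2) from this paper indeed respects the ReLU-CNN bounds (3) stated in its contribution section, over the whole range ρin ∈ [−1, 1], which is consistent with the contraction claim above:

```python
import numpy as np

# Grid check (ours) that the Gaussian-data relation (2) stays inside the
# ReLU-CNN bounds (3): max(rho, 0) <= rho_bar_out <= (1 + rho) / 2.
def dual_relu(rho):
    return (np.sqrt(1.0 - rho**2) + (np.pi - np.arccos(rho)) * rho) / np.pi

rho = np.linspace(-1.0, 1.0, 2001)
out = dual_relu(rho)
lower_ok = bool(np.all(out >= np.maximum(rho, 0.0) - 1e-12))
upper_ok = bool(np.all(out <= (1.0 + rho) / 2.0 + 1e-12))
print(lower_ok, upper_ok)  # -> True True
```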
GroupBERT: Enhanced Transformer Architecture with Efficient Grouped Structures | 1 INTRODUCTION. Deep neural networks have emerged as the leading solution for end-to-end language processing (Hochreiter & Schmidhuber, 1997; Sutskever et al., 2014; Chung et al., 2014). Recently, the Transformer model based on the self-attention mechanism (Vaswani et al., 2017) has become the most promising architecture for language applications, especially when used as a pre-trained foundation model (Devlin et al., 2018; Radford et al., 2019; Brown et al., 2020; Bommasani et al., 2021). Attention-based models are also increasingly showing promising results for established applications in domains other than natural language processing (Dosovitskiy et al., 2020). Complementary to the Transformer's improved ability to model long-range dependencies in sequences are its superior potential to scale to larger sizes (Kaplan et al., 2020) and its suitability for execution on existing accelerators. This makes these models favoured over traditional recurrent language models. Given the increased computational demand of these models, there is a growing and pressing interest in developing more efficient architectures (Strubell et al., 2019). Some previous proposals were able to reduce the computational burden of the Transformer with improved task performance, but often with correspondingly slower execution, as will be discussed further in Section 4. We demonstrate a set of modifications to the structure of the Transformer layer that improve FLOP utilization by the encoder stack. The proposed GroupBERT model relies on grouped matrix multiplications and convolutions, and delivers a more efficient version of BERT, superior in both task performance and computational cost. GroupBERT utilizes grouped operations in a novel way that makes the model more FLOP efficient.
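To make the FLOP/memory trade-off of grouped operations concrete, here is a back-of-the-envelope model (our own simplification, not from the paper: fp32, every tensor moved exactly once, a multiply-add counted as two FLOPs) of arithmetic intensity, i.e. FLOPs per byte of memory traffic, for a dense versus a grouped (block-diagonal) matrix multiplication at BERT-Base-like FFN shapes:

```python
# Back-of-the-envelope model (our simplification: fp32, every tensor
# moved exactly once, a multiply-add counted as two FLOPs) of arithmetic
# intensity (FLOPs per byte of memory traffic) for a dense versus a
# grouped (block-diagonal) matrix multiplication.
def arithmetic_intensity(a, b, c, groups=1, bytes_per_elem=4):
    flops = 2 * a * b * c // groups                 # nonzero multiply-adds
    traffic = bytes_per_elem * (a * b + b * c // groups + a * c)
    return flops / traffic

seq, hidden, ffn = 512, 768, 3072                   # BERT-Base-like FFN shapes
dense = arithmetic_intensity(seq, hidden, ffn)
grouped = arithmetic_intensity(seq, hidden, ffn, groups=4)
print(dense, grouped)  # grouping lowers FLOPs per byte moved
```

The grouped variant does fewer FLOPs per byte of traffic, which is the arithmetic-intensity property discussed next.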
However, grouped operations are characterised by a reduced computational load for a given amount of memory access (Masters et al., 2021), i.e., a lower arithmetic intensity. This property would make them undesirable for traditional accelerators, which rely on large dense computations and reduced memory access. For this investigation we use the IPU (Jia et al., 2019) hardware, since its architecture uses on-chip SRAM for model execution, making it suitable for exploring the potential of these efficient building blocks. We achieve a further task-performance boost by increasing the depth of the model, as GroupBERT extends each Transformer layer to contain four modules: one multi-head attention (MHA), one grouped convolution module, and two grouped feed-forward modules (GFFN). The MHA and grouped convolution modules process token information along the sequence dimension, and each is followed by the general-computation GFFN module. While there are twice as many modules in the proposed GroupBERT layer, the overall increase in computation is modest because we utilize sparse grouped operations, for a total FLOP increase of about 60%. Not only does GroupBERT deliver better performance per FLOP, it also executes faster as measured by total time-to-train. By employing both attention and convolution, the model has components dedicated to both short- and long-range interactions, making more efficient use of the more expensive attention mechanism. We also utilize the parameters of GroupBERT more efficiently during training, by discarding dropout for pre-training on a large corpus of text and by improving stability so that higher learning rates can be used. With all these innovations, GroupBERT Base is only slightly larger than BERT Base, yet it achieves better validation MLM loss than BERT Large using less than half of its FLOPs. 2 ARCHITECTURE. In this work, we propose an efficient modification of the Transformer encoder layer called GroupBERT.
The original Transformer layer consists of two modules: multi-head attention (MHA) and a feed-forward network (FFN). Each of these modules also includes dropout, a shortcut connection, and layer normalization (Srivastava et al., 2014; He et al., 2016; Ba et al., 2016). GroupBERT includes four modules in every layer, as illustrated in Figure 1. We add a convolution module in sequence with the MHA to efficiently model local interactions between tokens and to allow the attention mechanism to focus on long-range interactions. We then complement every sequence-processing block with a dedicated fully-connected module. For better efficiency, we introduce grouped projections to the FLOP-intensive FFN module, making the layer structure more FLOP efficient. 2.1 GROUPED FEED-FORWARD MODULES. The FFN module plays a crucial part in the unparalleled task performance of Transformers (Dong et al., 2021; Lee-Thorp et al., 2021). Although it is an essential complement to the sequence-processing modules, it introduces a significant computational burden, as two thirds of the FLOPs are concentrated in the FFN module. To make this integral part of the Transformer more lightweight, we utilize structured sparsity in the form of sparsely grouped matrix multiplications. Consider a dense matrix multiplication of matrices H ∈ R^{a×b} and W ∈ R^{b×c}:
$$(HW)_{i,j} := \sum_{n=1}^{b} h_{i,n} \cdot w_{n,j} \qquad (1)$$
A sparsely grouped version of W corresponds to a block-diagonal matrix W^{(G)} with G groups: a matrix of the same dimensions as W with a sparsity ratio of 1/G. An equivalent alternative formulation of multiplication by a block-diagonal matrix is a grouped 1-dimensional 1×1 convolution (Iandola et al., 2020).
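A minimal numpy sketch (illustrative shapes, not the paper's implementation) of the block-diagonal form just described: multiplying by W^(G) with G diagonal blocks equals running G independent smaller matmuls on slices of the input, with 1/G of the dense parameter count:

```python
import numpy as np

# Sketch (illustrative shapes) of the block-diagonal grouped projection:
# multiplying by W_G with G diagonal blocks equals running G independent
# smaller matmuls on slices of the input, with 1/G of the dense parameters.
rng = np.random.default_rng(0)
a, b, c, G = 8, 12, 16, 4
blocks = [rng.normal(size=(b // G, c // G)) for _ in range(G)]

W_G = np.zeros((b, c))  # the zeros are never stored in an efficient kernel
for g, blk in enumerate(blocks):
    W_G[g * (b // G):(g + 1) * (b // G), g * (c // G):(g + 1) * (c // G)] = blk

H = rng.normal(size=(a, b))
dense_result = H @ W_G
grouped_result = np.concatenate(
    [H[:, g * (b // G):(g + 1) * (b // G)] @ blocks[g] for g in range(G)],
    axis=1)

print(np.allclose(dense_result, grouped_result))   # -> True
print(sum(blk.size for blk in blocks) / W_G.size)  # -> 0.25 = 1/G
```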
This reduces the number of stored parameters, and can be implemented efficiently without zero-multiplications as:
$$(HW^{(G)})_{i,j} := \sum_{n=1}^{b} h_{i,n} \cdot w^{(G)}_{n,j} = \sum_{n=1}^{b/G} h_{i,\; n+\frac{b}{G}\left\lfloor\frac{j-1}{c/G}\right\rfloor} \cdot w_{n+\frac{b}{G}\left\lfloor\frac{j-1}{c/G}\right\rfloor,\; j} \qquad (2)$$
We propose a novel scheme for utilizing grouped transformations in a Transformer FFN module. Our first finding is that parameters in the expanding FFN matrix contribute more to task performance, and sparsity is particularly damaging for these fan-out matrices. The second matrix is less sensitive to parameter reduction, due to the sparse input and the reduction of the projection dimension. Therefore, introducing sparsity in the second matrix results in a Pareto-efficient balance between compute and task performance. Our second finding is that the locality constraint of grouped projections on the hidden dimension is detrimental to the model. However, we remove this constraint by using an output linear projection. This is similar to the output projection matrix used in the MHA block, where each attention head acts only on a slice of the hidden dimension. We find the optimal value for the number of groups to be G = 4, bringing the parameter count of the GFFN to 75% of its dense counterpart, while also delivering more Pareto-efficient task performance (Figure 6). Alternative schemes for applying grouped transformations in a Transformer (Iandola et al., 2020; Mehta et al., 2021; Jiang et al., 2020) (Figure 2) fail to outperform the baseline in the setting of BERT pre-training (see Section 3.3). 2.2 CONVOLUTION BLOCK. Sequential locality plays an important role in contextualizing tokens in language models. At the same time, long-range interactions have proven to be vital for state-of-the-art performance.
Transformers inherently support long-range content-based interactions via self-attention and usually incorporate a form of positional encoding, allowing attention to also capture position-based interactions (Dai et al., 2019). Although this gives self-attention strong representational power, a convolution is a more efficient implementation of strictly local, position-based fusion. For this reason we adopt a dedicated convolutional module to improve overall efficiency. The design of our convolution module is similar to Gulati et al. (2020), in which convolutions were introduced into a speech-recognition Transformer. We apply a gate consisting of a pointwise convolution followed by a Gated Linear Unit (GLU), which has been beneficial in language applications (Dauphin et al., 2017; Wu et al., 2019a; 2020). Unlike Gulati et al. (2020), we use grouped convolutions in place of depthwise convolutions to add representational capacity. We find that the best trade-off between task performance and computational cost is achieved by using a grouped convolution with group size 16 and kernel size 7, computed over the sequence dimension. The module also includes an additional layer normalization and a Swish activation (Ramachandran et al., 2017). With this module included, fewer attention heads show a strong locality preference, since such interactions are readily captured by convolutions. This effect is visible in the attention maps of Figure 3, which show weaker locality in the model that includes convolutions. To measure this effect quantitatively, we calculate the entropy across target positions for each head and source position. We then average, and normalize by the maximum possible value (see Appendix C). For this measure, zero means that every head attends to a single position exclusively, while one means that every head is position-agnostic, although there could still be a joint position-and-content term.
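The locality measure just described can be sketched as follows (our reading of the text; the exact implementation is given in the paper's Appendix C). Uniform attention attains the maximum ratio of 1, while one-hot attention attains approximately 0:

```python
import numpy as np

# Sketch of the locality measure described above (our reading; the exact
# implementation is in the paper's Appendix C): average the entropy of
# each head's attention row over target positions, normalized by the
# maximum possible entropy log(S).
def entropy_ratio(attn):
    """attn: (heads, source, target) array whose rows sum to 1."""
    tgt = attn.shape[-1]
    p = np.clip(attn, 1e-12, 1.0)          # avoid log(0)
    row_entropy = -(p * np.log(p)).sum(axis=-1)
    return row_entropy.mean() / np.log(tgt)

S = 16
uniform = np.full((2, S, S), 1.0 / S)      # position-agnostic heads -> ratio ~1
onehot = np.tile(np.eye(S), (2, 1, 1))     # single-position heads   -> ratio ~0
print(entropy_ratio(uniform), entropy_ratio(onehot))
```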
BERT Base has an average entropy ratio of 0.75, while BERT Base + Conv has 0.92, indicating a shift of positional fusion work from attention to convolution. 2.3 EFFICIENT PARAMETER UTILIZATION. In line with earlier research on the Transformer architecture (Wang et al., 2019; Liu et al., 2020; Xiong et al., 2020), we move layer normalization (Ba et al., 2016) from its position after the module's residual ("postnorm") to the first position within each residual block ("prenorm"). While this modification does not directly improve task performance, it stabilizes training and allows the use of a larger learning rate that would otherwise cause the postnorm model to diverge. We increase the learning rate by a factor of 4× compared to the postnorm baseline. Similarly to Lan et al. (2020), we find the use of dropout to be detrimental in the pre-training stage. Due to the substantial size of the dataset, this kind of regularization is not required. While removing dropout yields improvements to the pre-training loss, this does not apply to downstream tasks that rely on smaller datasets. Consequently, we include dropout only when fine-tuning on supervised tasks, which have smaller datasets than the pre-training corpus. 3 RESULTS. To evaluate the architecture modifications, we chose BERT (Devlin et al., 2018) pre-training and fine-tuning. The large dataset and challenging training objective mean that task performance improves consistently with model size (Lan et al., 2020) and the risk of over-fitting is reduced. This makes it possible to clearly distinguish architecture modifications that benefit efficiency. Our evaluation of GroupBERT for language representation learning shows that the architecture is: 1. Training FLOP-efficient across a range of model sizes (Sections 3.3, 3.4). 2. Training time-efficient across a range of compute budgets (Sections 3.3, 3.4). 3.
Improved by each constituent part (Section 3.5). 3.1 EXPERIMENTS. Each experiment consists of two pre-training phases and a fine-tuning phase comprising multiple training runs started from the pre-trained model. All phases use the AdamW optimiser (Loshchilov & Hutter, 2019), with β1 = 0.9, β2 = 0.999, ε = 10⁻⁶. The learning rate follows a linear warm-up/decay schedule, whereby the warm-up phase lasts for min(10⁴, 0.1 · total steps) steps, and the peak learning rate depends on the training phase and model size. The model is defined over a vocabulary of 30522 WordPiece tokens (Wu et al., 2016). Weights are initialized using a truncated normal distribution with standard deviation 0.02. For all experiments we use 2 Graphcore M2000 IPU systems. Pre-training phase one optimises the Masked Language Model (MLM) and Next-Sentence Prediction (NSP) losses for corrupted sentence pairs. Masked and padded sequences of length 128 are grouped into batches of approximately 512 sequences, with slight variations depending on the model size (see Appendix A). The model is trained for 10 epochs of Wikipedia + BookCorpus (Zhu et al., 2015), corresponding to approximately 8 · 10⁵ optimisation steps. For all experiments with GroupBERT, baseline BERT, and all other models tested, the learning rate is set to the value that produces the best validation loss. Pre-training phase two uses sequence length 384, 5 epochs, and approximately 2 · 10⁵ optimisation steps. SQuAD 1.1 fine-tuning (Rajpurkar et al., 2016) adds a token span prediction layer, and the whole model is fine-tuned to perform extractive question answering. Training uses a target batch size of 32, and we train for 2–3 epochs with various learning rates (Appendix B), reporting results for the best hyperparameter setting. We report F1 and exact-match scores, which show higher variance than MLM loss values.
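The warm-up schedule used in all phases can be sketched as follows (our reconstruction from the description; we assume the decay is linear to zero, which the text does not state explicitly):

```python
# Sketch of the warm-up schedule described above (our reconstruction;
# the decay is assumed linear to zero, which the text does not state).
def learning_rate(step, total_steps, peak_lr):
    warmup = min(10**4, int(0.1 * total_steps))
    if step < warmup:
        return peak_lr * step / warmup                              # linear warm-up
    return peak_lr * (total_steps - step) / (total_steps - warmup)  # linear decay

total, peak = 8 * 10**5, 1e-4                   # approx. phase-one settings
print(learning_rate(5_000, total, peak))        # mid-warm-up: half of peak
print(learning_rate(10**4, total, peak))        # end of warm-up: peak
print(learning_rate(total, total, peak))        # end of training: 0.0
```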
Because of this larger variance, we fine-tune each pre-training checkpoint five times, using different seeds for every hyperparameter setting. Fine-tuning has been shown to be quite a brittle process in recent studies (Dodge et al., 2020; Zhang et al., 2021; Mosbach et al., 2021). In particular, many instabilities are caused by fine-tuning without bias correction, an implementation choice adopted from the original experimental setup of BERT. This omission in the optimizer was observed to cause a collapse of the training process. For this reason, we included a bias-correction term in the AdamW implementation for fine-tuning. | The authors propose a new Transformer architecture. Compared to the traditional Transformer, the proposed architecture has two new features: 1) a grouped FFN and 2) a convolution module. The grouped FFN is an efficient replacement for the original FFN. The convolution module is responsible for capturing local dependencies. Experiments demonstrate that the proposed variant achieves up to a 2× efficiency gain in terms of both FLOPs and time-to-train. | SP:dba8923c9ede403fef8ced3bd8409b54ff3c229a |