Transformer protein language models are unsupervised structure learners
1 INTRODUCTION. Unsupervised modeling of protein contacts has an important role in computational protein design (Russ et al., 2020; Tian et al., 2018; Blazejewski et al., 2019) and is a central element of all current state-of-the-art structure prediction methods (Wang et al., 2017; Senior et al., 2020; Yang et al., 2019). The standard bioinformatics pipeline for unsupervised contact prediction includes multiple components with specialized tools and databases that have been developed and optimized over decades. In this work we propose replacing the current multi-stage pipeline with a single forward pass of a pre-trained end-to-end protein language model. In the last year, protein language modeling with an unsupervised training objective has been investigated by multiple groups (Rives et al., 2019; Alley et al., 2019; Heinzinger et al., 2019; Rao et al., 2019; Madani et al., 2020). The longstanding practice in bioinformatics has been to fit linear models on focused sets of evolutionarily related and aligned sequences; by contrast, protein language modeling trains nonlinear deep neural networks on large databases of evolutionarily diverse and unaligned sequences. High-capacity protein language models have been shown to learn underlying intrinsic properties of proteins such as structure and function from sequence data (Rives et al., 2019). A line of work in this emerging field proposes the Transformer for protein language modeling (Rives et al., 2019; Rao et al., 2019). Originally developed in the NLP community to represent long-range context, the Transformer's main innovation is its use of self-attention (Vaswani et al., 2017). Self-attention has particular relevance for the modeling of protein sequences. Unlike convolutional or recurrent models, the Transformer constructs a pairwise interaction map between all positions in the sequence.
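As a concrete illustration of this pairwise map, the sketch below computes a single self-attention head over random embeddings in plain NumPy; the dimensions and inputs are invented for the example and do not reflect the ESM architecture:

```python
import numpy as np

def self_attention_map(x, w_q, w_k):
    """Single-head self-attention weights for a sequence.

    x: (L, d) token embeddings; w_q, w_k: (d, d_k) projections.
    Returns an (L, L) matrix: one attention weight per pair of
    positions, i.e. a pairwise interaction map over the sequence.
    """
    q = x @ w_q                                   # queries, (L, d_k)
    k = x @ w_k                                   # keys,    (L, d_k)
    scores = q @ k.T / np.sqrt(k.shape[-1])       # scaled dot products
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)      # softmax over keys

rng = np.random.default_rng(0)
L, d, d_k = 7, 16, 8
attn = self_attention_map(rng.normal(size=(L, d)),
                          rng.normal(size=(d, d_k)),
                          rng.normal(size=(d, d_k)))
print(attn.shape)  # (7, 7)
```

The (L, L) shape is the point: every position attends to every other, which is why these maps can line up with residue-residue contact maps.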
In principle this mechanism has an ideal form to model protein contacts. In theory, end-to-end learning with a language model has advantages over the bioinformatics pipeline: (i) it replaces the expensive query, alignment, and training steps with a single forward pass, greatly accelerating feature extraction; and (ii) it shares parameters for all protein families, enabling generalization by capturing commonality across millions of evolutionarily diverse and unrelated sequences. (Work performed during an internship at Facebook. Weights for all ESM-1 and ESM-1b models, as well as regressions trained on these models, can be found at https://github.com/facebookresearch/esm.) We demonstrate that Transformer protein language models learn contacts in the self-attention maps with state-of-the-art performance. We compare ESM-1b (Rives et al., 2020), a large-scale (650M parameters) Transformer model trained on UniRef50 (Suzek et al., 2007), to the Gremlin (Kamisetty et al., 2013) pipeline, which implements a log-linear model trained with pseudolikelihood (Balakrishnan et al., 2011; Ekeberg et al., 2013). Contacts can be extracted from the attention maps of the Transformer model by a sparse linear combination of attention heads identified by logistic regression. ESM-1b model contacts have higher precision than Gremlin contacts. When ESM and Gremlin are compared with access to the same set of sequences, the precision gain from the protein language model is significant; the advantage holds on average even when Gremlin is given access to an optimized set of multiple sequence alignments incorporating metagenomics data. We find a linear relationship between language modeling perplexity and contact precision. We also find evidence for the value of parameter sharing: the ESM-1b model significantly outperforms Gremlin on proteins with low-depth MSAs.
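The head-combination step can be illustrated with a minimal L1-penalized logistic regression. The proximal-gradient implementation below runs on synthetic "attention" features with two informative columns as a stand-in for real per-pair attention values; the feature construction and hyperparameters are assumptions for the sketch, not the paper's actual regression setup:

```python
import numpy as np

def l1_logistic_regression(X, y, lam=0.05, lr=0.1, steps=2000):
    """L1-penalized logistic regression via proximal gradient descent.

    The soft-thresholding (proximal) step drives most coefficients to
    exactly zero, yielding a sparse linear combination of features.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))            # sigmoid
        w = w - lr * (X.T @ (p - y) / n)              # logistic-loss step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # prox
    return w

# Synthetic stand-in for per-pair attention features: 12 "heads", of
# which only heads 3 and 7 carry the contact signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 12))
y = (X[:, 3] + X[:, 7] > 0).astype(float)
w = l1_logistic_regression(X, y)
print(np.flatnonzero(w))  # indices of the heads the regression keeps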
Finally we explore the Transformer language model's ability to generate sequences and show that generated sequences preserve contact information. 2 BACKGROUND. Multiple Sequence Alignments (MSAs) A multiple sequence alignment consists of a set of evolutionarily related protein sequences. Since real protein sequences are likely to have insertions, deletions, and substitutions, the sequences are aligned by minimizing a Levenshtein distance-like metric over all the sequences. In practice heuristic alignment schemes are used. Tools like Jackhmmer and HHblits can increase the number and diversity of sequences returned by iteratively performing the search and alignment steps (Johnson et al., 2010; Remmert et al., 2012). Metrics For a protein of length L, we evaluate the precision of the top L, L/2, and L/5 contacts for short-range (|i − j| ∈ [6, 12)), medium-range (|i − j| ∈ [12, 24)), and long-range (|i − j| ∈ [24, ∞)) contacts. We also separately evaluate local contacts (|i − j| ∈ [3, 6)) for secondary structure prediction in Appendix A.9. In general, all contacts provide information about protein structure and important interactions, with shorter-range contacts being useful for secondary and local structure, while longer-range contacts are useful for determining global structure (Taylor et al., 2014). 3 RELATED WORK. There is a long history of protein contact prediction (Adhikari & Cheng, 2016), both from MSAs and, more recently, with protein language models. Supervised contact prediction Recently, supervised methods using deep learning have resulted in breakthrough results in supervised contact prediction (Wang et al., 2017; Jones & Kandathil, 2018; Yang et al., 2019; Senior et al., 2020; Adhikari & Elofsson, 2020). State-of-the-art methods use deep residual networks trained with supervision from many protein structures.
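The top-k precision metric described above can be made concrete in a few lines; `min_sep=24` below selects the long-range band, and the toy score and label matrices are invented for illustration:

```python
import numpy as np

def precision_at_k(scores, contacts, k, min_sep=24):
    """Precision of the top-k predicted contacts in one separation band.

    scores, contacts: (L, L) arrays of prediction scores and 0/1 labels;
    min_sep=24 selects the long-range band |i - j| >= 24.
    """
    i, j = np.triu_indices(scores.shape[0], k=min_sep)
    top = np.argsort(scores[i, j])[::-1][:k]
    return contacts[i[top], j[top]].mean()

# Toy example: three scored pairs in the long-range band, two correct.
L = 60
scores = np.zeros((L, L))
contacts = np.zeros((L, L))
contacts[0, 30] = contacts[0, 50] = 1
scores[0, 30], scores[0, 50], scores[0, 40] = 0.9, 0.8, 0.7
p_at_3 = precision_at_k(scores, contacts, k=3)
print(p_at_3)  # 2 of the top 3 predictions are true contacts: 0.666...
```

For the top-L, top-L/2, and top-L/5 variants one simply calls the same function with k = L, L // 2, and L // 5.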
Inputs are typically covariance statistics (Jones & Kandathil, 2018; Adhikari & Elofsson, 2020) or inferred coevolutionary parameters (Wang et al., 2017; Liu et al., 2018; Senior et al., 2020; Yang et al., 2019). Other recent work with deep learning uses sequences or evolutionary features as inputs (AlQuraishi, 2018; Ingraham et al., 2019). Xu et al. (2020) demonstrate that the incorporation of coevolutionary features is critical to the performance of current state-of-the-art methods. Unsupervised contact prediction In contrast to supervised methods, unsupervised contact prediction models are trained on sequences without information from protein structures. In principle this allows them to take advantage of large sequence databases that include information from many sequences where no structural knowledge is available. The main approach has been to learn evolutionary constraints among a set of similar sequences by fitting a Markov Random Field (Potts model) to the underlying MSA, a technique known as Direct Coupling Analysis (DCA). This was proposed by Lapedes et al. (1999) and reintroduced by Thomas et al. (2008) and Weigt et al. (2009). Various methods have been developed to fit the underlying Markov Random Field, including mean-field DCA (mfDCA) (Morcos et al., 2011), sparse inverse covariance (PSICOV) (Jones et al., 2011), and pseudolikelihood maximization (Balakrishnan et al., 2011; Ekeberg et al., 2013; Seemayer et al., 2014). Pseudolikelihood maximization is generally considered state-of-the-art for unsupervised contact prediction, and the Gremlin (Balakrishnan et al., 2011) implementation is used as the baseline throughout. We also provide mfDCA and PSICOV baselines. Recently, deep learning methods have also been applied to fitting MSAs, and Riesselman et al. (2018) found evidence that factors learned by a VAE model may correlate with protein structure.
Structure prediction from contacts While we do not perform structure prediction in this work, many methods have been proposed to extend contact prediction to structure prediction. For example, EVFold (Marks et al., 2011) and DCAFold (Sulkowska et al., 2012) predict co-evolving couplings using a Potts model and then generate 3D conformations by directly folding an initial conformation with simulated annealing, using the predicted residue-residue contacts as constraints. Similarly, FragFold (Kosciolek & Jones, 2014) and Rosetta (Ovchinnikov et al., 2016) incorporate constraints from a Potts model into a fragment-assembly-based pipeline. Senior et al. (2019) use features from a Potts model fit with pseudolikelihood maximization to predict pairwise distances with a deep residual network and optimize the final structure using Rosetta. All of these works build directly upon the unsupervised contact prediction pipeline. Contact prediction from protein language models Since the introduction of large-scale language models for natural language processing (Vaswani et al., 2017; Devlin et al., 2019), there has been considerable interest in developing similar models for proteins (Alley et al., 2019; Rives et al., 2019; Heinzinger et al., 2019; Rao et al., 2019; Elnaggar et al., 2020; Lu et al., 2020; Madani et al., 2020; Shen et al., 2021). Rives et al. (2019) were the first to study protein Transformer language models, demonstrating that information about residue-residue contacts could be recovered from the learned representations by linear projections supervised with protein structures. Recently, Vig et al. (2020) performed an extensive analysis of Transformer attention, identifying correspondences to biologically relevant features, and also found that different layers of the model are responsible for learning different features. In particular, Vig et al.
(2020) discovered a correlation between self-attention maps and contact patterns, suggesting they could be used for contact prediction. Prior work benchmarking contact prediction with protein language models has focused on the supervised problem. Bepler & Berger (2019) were the first to fine-tune an LSTM pretrained on protein sequences to fit contacts. Rao et al. (2019) and Rives et al. (2020) benchmark multiple protein language models using a deep residual network fit with supervised learning on top of pretrained language modeling features. In contrast to previous work on protein language models, we find that a state-of-the-art unsupervised contact predictor can be directly extracted from the Transformer self-attention maps. We perform a thorough analysis of the contact predictor, showing relationships between performance and MSA depth as well as language modeling perplexity. We also provide methods for improving performance using sequences from an MSA and for sampling sequences in a manner that preserves contacts. 4 MODELS. We compare Transformer models trained on large sequence databases to Potts models trained on individual MSAs. While Transformers and Potts models emerged in separate research communities, the two models share core similarities (Wang & Cho, 2019), which we exploit here. Our main result is that just as Gremlin directly represents contacts via its pairwise component (the weights), the Transformer also directly represents contacts via its pairwise component (the self-attention). 4.1 OBJECTIVES. For a set of training sequences, X, Gremlin optimizes the following pseudolikelihood loss, where a single position is masked and predicted from its context.
Inputs are aligned, so all have length L:

L_PLL(X; θ) = E_{x∼X} ∑_{i=1}^{L} log p(x_i | x_{j≠i}; θ)   (1)

The masked language modeling (MLM) loss used by the Transformer models can be seen as a generalization of the Potts model objective when written as follows:

L_MLM(X; θ) = E_{x∼X} E_{mask} ∑_{i∈mask} log p(x_i | x_{j∉mask}; θ)   (2)

In contrast to Gremlin, the MLM objective applied by protein language modeling is trained on unaligned sequences. The key distinction of MLM is to mask and predict multiple positions concurrently, instead of masking and predicting one at a time. This enables the model to scale beyond individual MSAs to massive sequence datasets. In practice, the expectation under the masking pattern is computed stochastically using a single sample at each epoch.
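The structural difference between the two objectives, one masked position at a time versus a sampled set of concurrently masked positions, can be sketched with a stub model; the uniform predictor below is a placeholder, not a trained Potts model or Transformer:

```python
import numpy as np

VOCAB = 20                      # amino-acid alphabet size
rng = np.random.default_rng(0)

def stub_logp(x, visible):
    """Placeholder for log p(x_i | context): uniform over the alphabet.

    A real Potts model or Transformer would condition on the residues
    at the visible positions; the stub keeps the sketch self-contained.
    """
    return np.full(len(x), -np.log(VOCAB))

def pll_loss(x):
    # Pseudolikelihood (Eq. 1): mask and predict one position at a
    # time, so every sequence contributes exactly L terms.
    L = len(x)
    return -sum(stub_logp(x, [j for j in range(L) if j != i])[i]
                for i in range(L))

def mlm_loss(x, mask_frac=0.15):
    # MLM (Eq. 2): mask several positions concurrently; one stochastic
    # sample of the masking pattern per epoch.
    L = len(x)
    mask = rng.choice(L, size=int(mask_frac * L), replace=False)
    visible = [j for j in range(L) if j not in set(mask)]
    return -stub_logp(x, visible)[mask].sum()

x = rng.integers(0, VOCAB, size=100)
print(pll_loss(x))   # 100 * log(20), one term per position
print(mlm_loss(x))   # 15 * log(20), one term per masked position
```

With the uniform stub both losses are just counts of masked positions times log 20; the point of the sketch is the control flow, which also shows why MLM needs no alignment: the mask is drawn per sequence, not per MSA column.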
The paper performs a number of analyses centered around the ability of transformer-based language models trained on protein sequence data to learn representations useful for predicting protein secondary and tertiary structure (the latter as contact maps). Specifically, the paper studies several pre-trained transformer models by fitting an L1-penalized logistic regression to amino acid pair contacts. Several experiments are performed to showcase that (i) transformer-based representations can outperform state-of-the art methods based on MSA in terms of contact prediction precision; (ii) that the necessary information for contact predictions in these representations is learned in an unsupervised manner (and not by the logistic regression put on top of these representations); and (iii) that the contact prediction probabilities are reasonably well calibrated.
Guiding Representation Learning in Deep Generative Models with Policy Gradients
1 INTRODUCTION. Reinforcement Learning (RL) gained much popularity in recent years by outperforming humans in games such as Atari (Mnih et al. (2015)), Go (Silver et al. (2016)) and Starcraft 2 (Vinyals et al. (2017)). These results were facilitated by combining novel machine learning techniques such as deep neural networks (LeCun et al. (2015)) with classical RL methods. The RL framework has shown to be quite flexible and has been applied successfully in many further domains, for example, robotics (Andrychowicz et al. (2020)), resource management (Mao et al. (2016)) or physiologically accurate locomotion (Kidziński et al. (2018)). The goal of representation learning is to learn a suitable representation for a given application domain. Such a representation should contain useful information for a particular downstream task and capture the distribution of explanatory factors (Bengio et al. (2013)). Typically, the choice of a downstream task influences the choice of method for representation learning. While Generative Adversarial Networks (GANs) are frequently used for tasks that require high-fidelity reconstructions or generation of realistic new data, auto-encoder-based methods have been more common in RL. Recently, many such approaches employed the Variational Auto-Encoder (VAE) (Kingma & Welling (2013)) framework, which aims to learn a smooth representation of its domain. Most of these approaches follow the same pattern: first, they build a dataset of states from the RL environment; second, they train the VAE on this static dataset; and lastly they train the RL model using the VAE's representation. While this procedure generates sufficiently good results for certain scenarios, there are some fundamental issues with this method. Such an approach assumes that it is possible to collect enough data and observe all task-relevant states in the environment without knowing how to act in it.
As a consequence, when learning to act the agent will only have access to a representation that is optimized for the known and visited states. As soon as the agent becomes more competent, it might experience novel states that have not been visited before and for which there is no good representation (in the sense that the experienced states are out of the originally learned distribution and the mapping is not appropriate). Another issue arises from the manner in which the representation is learned. Usually, the VAE is trained in isolation, so it decides what features are learned based on its own objective function and not on what is helpful for the downstream task. Mostly, such a model is tuned for good reconstruction. Without the information from the RL model, such a representation does not reflect what is important for the downstream task. As a consequence, the VAE might omit learning features that are crucial for good performance on the task because they appear negligible with respect to reconstruction (Goodfellow et al. (2016), Chapter 15, Figure 15.5). For example, small objects in pixel-space are ignored, as they affect a reconstruction-based loss only marginally. Thus, any downstream task using such a representation will have no access to information about such objects. A good example of such a task is Atari Breakout, a common RL benchmark. Figures 1a and 1b show an original Breakout frame and its reconstruction. While the original frame contains the ball in the lower right-hand corner, this crucial feature is missing completely in the reconstruction. We address this issue by learning the representation and the RL task simultaneously, that is, by combining the training of both models. This eliminates the need to collect data before knowing the environment, as it combines the VAE and RL objectives. As a consequence, the VAE has an incentive to represent features that are relevant to the RL model.
The main contributions of this paper are as follows: first, we show that combined learning is possible and that it yields well-performing policies. Second, we show that jointly trained representations incorporate additional, task-specific information which allows an RL agent to achieve higher rewards than if it was trained on a static representation. This will be shown indirectly by comparing achieved rewards as well as directly through an analysis of the trained model and its representation. 2 RELATED WORK. Lange & Riedmiller (2010) explored Auto-Encoders (AE) (Lecun (1987); Bourlard & Kamp (1988); Hinton & Zemel (1994)) as a possible pre-processor for RL algorithms. The main focus of their work was finding good representations for high-dimensional state spaces that enable policy learning. As input, rendered images from the commonly used grid-world environment were used. The agent had to manoeuvre through a discretized map using one of four discrete movement actions per timestep. It received a positive reward once reaching the goal tile and negative rewards elsewhere. The AE bottleneck consisted of only two neurons, which corresponds to the dimensionality of the environment's state. Fitted Q-Iteration (FQI) (Ernst et al. (2005)) was used to estimate the Q-function, which the agent then acted ε-greedily upon. Besides RL, they also used the learned representation to classify the agent's position given an encoding, using a Multi-Layer Perceptron (MLP) (Rumelhart et al. (1985)). For these experiments, they found that adapting the encoder using MLP gradients led to an accuracy of 99.46 %. However, they did not apply this approach to their RL task. A compelling example of separate training of a meaningful representation is provided by Higgins et al. (2017b), who proposed a framework called DARLA. They trained RL agents on the encoding of a β-VAE (Higgins et al. (2016); Higgins et al.
(2017a)) with the goal of zero-shot domain transfer. In their approach, β-VAE and agent were trained separately on a source domain and then evaluated in a target domain. Importantly, source and target domain are similar to a certain extent and only differ in some features, e.g. a blue object in the source domain might be red in the target domain. During training of the β-VAE, the pixel-based reconstruction loss was replaced with a loss calculated in the latent space of a Denoising Auto-Encoder (DAE) (Vincent et al. (2008)). Thereby their approach avoids missing task-relevant feature encodings, at the cost of training another model. For one of their evaluation models, they allowed the RL gradients to adapt the encoder. Their results show that subsequent encoder learning improves performance of Deep Q-Learning (DQN) but decreases performance of Asynchronous Advantage Actor-Critic (A3C) (Mnih et al. (2016)). Ha & Schmidhuber (2018) proposed a combination of a VAE, Recurrent Neural Networks (RNN) (Hochreiter & Schmidhuber (1997)) and a simple policy as a controller. They hypothesized that by learning a good representation of the environment and having the ability to predict future states, learning the policy itself becomes a trivial task. As in most other models, the VAE was pre-trained on data collected by a random policy. Only the RNN and the controller were trained online. The compressed representation from the VAE was passed into an RNN in order to estimate a probability density for the subsequent state. The controller was deliberately chosen as a single linear layer and could thus be optimized with Covariance Matrix Adaptation - Evolution Strategy (CMA-ES) (Hansen (2006)). This work demonstrated how a VAE can provide a versatile representation that can be utilized in reinforcement learning. In addition, such an approach allows predicting the subsequent encoded state.
While these findings encourage the use of VAEs in conjunction with RL, this is only possible in environments where the state space can be explored sufficiently by a random policy. However, if the policy can only discover important features after acquiring a minimal level of skill, sampling the state space using a random policy will not yield high-performing agents. Learning such features would only be possible if the VAE is continuously improved during policy training. Another interesting combination of VAEs and RL was recently proposed by Yang et al. (2019), with their so-called Action-Conditional Variational Auto-Encoder (AC-VAE). Their motivation for creating this model was to train a transparent, interpretable policy network. Usually, the β-VAE's decoder is trained to reconstruct the input based on the representation the encoder produced. In this work, though, the decoder's objective was to predict the subsequent state s_{t+1}. As input it got the latent-space vector z combined with an action-mapping vector, which is the action vector a_t with zero-padding to match the latent space's dimensionality. Inspecting the decoder estimates for s_{t+1} when varying one dimension of the latent space showed that each dimension encoded a possible subsequent state that is likely to be encountered if the corresponding action from this dimension was taken. Unfortunately, the authors did not report any rewards they achieved on Breakout, hence it was not possible for us to compare model performances. 3 COMBINATION OF REINFORCEMENT AND REPRESENTATION LEARNING OBJECTIVES. In this section, we will first revisit the fundamentals of RL and VAEs and discuss their different objective functions. Then, we propose a joint objective function that allows for joint training of both models using gradient-descent-based learning methods. 3.1 REINFORCEMENT LEARNING WITH POLICY OPTIMIZATION.
RL tries to optimize a Markov Decision Process (MDP) (Bellman (1957)) that is given by the tuple ⟨S, A, r, p, γ⟩. S denotes the state space, A the action space, and p : S × R × S × A → [0, 1] the environment's dynamics function that, provided a state-action pair, gives the state distribution for the successor state. r : S × A → R is the reward and γ ∈ [0, 1) the scalar discount factor. The policy π_θ(a|s) is a stochastic function that gives a probability distribution over actions for state s. θ denotes the policy's parameter vector, which is typically subject to optimization. A trajectory τ = (s_0, a_0, ..., s_T, a_T) consisting of an alternating sequence of states and actions can be sampled in the environment, where T stands for the final timestep of the trajectory and a_i ∼ π_θ(a_i|s_i). The overarching goal of RL is to find a policy that maximizes the average collected reward over all trajectories. This can be expressed as the optimization problem max E_{τ∼p(τ)} [∑_t r(s_t, a_t)], which can also be written in terms of an optimal policy parameter vector θ* = argmax_θ E_{τ∼p(τ)} [∑_t r(s_t, a_t)]. When trying to optimize the policy directly by searching for θ*, policy optimization algorithms like A3C, Actor-Critic with Experience Replay (ACER) (Wang et al. (2016a)), Trust Region Policy Optimization (TRPO) (Schulman et al. (2015a)) or Proximal Policy Optimization (PPO) (Schulman et al. (2017)) are commonly used. The fundamental idea behind policy optimization techniques is to calculate gradients of the RL objective with respect to the policy parameters:

∇_θ J(θ) = E_{τ∼p(τ)} [∇_θ log π_θ(τ) r(τ)]   (1)

where we defined r(τ) = ∑_{t=0}^{T} r(s_t, a_t) for brevity. However, most policy optimization methods introduce heavy modifications to this vanilla gradient in order to achieve more stable policy updates.
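Equation (1) can be demonstrated end-to-end on the smallest possible MDP, a two-armed bandit with a softmax policy; this vanilla REINFORCE sketch is illustrative only and unrelated to the paper's PPO setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Smallest possible MDP: a two-armed bandit where arm 1 pays more on
# average. theta parameterizes a softmax policy over the two actions.
true_means = np.array([0.0, 1.0])
theta = np.zeros(2)

for _ in range(2000):
    pi = softmax(theta)
    a = rng.choice(2, p=pi)
    r = true_means[a] + rng.normal(scale=0.1)
    grad_logpi = -pi                  # d/dtheta log softmax, all arms
    grad_logpi[a] += 1.0              # ... plus indicator of chosen arm
    theta += 0.05 * grad_logpi * r    # Eq. (1): grad log pi * reward

print(f"P(better arm) = {softmax(theta)[1]:.3f}")
```

The update is exactly the sampled form of the gradient in Eq. (1); the "heavy modifications" mentioned in the text (baselines, clipping, trust regions) all exist to reduce the variance and instability of this raw estimator.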
Throughout our work, we have used PPO as the RL algorithm because it is quite sample-efficient and usually produces stable policy updates. For an in-depth description of PPO, we refer to Appendix A.1 or the original work of Schulman et al. (2017).
This paper proposes a method for reinforcement learning with representations learned from a VAE. The VAE is used to encode the states (images) and the mean of the posterior is used as input to the policy. The VAE is trained using the variational lower bound and the policy is optimized using PPO. The model can be jointly trained with VAE and RL losses from scratch, or the VAE can be pre-trained and fixed, or pre-trained and finetuned.
Guiding Representation Learning in Deep Generative Models with Policy Gradients
1 INTRODCTION . Reinforcement Learning ( RL ) gained much popularity in recent years by outperforming humans in games such as Atari ( Mnih et al . ( 2015 ) ) , Go ( Silver et al . ( 2016 ) ) and Starcraft 2 ( Vinyals et al . ( 2017 ) ) . These results were facilitated by combining novel machine learning techniques such as deep neural networks ( LeCun et al . ( 2015 ) ) with classical RL methods . The RL framework has shown to be quite flexible and has been applied successfully in many further domains , for example , robotics ( Andrychowicz et al . ( 2020 ) ) , resource management ( Mao et al . ( 2016 ) ) or physiologically accurate locomotion ( Kidziński et al . ( 2018 ) ) . The goal of representation learning is to learn a suitable representation for a given application domain . Such a representation should contain useful information for a particular downstream task and capture the distribution of explanatory factors ( Bengio et al . ( 2013 ) ) . Typically , the choice of a downstream task influences the choice of method for representation learning . While Generative Adversarial Network ( GAN ) s are frequently used for tasks that require high-fidelity reconstructions or generation of realistic new data , auto-encoder based methods have been more common in RL . Recently , many such approaches employed the Variational Auto Encoder ( VAE ) ( Kingma & Welling ( 2013 ) ) framework which aims to learn a smooth representation of its domain . Most of these approaches follow the same pattern : First , they build a dataset of states from the RL environment . Second , they train the VAE on this static dataset and lastly train the RL mode using the VAE ’ s representation . While this procedure generates sufficiently good results for certain scenarios , there are some fundamental issues with this method . Such an approach assumes that it is possible to collect enough data and observe all task-relevant states in the environment without knowing how to act in it . 
As a consequence , when learning to act the agent will only have access to a representation that is optimized for the known and visited states . As soon as the agent becomes more competent , it might experience novel states that have not been visited before and for which there is no good representation ( in the sense that the experienced states are out of the original learned distribution and the mapping is not appropriate ) . Another issue arises from the manner the representation is learned . Usually , the VAE is trained in isolation , so it decides what features are learned based on its own objective function and not on what is helpful for the downstream task . Mostly , such a model is tuned for good reconstruction . Without the information from the RL model , such a representation does not reflect what is important for the downstream task . As a consequence , the VAE might omit learning features that are crucial for good performance on the task because they appear negligible with respect to reconstruction ( Goodfellow et al . ( 2016 ) , Chapter 15 , Figure 15.5 ) . For example , small objects in pixel-space are ignored as they affect a reconstruction based loss only marginally . Thus , any downstream task using such a representation will have no access to information about such objects . A good example for such a task is Atari Breakout , a common RL benchmark . Figures 1a and 1b show an original Breakout frame and its reconstruction . While the original frame contains the ball in the lower right hand corner , this crucial feature is missing completely in the reconstruction . We approach this issue through simultaneously learning representation and RL task , that is by combining the training of both models . As an advantage , this abolishes the need of collecting data before knowing the environment as it combines VAE and RL objectives . In consequence the VAE has an incentive to represent features that are relevant to the RL model . 
The main contributions of this paper are as follows : First we show that combined learning is possible and that it yields good performing policies . Second , we show that jointly trained representations incorporate additional , task-specific information which allows a RL agent to achieve higher rewards then if it was trained on a static representation . This will be shown indirectly by comparing achieved rewards as well as directly through an analysis of the trained model and its representation . 2 RELATED WORK . Lange & Riedmiller ( 2010 ) explored Auto Encoder ( AE ) ( Lecun ( 1987 ) ; Bourlard & Kamp ( 1988 ) ; Hinton & Zemel ( 1994 ) ) as a possible pre-processor for RL algorithms . The main focus in their work was finding good representations for high dimensional state spaces that enables policy learning . As input , rendered images from the commonly used grid world environment were used . The agent had to manoeuvre through a discretized map using one of four discrete movement actions per timestep . It received a positive reward once reaching the goal tile and negative rewards elsewhere . The AE bottleneck consisted only of two neurons , which corresponds to the dimensionality of the environemnt ’ s state . Fitted Q-Iteration ( FQI ) ( Ernst et al . ( 2005 ) ) was used to estimate the Q-function , which the agent then acted -greedy upon . Besides RL , they also used the learned representation to classify the agents position given an encoding using a Multi-Layer Perceptron ( MLP ) ( Rumelhart et al . ( 1985 ) ) . For these experiments , they found that adapting the encoder using MLP gradients lead to an accuracy of 99.46 % . However , they did not apply this approach to their RL task . A compelling example for separate training of meaningful representation is provided by Higgins et al . ( 2017b ) who proposed a framework called DARLA . They trained RL agents on the encoding of a β-VAE ( Higgins et al . ( 2016 ) ; Higgins et al . 
( 2017a ) ) with the goal of zero-shot domain transfer . In their approach , β-VAE and agent were trained separately on a source domain and then evaluated in a target domain . Importantly , source and target domain are similar to a certain extent and only differ in some features , e.g . a blue object in the source domain might be red in the target domain . During training of the β-VAE , the pixel-based reconstruction loss was replaced with a loss calculated in the latent space of a Denoising Auto Encoder ( DAE ) ( Vincent et al . ( 2008 ) ) . Thereby their approach avoids missing task relevant feature encodings at the cost of training another model . For one of their evaluation models , they allowed the RL gradients to adapt the encoder . Their results show that subsequent encoder learning improves performance of Deep Q-Learning ( DQN ) but decreases performance of Asynchronous Advantage Actor-Critic ( A3C ) ( Mnih et al . ( 2016 ) ) . Ha & Schmidhuber ( 2018 ) proposed a combination of VAE , Recurrent Neural Networks ( RNN ) ( Hochreiter & Schmidhuber ( 1997 ) ) and a simple policy as a controller . They hypothesized that by learning a good representation of the environment and having the ability to predict future states , learning the policy itself becomes a trivial task . Like in most other models , the VAE was pre-trained on data collected by a random policy . Only the RNN and the controller were trained online . The compressed representation from the VAE was passed into a RNN in order to estimate a probability density for the subsequent state . The controller was deliberately chosen as a single linear layer and could thus be optimized with Covariance Matrix Adaptation - Evolution Strategy ( CMA-ES ) ( Hansen ( 2006 ) ) . This work demonstrated how a VAE can provide a versatile representation that can be utilized in reinforcement learning . In addition , such an approach allows to predict the subsequent encoded state . 
While these findings encourage the use of VAEs in conjunction with RL, this is only possible in environments where the state space can be explored sufficiently by a random policy. However, if the policy can only discover important features after acquiring a minimal level of skill, sampling the state space using a random policy will not yield high-performing agents. Learning such features would only be possible if the VAE is continuously improved during policy training. Another interesting combination of VAEs and RL was recently proposed by Yang et al. (2019), with their so-called Action-Conditional Variational Auto-Encoder (AC-VAE). Their motivation for creating this model was to train a transparent, interpretable policy network. Usually, the β-VAE's decoder is trained to reconstruct the input based on the representation the encoder produced. In this work, though, the decoder's objective was to predict the subsequent state st+1. As input it received the latent-space vector z combined with an action-mapping vector, which is the action vector at zero-padded to match the latent space's dimensionality. Inspecting the decoder's estimates for st+1 when varying one dimension of the latent space showed that each dimension encoded a possible subsequent state that is likely to be encountered if the corresponding action from this dimension was taken. Unfortunately, the authors did not report any rewards they achieved on Breakout, hence it was not possible for us to compare model performances. 3 COMBINATION OF REINFORCEMENT AND REPRESENTATION LEARNING OBJECTIVES. In this section, we first revisit the fundamentals of RL and VAEs and discuss their different objective functions. Then, we propose a joint objective function that allows for joint training of both models using gradient-descent-based learning methods. 3.1 REINFORCEMENT LEARNING WITH POLICY OPTIMIZATION.
RL tries to optimize a Markov Decision Process (MDP) (Bellman (1957)) given by the tuple 〈S, A, r, p, γ〉. S denotes the state space, A the action space, and p : S × R × S × A → [0, 1] the environment's dynamics function that, provided a state-action pair, gives the state distribution for the successor state. r : S × A → R is the reward and γ ∈ [0, 1) the scalar discount factor. The policy πθ(a|s) is a stochastic function that gives a probability distribution over actions for state s. θ denotes the policy's parameter vector, which is typically subject to optimization. A trajectory τ = (s0, a0, ..., sT, aT), consisting of an alternating sequence of states and actions, can be sampled in the environment, where T stands for the final timestep of the trajectory and ai ∼ πθ(ai|si). The overarching goal of RL is to find a policy that maximizes the average collected reward over all trajectories. This can be expressed as the optimization problem max E_{τ∼p(τ)}[Σ_t r(s, a)], which can also be written in terms of an optimal policy parameter vector θ∗ = argmax_θ E_{τ∼p(τ)}[Σ_t r(s, a)]. When trying to optimize the policy directly by searching for θ∗, policy optimization algorithms like A3C, Actor-Critic with Experience Replay (ACER) (Wang et al. (2016a)), Trust Region Policy Optimization (TRPO) (Schulman et al. (2015a)), or Proximal Policy Optimization (PPO) (Schulman et al. (2017)) are commonly used. The fundamental idea behind policy optimization techniques is to calculate gradients of the RL objective with respect to the policy parameters: ∇θJ(θ) = E_{τ∼p(τ)}[∇θ log πθ(τ) r(τ)] (1), where we defined Σ_{t=0}^{T} r(s, a) = r(τ) for brevity. However, most policy optimization methods introduce heavy modifications to this vanilla gradient in order to achieve more stable policy updates.
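Equation (1) can be made concrete with a Monte-Carlo (REINFORCE-style) estimate. The sketch below uses a softmax policy over three actions in a single bandit-like state, with hypothetical per-action rewards; it is an illustration of the vanilla gradient, not the (modified) estimators used by A3C/ACER/TRPO/PPO.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = np.zeros(3)                         # policy logits (the parameters)

def policy(theta):
    """Softmax policy πθ over 3 actions."""
    e = np.exp(theta - theta.max())
    return e / e.sum()

def grad_log_pi(theta, a):
    """∇θ log πθ(a) for a softmax policy: one-hot(a) - πθ."""
    return np.eye(3)[a] - policy(theta)

true_reward = np.array([0.0, 1.0, 0.2])     # hypothetical per-action rewards

# Monte-Carlo estimate of ∇θ J(θ) = E[∇θ log πθ(τ) r(τ)]
N = 5000
grad = np.zeros(3)
for _ in range(N):
    a = rng.choice(3, p=policy(theta))      # sample a (one-step) trajectory
    grad += grad_log_pi(theta, a) * true_reward[a]
grad /= N
# The estimate points toward the high-reward action (index 1).
```

A gradient-ascent step `theta += lr * grad` would then shift probability mass toward action 1, which is exactly the update the "heavy modifications" mentioned above try to stabilize.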
Throughout our work, we have used PPO as the RL algorithm because it is quite sample-efficient and usually produces stable policy updates. For an in-depth description of PPO, we refer to Appendix A.1 or the original work of Schulman et al. (2017).
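The stabilizing idea behind PPO can be sketched in a few lines: the clipped surrogate objective from Schulman et al. (2017) limits how much a single update can exploit large probability ratios. The numbers below are illustrative, not from the paper's experiments.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate from Schulman et al. (2017):
    L = E[ min(r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t) ],
    where r_t = πθ(a|s) / πθ_old(a|s)."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.mean(np.minimum(unclipped, clipped))

# With a positive advantage, a ratio far above 1+eps earns no extra credit:
# per-sample terms are min(0.5, 0.8) = 0.5, min(1.0, 1.0) = 1.0,
# and min(3.0, 1.2) = 1.2, so the mean is 0.9.
ratios = np.array([0.5, 1.0, 3.0])
advs = np.array([1.0, 1.0, 1.0])
obj = ppo_clip_objective(ratios, advs)
```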
The paper focuses on the issue of learning a policy for a given task using the learned representations of a pre-trained VAE. The authors show that using the learned latent space of a pre-trained VAE is not good enough for learning policies, and propose a solution for this problem: back-propagating policy gradients through the VAE encoder. The authors propose two versions of this method, one with pre-training and one fully online.
SP:8a9e9fec36d06b122a226ccac91a869963b148b2
Benefits of Assistance over Reward Learning
1 INTRODUCTION. Traditional computer programs are instructions on how to perform a particular task. However, we do not know how to mechanically perform more challenging tasks like translation. The field of artificial intelligence raises the level of abstraction so that we simply specify what the task is, and let the machine figure out how to do it. As task complexity increases, even specifying the task becomes difficult. Several criteria that we might have thought were part of a specification of fairness turn out to be provably impossible to satisfy simultaneously (Kleinberg et al., 2016; Chouldechova, 2017; Corbett-Davies et al., 2017). Reinforcement learning agents often "game" their reward function by finding solutions that technically achieve high reward without doing what the designer intended (Lehman et al., 2018; Krakovna, 2018; Clark & Amodei, 2016). In complex environments, we need to specify what not to change (McCarthy & Hayes, 1981); failure to do so can lead to negative side effects (Amodei et al., 2016). Powerful agents with poor specifications may pursue instrumental subgoals (Bostrom, 2014; Omohundro, 2008) such as resisting shutdown and accumulating resources and power (Turner, 2019). A natural solution is to once again raise the level of abstraction, and create an agent that is uncertain about the objective and infers it from human feedback, rather than directly specifying some particular task(s). Rather than using the current model of intelligent agents optimizing for their objectives, we would now have beneficial agents optimizing for our objectives (Russell, 2019). Reward learning (Leike et al., 2018; Jeon et al., 2020; Christiano et al., 2017; Ziebart et al., 2010) attempts to instantiate this by learning a reward model from human feedback, and then using a control algorithm to optimize the learned reward.
Crucially, the control algorithm does not reason about the effects of the chosen actions on the reward learning process, which is external to the environment. In contrast, in the assistance paradigm (Hadfield-Menell et al., 2016; Fern et al., 2014), the human H is modeled as part of the environment and as having some latent goal that the agent R (for robot) does not know. R's goal is to maximize this (unknown) human goal. In this formulation, R must balance between actions that help learn about the unknown goal, and control actions that lead to high reward. Our key insight is that by integrating reward learning and control modules, assistive agents can take into account the reward learning process when selecting actions. This gives assistive agents a significant advantage over reward learning agents, which cannot perform similar reasoning. The goal of this paper is to clarify and illustrate this advantage. We first precisely characterize the differences between reward learning and assistance, by showing that two-phase, communicative assistance is equivalent to reward learning (Section 3). We then give qualitative examples of desirable behaviors that can only be expressed once these restrictions are lifted, and thus are only exhibited by assistive agents (Section 4). Consider for example the kitchen environment illustrated in Figure 1, in which R must bake a pie for H. R is uncertain about which type of pie H prefers to have, and currently H is at work and cannot answer R's questions. An assistive R can make the pie crust, but wait to ask H about her preferences over the filling (Section 4.1). R may never clarify all of H's preferences: for example, R only needs to know how to dispose of food if it turns out that the ingredients have gone bad (Section 4.2). If H will help with making the pie, R can allow H to disambiguate her desired pie by watching what filling she chooses (Section 4.3).
Vanilla reward learning agents do not show these behaviors. We do not mean to suggest that all work on reward learning should cease and only research on assistive agents should be pursued. Amongst other limitations, assistive agents are very computationally complex. Our goal is simply to clarify what qualitative benefits an assistive formulation could theoretically provide. Further research is needed to develop efficient algorithms that can capture these benefits. Such algorithms may look like algorithms designed to solve assistance problems as we have formalized them here, but they may also look like modified variants of reward learning, where the modifications are designed to provide the qualitative benefits we identify. 2 BACKGROUND AND RELATED WORK. We introduce the key ideas behind reward learning and assistance. X∗ denotes a sequence of X. We use parametric specifications for ease of exposition, but our results apply more generally. 2.1 POMDPS. A partially observable Markov decision process (POMDP) M = 〈S, A, Ω, O, T, r, P0, γ〉 consists of a finite state space S, a finite action space A, a finite observation space Ω, an observation function O : S → ∆(Ω) (where ∆(X) is the set of probability distributions over X), a transition function T : S × A → ∆(S), a reward function r : S × A × S → R, an initial state distribution P0 : ∆(S), and a discount rate γ ∈ (0, 1). We will write ot to signify the tth observation O(st). A solution to the POMDP is given by a policy π : (Ω × A)∗ × Ω → ∆(A) that maximizes the expected sum of rewards ER(π) = E_{s0∼P0, at∼π(·|o0:t, a0:t−1), st+1∼T(·|st, at)}[Σ_{t=0}^{∞} γ^t r(st, at, st+1)]. 2.2 REWARD LEARNING. We consider two variants of reward learning: non-active reward learning, in which R must infer the reward by observing H's behavior, and active reward learning, in which R may choose particular questions to ask H in order to get particular feedback.
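The inner sum of ER(π) above is the familiar discounted return. As a minimal sketch (using a backward fold rather than explicit powers of γ):

```python
def discounted_return(rewards, gamma=0.9):
    """Compute Σ_t γ^t r_t, the inner sum of ER(π), by a backward fold:
    G_t = r_t + γ G_{t+1}."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

g = discounted_return([1.0, 1.0, 1.0], gamma=0.5)  # 1 + 0.5 + 0.25 = 1.75
```

The outer expectation in ER(π) then averages this quantity over start states, observations-conditioned actions, and transitions.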
A non-active reward learning problem P = 〈M\r, C, 〈Θ, rθ, PΘ〉, πH, k〉 contains a POMDP without reward M\r = 〈S, AR, ΩR, OR, T, P0, γ〉, and instead R has access to a parameterized reward space 〈Θ, rθ, PΘ〉. R is able to learn about θ∗ by observing H make k different choices c, each chosen from a set of potential choices C. In order for R to learn from the human's choices, it also assumes access to the human decision function πH(c | θ) that determines how the human makes choices for different possible reward functions rθ. Common decision functions include perfect optimality (Ng & Russell, 2000) and Boltzmann rationality (Ziebart et al., 2010). There are many types of choices (Jeon et al., 2020), including demonstrations (Argall et al., 2009; Ng & Russell, 2000; Ziebart et al., 2010; Fu et al., 2017; Gao et al., 2012), comparisons (Zhang et al., 2017; Wirth et al., 2017; Christiano et al., 2017; Sadigh et al., 2017), corrections (Bajcsy et al., 2017), the state of the world (Shah et al., 2019), proxy rewards (Hadfield-Menell et al., 2017b), natural language (Fu et al., 2019), etc. A policy decision function f(c0:k−1) produces a policy πR after observing H's choices. A solution is a policy decision function f that maximizes expected reward E_{θ∼PΘ, c0:k−1∼πH}[ER(f(c0:k−1))]. Since H's choices c0:k−1 do not affect the state of the environment that R is acting in, this is equivalent to choosing πR that maximizes expected reward given the posterior over reward functions, that is E_{θ∼P(θ|c0:k−1)}[ER(πR)]. An active reward learning problem P = 〈M\r, Q, C, 〈Θ, rθ, PΘ〉, πH, k〉 adds the ability for R to ask H particular questions q ∈ Q in order to get more targeted feedback about θ. The human decision function πH(c | q, θ) now depends on the question asked.
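A Boltzmann-rational decision function and the resulting posterior over θ can be sketched as follows. The utility table `U`, the two reward hypotheses, and the rationality coefficient `beta` are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Hypothetical setup: 2 candidate reward parameters θ, 3 possible choices c,
# with U[θ, c] the value H gets from choice c under reward r_θ.
U = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
prior = np.array([0.5, 0.5])            # P_Θ

def boltzmann_pi_H(theta_idx, beta=5.0):
    """Boltzmann-rational decision function π_H(c | θ) ∝ exp(β U[θ, c])."""
    e = np.exp(beta * U[theta_idx])
    return e / e.sum()

def posterior(prior, observed_choice, beta=5.0):
    """P(θ | c) ∝ π_H(c | θ) P(θ): how R updates after observing H's choice."""
    lik = np.array([boltzmann_pi_H(t, beta)[observed_choice]
                    for t in range(len(prior))])
    post = lik * prior
    return post / post.sum()

post = posterior(prior, observed_choice=0)
# Observing choice 0 is strong evidence for θ = 0.
```

A policy decision function f would then pick the πR maximizing expected reward under `post`, matching the equivalence noted in the text.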
A solution consists of a question policy πRQ(qi | q0:i−1, c0:i−1) and a policy decision function f(q0:k−1, c0:k−1) that maximize expected reward E_{θ∼PΘ, q0:k−1∼πRQ, c0:k−1∼πH}[ER(f(q0:k−1, c0:k−1))]. A typical algorithm (Eric et al., 2008; Daniel et al., 2014; Maystre & Grossglauser, 2017; Christiano et al., 2017; Sadigh et al., 2017; Zhang et al., 2017; Wilde et al., 2020) will compute and ask the q ∈ Q that maximizes an active learning criterion such as information gain (Bıyık et al., 2019) or volume removal (Sadigh et al., 2017). Best results are achieved by selecting questions with the highest value of information (Cohn, Robert W, 2016; Zhang et al., 2017; Mindermann et al., 2018; Wilde et al., 2020), but these are usually much more computationally expensive. R then finds a policy that maximizes expected reward under the inferred distribution over θ, in order to approximately solve the original POMDP. Note that a non-active reward learning problem is equivalent to an active reward learning problem with only one question, since having just a single question means that R has no choice in what feedback to get (see Appendix A.1 for proofs). 2.3 ASSISTANCE. The key idea of assistance is that helpful behaviors like reward learning are incentivized when R does not know the true reward r and can only learn about it by observing human behavior. So, we model the human H as part of the environment, leading to a two-agent POMDP, and assume there is some true reward r that only H has access to, while the robot R only has access to a model relating r to H's behavior. Intuitively, as R acts in the environment, it will also observe H's behavior, which it can use to make inferences about the true reward. Following Hadfield-Menell et al. (2016)¹, we define an assistance game M as a tuple M = 〈S, {AH, AR}, {ΩH, ΩR}, {OH, OR}, T, PS, γ, 〈Θ, rθ, PΘ〉〉.
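The information-gain criterion for question selection mentioned above can be sketched concretely: the expected information gain of a question is the expected reduction in entropy of the posterior over θ. The two-hypothesis setup and the answer likelihoods below are illustrative assumptions.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_info_gain(prior, likelihoods):
    """Expected entropy reduction over θ from asking one question.

    likelihoods[t, c] = π_H(c | q, θ_t): rows are reward hypotheses,
    columns are H's possible answers to question q."""
    p_c = prior @ likelihoods                   # marginal over answers
    eig = entropy(prior)
    for c in range(likelihoods.shape[1]):
        post = prior * likelihoods[:, c] / p_c[c]
        eig -= p_c[c] * entropy(post)
    return eig

prior = np.array([0.5, 0.5])
informative = np.array([[0.9, 0.1],             # answers discriminate θ
                        [0.1, 0.9]])
uninformative = np.array([[0.5, 0.5],           # answers reveal nothing
                          [0.5, 0.5]])
# An active learner would ask the question with the larger expected gain.
```

Value-of-information criteria differ in that they weight posteriors by the reward achievable under them, which is why they perform better but cost more to compute.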
Here S is a finite set of states, AH a finite set of actions for H, ΩH a finite set of observations for H, and OH : S → ∆(ΩH) an observation function for H (respectively AR, ΩR, OR for R). The transition function T : S × AH × AR → ∆(S) gives the probability over next states given the current state and both actions. The initial state is sampled from PS ∈ ∆(S). Θ is a set of possible reward function parameters θ which parameterize a class of reward functions rθ : S × AH × AR × S → R, and PΘ is the distribution from which θ is sampled. γ ∈ (0, 1) is a discount factor. As with POMDPs, policies can depend on history. Both H and R are able to observe each other's actions, and on a given timestep, R acts before H. We use τRt ∈ (ΩR × AH × AR)t to denote R's observations until time t, and τHt for H's observations; thus R's policy can be written as πR(aR | oRt, τRt−1), while H's can be written as πH(aH | oHt, aRt, τHt−1, θ). [Footnote 1: Relative to Hadfield-Menell et al. (2016), our definition allows for partial observability and requires that the initial distributions over S and Θ be independent. We also have H choose her action sequentially after R, rather than simultaneously with R, in order to better parallel the reward learning setting.] Note that unlike H, R does not observe the reward parameter θ, and must infer θ much like it does the hidden state. A fully observable assistance game is one in which both H and R can observe the full state. In such cases, we omit ΩH, ΩR, OH and OR. Since we have not yet specified how H behaves, it is not clear what the agent should optimize for. Should it play a Nash strategy or an optimal strategy pair of the game, and if so, which one? Should it use a non-equilibrium policy, since humans likely do not use equilibrium strategies? This is a key hyperparameter in assistance games, as it determines the communication protocol for H and R.
For maximum generality, we can equip the assistance game with a policy-conditioned belief B : ΠR → ∆(ΠH) over πH, which specifies how the human responds to the agent's choice of policy (Halpern & Pass, 2018). The agent's goal is to maximize expected reward given this belief. Prior work on assistance games (Hadfield-Menell et al., 2016; Malik et al., 2018; Woodward et al., 2019) focuses on finding optimal strategy pairs. This corresponds to a belief that H will know and perfectly respond to R's policy (see Appendix A.3). However, our goal is to compare assistance to reward learning. Typical reward learning algorithms assume access to a model of human decision-making: for example, H might be modeled as optimal (Ng & Russell, 2000) or Boltzmann-rational (Ziebart et al., 2010). As a result, we also assume that we have access to a model of human decision-making πH. Note that πH depends on θ: we are effectively assuming that we know how H chooses to behave given a particular reward rθ. This assumption corresponds to the policy-conditioned belief B(πR)(π̃H) = 1[π̃H = πH]. We define an assistance problem P as a pair 〈M, πH〉 where πH is a human policy for the assistance game M. Given an assistance problem, a robot policy πR induces a probability distribution over trajectories: τ ∼ 〈s0, θ, πH, πR〉, τ ∈ [S × AH × AR]∗. We denote the support of this distribution by Traj(πR). The expected reward of a robot policy for 〈M, πH〉 is given by ER(πR) = E_{s0∼PS, θ∼PΘ, τ∼〈s0, θ, πH, πR〉}[Σ_{t=0}^{∞} γ^t rθ(st, aHt, aRt, st+1)]. A solution of 〈M, πH〉 is a robot policy that maximizes expected reward: πR = argmax_{π̃R} ER(π̃R).
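The expected reward ER(πR) of the definition above can be estimated by sampling trajectories. The sketch below is a deliberately tiny one-step toy (with H signaling before R, the reverse of the paper's turn order, purely to isolate the value of conditioning on H's behavior); the preference prior, reward, and policies are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-step assistance problem: θ ∈ {0, 1} is H's preferred "filling";
# the robot earns reward 1 iff its action matches θ.
P_theta = np.array([0.3, 0.7])             # P_Θ

def pi_H(theta):
    return theta                           # H deterministically signals θ

def monte_carlo_ER(pi_R, n=10000):
    """Estimate ER(πR) = E_{θ∼PΘ, τ∼...}[ r_θ(...) ] by sampled trajectories."""
    total = 0.0
    for _ in range(n):
        theta = int(rng.choice(2, p=P_theta))
        a_H = pi_H(theta)                  # R observes H's behavior ...
        a_R = pi_R(a_H)                    # ... and conditions its action on it
        total += 1.0 if a_R == theta else 0.0
    return total / n

imitative = monte_carlo_ER(lambda a_H: a_H)  # uses H's signal
oblivious = monte_carlo_ER(lambda a_H: 1)    # ignores H, plays the prior mode
```

The gap between `imitative` and `oblivious` is the benefit of inferring θ from H's behavior, which is exactly the incentive the assistance formulation builds in.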
The submission provides a survey of two paradigms for ‘agents learning from human feedback.’ The two paradigms are unified under a new formalism (assistance games), which subsumes them as its special cases. Further, a taxonomy of different problems resulting from the formalism is provided (communicative games, two-phase games, etc.), along with illustrative examples of resulting agent behaviors. Based on the survey and taxonomy, the authors highlight that the assistance paradigm is more advantageous (in terms of possible behaviors that it can result in) than the reward learning paradigm.
SP:fc8a52afd27fff291c1fe55d196aa54a759dd42e
This work proposes learning a single control policy for human-in-the-loop learning rather than having separate reward learning and control components. The key difference is that action selection can use information from the reward learning module. The authors formulate an assistance game in this setting and show that it can be reduced to an equivalent POMDP. The work then describes a communicative assistance problem and shows the equivalence of reward learning to assistance and vice versa. Results show qualitative improvements on variants of the kitchen domain.
SP:fc8a52afd27fff291c1fe55d196aa54a759dd42e
Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling
1 INTRODUCTION. The performance of supervised machine learning (ML) hinges on the availability of labeled data in sufficient quantity and quality. However, labeled data for applications of ML can be scarce, and the common process of obtaining labels by having annotators inspect individual samples is often expensive and time-consuming. Additionally, this cost is frequently exacerbated by factors such as privacy concerns, required expert knowledge, and shifting problem definitions. Weak supervision provides a promising alternative, reducing the need for humans to hand-label large datasets to train ML models (Riedel et al., 2010; Hoffmann et al., 2011; Ratner et al., 2016; Dehghani et al., 2018). A recent approach called data programming (Ratner et al., 2016) combines multiple weak supervision sources by using an unsupervised label model to estimate the latent true class label, an idea that has close connections to modeling workers in crowd-sourcing (Dawid & Skene, 1979; Karger et al., 2011; Dalvi et al., 2013; Zhang et al., 2014). The approach enables subject matter experts to specify labeling functions (LFs)—functions that encode domain knowledge and noisily annotate subsets of data, such as user-specified heuristics or external knowledge bases—instead of needing to inspect and label individual samples. These weak supervision approaches have been used on a wide variety of data types such as MRI sequences and unstructured text, and in various domains such as healthcare and e-commerce (Fries et al., 2019; Halpern et al., 2014; Bach et al., 2019; Ré et al., 2020). Not only does the use of multiple sources of weak supervision provide a scalable framework for creating large labeled datasets, but it can also be viewed as a vehicle to incorporate high-level, conceptual feedback into the data labeling process.
In data programming, each LF is an imperfect but reasonably accurate heuristic, such as a pre-trained classifier or keyword lookup. For example, for the popular 20 newsgroups dataset, an LF to identify the class ‘sci.space’ may look for the token ‘launch’ in documents and would be right about 70% of the time. While data programming can be very effective when done right, experts may spend a significant amount of time designing the weak supervision sources (Varma & Ré, 2018) and must often inspect samples at random to generate ideas (Cohen-Wang et al., 2019). In our 20 newsgroups example, we may randomly see a document mentioning ‘Salman Rushdie’ and realize that the name of a famous atheist could be a good heuristic to identify posts in ‘alt.atheism’. While such a heuristic seems obvious after the fact, we have to chance upon the right documents to generate these ideas. In practice, coming up with effective LFs becomes difficult after the first few. Substantial foresight (Ramos et al., 2020) is required to create a new function that applies to a non-negligible subset of the given data, is novel, and adds predictive value. We propose a new approach for training supervised ML models with weak supervision through an interactive process, supporting domain experts in fast discovery of good LFs. The method queries users in an active fashion for feedback about candidate LFs, from which a model learns to identify LFs likely to have good accuracy. Upon completion, our approach produces a final set of LFs. We use this set to create an estimate of the latent class label via an unsupervised label model and train a final, weakly supervised end classifier using a noise-aware loss function on the estimated labels, as in Ratner et al. (2016).
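The interactive loop just described can be sketched as follows. The usefulness model, acquisition rule, and oracle here are toy stand-ins of our own, not the paper's actual components:

```python
import random

# Schematic of an interactive weak-supervision loop: expert yes/no feedback
# on candidate LFs drives the choice of the next LF to show. Candidates are
# represented by a single scalar feature (a stand-in for a real featurization),
# and the acquisition rule is a deliberately simple nearest-to-useful heuristic.

def iws_loop(candidate_lfs, oracle, n_queries, seed=0):
    rng = random.Random(seed)
    labeled = {}                       # candidate index -> expert judgment (bool)
    for _ in range(n_queries):
        unlabeled = [j for j in range(len(candidate_lfs)) if j not in labeled]
        if not unlabeled:
            break
        def score(j):
            useful = [i for i, ok in labeled.items() if ok]
            if not useful:
                return rng.random()    # no signal yet: explore randomly
            # Prefer candidates closest (in feature space) to a judged-useful LF.
            return max(-abs(candidate_lfs[j] - candidate_lfs[i]) for i in useful)
        query = max(unlabeled, key=score)
        labeled[query] = oracle(candidate_lfs[query])   # expert feedback
    # Final set: the candidates the expert judged useful.
    return [candidate_lfs[j] for j, ok in labeled.items() if ok]
```

In the real system the judged-useful set would then feed the unsupervised label model and downstream classifier described in Section 3.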
The approach relies on the observation that many applications allow for heuristics of varying quality to be generated at scale (similar to Varma & Ré (2018)), and that experts can provide good judgment by identifying some LFs that have reasonable accuracy. The full pipeline of the proposed approach, termed Interactive Weak Supervision (IWS)¹, is illustrated in Fig. 1. Our contributions are:
1. We propose, to the best of our knowledge, the first interactive method for weak supervision in which queries to be annotated are not data points but labeling functions. This approach automates the discovery of useful data labeling heuristics.
2. We conduct experiments with real users on three classification tasks, using both text and image datasets. Our results support our modeling assumptions, demonstrate competitive test set performance of the downstream end classifier, and show that users can provide accurate feedback on automatically generated LFs.
3. In our results, IWS shows superior performance compared to standard active learning, i.e. we achieve better test set performance with a smaller number of queries to users. In text experiments with real users, IWS reaches a mean test set AUC after 200 LF annotations that active learning requires at least three times as many data point annotations to match. In addition, the average user response time for LF queries was shorter than for the active learning queries on data points.
2 RELATED WORK. Active strategies for weak supervision sources have largely focused on combinations of data programming with traditional active learning on data points, while our work has similarities to active learning on features (Druck et al., 2009) and active learning of virtual evidence (Lang & Poon, 2021). In Nashaat et al. (2018), a pool of samples is created on which LFs disagree, and active learning strategies are then applied to obtain labels for some of the samples. In Cohen-Wang et al.
(2019), samples where LFs abstain or disagree most are selected and presented to users in order to inspire the creation of new LFs. In Hancock et al. (2018), natural language explanations provided during text labeling are used to generate heuristics. The proposed system uses a semantic parser to convert explanations into logical forms, which represent labeling functions.
¹Code is available at https://github.com/benbo/interactive-weak-supervision
Prior work has emphasized that LFs defined by experts frequently have a recurring structure in which elements are swapped to change the higher-level concept a function corresponds to (Varma & Ré, 2018; Varma et al., 2017; Bach et al., 2019). As an example, in tasks involving text documents, LFs often follow a repetitive structure in which key terms or phrases and syntactical relationships change, e.g. mentions of specific words (Varma & Ré, 2018; Cohen-Wang et al., 2019; Varma et al., 2019). Prior work relies on this observation to create heuristic generators (Varma & Ré, 2018), LF templates (Bach et al., 2019), and domain-specific primitives (Varma et al., 2017). In particular, in a semi-supervised data programming setting, Varma & Ré (2018) propose a system for automatic generation of labeling functions without user interaction, by using a small set of labeled data. Additional related work has investigated weak supervision for neural networks in information retrieval (Dehghani et al., 2017; Zamani et al., 2018; Zamani & Croft, 2018), the modeling of dependencies among heuristics in data programming (Bach et al., 2017; Varma et al., 2019), the multi-task data programming setting (Ratner et al., 2019), handling of multi-resolution sources (Sala et al., 2019), the use of noisy pairwise labeling functions (Boecking & Dubrawski, 2019), addressing latent subsets in the data (Varma et al., 2016), LFs with noisy continuous scores (Chatterjee et al.
, 2020), and fast model iteration via the use of pre-trained embeddings (Chen et al., 2020).
3 METHODS. We propose an interactive weak supervision (IWS) approach to assist experts in finding good labeling functions (LFs) for training a classifier on datasets without ground truth labels. We will first describe the general problem setting of learning to classify without ground truth samples by modeling multiple weak supervision sources, as well as the concept of LF families. We then dive into the details of the proposed IWS approach. For brevity, we limit the scope of the end classifier to binary classification, but the presented background and ideas do extend to multi-class settings.
3.1 PRELIMINARIES. Learning with Multiple Weak Supervision Sources. Assume each data point x ∈ X has a latent class label y* ∈ Y = {−1, 1}. Given n unlabeled, i.i.d. datapoints X = {x_i}_{i=1}^n, our goal is to train an end classifier f : X → Y such that f(x) = y*. In data programming (Ratner et al., 2016; 2020), a user provides m LFs {λ_j}_{j=1}^m, where λ_j : X → Y ∪ {0}. An LF λ_j noisily labels the data with λ_j(x) ∈ Y or abstains with λ_j(x) = 0. The corresponding LF output matrix is Λ ∈ {−1, 0, 1}^{n×m}, where Λ_{i,j} = λ_j(x_i). In this paper, we assume that each LF λ_j has the same accuracy on each class, α_j = P(λ_j(x) = y* | λ_j(x) ≠ 0), where accuracy is defined on items where j does not abstain. Further, we denote by l_j = P(λ_j(x) ≠ 0) the LF propensity (sometimes called LF coverage), i.e. the frequency at which LF j does not abstain. In data programming, an unsupervised label model p_θ(Y, Λ) produces probabilistic estimates of the latent class labels Y* = {y*_i}_{i=1}^n using the observed LF outputs Λ by modeling the LF accuracies, propensities, and possibly their dependencies. A number of label model approaches exist in the crowdsourcing (Dawid & Skene, 1979; Zhang et al.
, 2014) and the weak supervision literature (Ratner et al., 2020). In this paper, we use a factor graph as proposed in Ratner et al. (2016; 2020) to obtain probabilistic labels by modeling the LF accuracies via the factor φ^Acc_{i,j}(Λ, Y) ≜ 1{Λ_{ij} = y_i} and the labeling propensity via the factor φ^Lab_{i,j}(Λ, Y) ≜ 1{Λ_{ij} ≠ 0}, and for simplicity assume LFs are independent conditional on Y. The label model is defined as

p_θ(Y, Λ) ≜ Z_θ^{−1} exp( Σ_{i=1}^{n} θ^⊤ φ_i(Λ_i, y_i) ),   (1)

where Z_θ is a normalizing constant and φ_i(Λ_i, y_i) is defined to be the concatenation of the factors for all LFs j = 1, ..., m for sample i. We learn θ by minimizing the negative log marginal likelihood given the observed Λ. Finally, following Ratner et al. (2016), an end classifier f is trained using probabilistic labels p_θ(Y | Λ).
Labeling Function Families. We define LF families as sets of expert-interpretable LFs described by functions z_φ : X → {−1, 0, 1}, for parameters φ ∈ Φ. An example are shallow decision trees z_φ parameterized by variables and splitting rules φ (Varma & Ré, 2018), or a function z_φ defining a regular expression for two words, where φ parameterizes the word choices from a vocabulary and the target label. Given such an LF family, we can generate a large set of p candidate heuristics L = {λ_j(x) = z_{φ_j}(x)}_{j=1}^{p}, where φ_j ∈ Φ, e.g. by sampling from Φ and pruning low-coverage candidates. These families often arise naturally in the form of LFs with repetitive structure that experts write from scratch, where template variables—such as keywords—can be sampled from the unlabeled data to create candidates. For text, we can find n-grams within a document frequency range to generate key term lookups, fill placeholders in regular expressions, or generate shallow decision trees (Ratner et al., 2016; Varma & Ré, 2018; Varma et al., 2019).
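As a toy illustration of how LF outputs become probabilistic labels: under the conditional-independence assumption the posterior for one item factorizes over LFs, and with *known* accuracies α_j it reduces to an accuracy-weighted vote. The paper instead learns θ by maximizing marginal likelihood; this simplified closed form is our own:

```python
import math

# Accuracy-weighted vote for a single item, a special case of combining LF
# outputs under conditional independence when the accuracies alpha_j are
# taken as given rather than learned.

def posterior_positive(lf_votes, accuracies, prior=0.5):
    """P(y = +1 | LF outputs) for one item; votes in {-1, 0, +1}, 0 = abstain."""
    log_odds = math.log(prior / (1 - prior))
    for vote, alpha in zip(lf_votes, accuracies):
        if vote == 0:
            continue                       # abstentions carry no signal here
        w = math.log(alpha / (1 - alpha))  # log-likelihood ratio of one vote
        log_odds += w if vote == 1 else -w
    return 1 / (1 + math.exp(-log_odds))
```

Two accurate LFs agreeing push the posterior close to 1, while two equally accurate LFs disagreeing cancel back to the prior, which is exactly the behavior the learned factor-graph weights generalize.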
For time series, we can create a large set of LFs based on motifs (Lonardi & Patel, 2002) or graphs of temporal constraints (Guillame-Bert & Dubrawski, 2017). For images, we can create a library of pre-trained object detectors as in Chen et al. (2019), or in some applications combine primitives of geometric properties of the images (Varma & Ré, 2018). An LF family has to be chosen with domain expert input. Compared to standard data programming, the burden of creating LFs from scratch is shifted to choosing an appropriate LF family and then judging recommended candidates. We argue that domain experts often have the foresight to choose an LF family such that a sufficiently sized subset of LFs is predictive of the latent class label. Such LF families may not exist for all data types and classification tasks. But when they exist, they offer the opportunity to quickly build large, labeled datasets. Once created, it is reasonable to expect that the same LF generation procedure can be reused for similar classification tasks without additional effort (e.g. we use a single LF family procedure for all text datasets in our experiments).
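For the text case, a keyword-lookup LF family of the kind described above might be generated like this; the regex tokenizer and document-frequency thresholds are arbitrary illustrative choices, not the paper's:

```python
import re
from collections import Counter

# Sketch of a keyword-lookup LF family for text: each candidate LF checks for
# one token and votes for a target class, with candidates mined from the
# unlabeled corpus and pruned by document frequency (a coverage proxy).

def generate_keyword_lfs(docs, target_label=1, min_df=0.05, max_df=0.5):
    n = len(docs)
    tokenized = [set(re.findall(r"[a-z']+", d.lower())) for d in docs]
    df = Counter(tok for toks in tokenized for tok in toks)
    lfs = {}
    for word, count in df.items():
        if min_df * n <= count <= max_df * n:
            # LF: vote `target_label` if the word occurs, else abstain (0).
            lfs[word] = (lambda w: lambda text: target_label
                         if w in re.findall(r"[a-z']+", text.lower()) else 0)(word)
    return lfs
```

Each generated function is one z_φ with φ = (word, target label); the expert then only judges which of these candidates look useful, rather than writing them from scratch.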
This paper proposes a new approach to active learning that interactively discovers weak supervision sources. Instead of asking humans to annotate data points, the method collects feedback about candidate labeling functions, from which a model learns to identify promising labeling functions. With the final set of labeling functions, they train a classifier on the labels estimated for the unlabeled data. They conduct experiments on text classification datasets with both oracle and human feedback, and show a large improvement compared with traditional active learning.
SP:a0d07d2ab41a2c13a2be8f2fb99548828d6ae991
This paper proposes a new framework for interactively selecting labeling heuristics in a weakly supervised setting. The main idea of the proposed approach is to combine weak supervision and active learning. Compared to previous work, which relies on humans manually creating labeling functions (the abstraction of the weak supervision), this work defines a family of labeling functions and uses an active learning method to interactively identify a set of labeling functions that maximizes utility based on users' judgments of usefulness. The experimental results show that the proposed approach outperforms other baseline methods.
Learning-based Support Estimation in Sublinear Time
log(1/ε) · n^{1−Θ(1/log(1/ε))}. We evaluate the proposed algorithms on a collection of data sets, using the neural-network based estimators from Hsu et al., ICLR'19 as predictors. Our experiments demonstrate substantial (up to 3x) improvements in the estimation accuracy compared to the state of the art algorithm.
1 INTRODUCTION. Estimating the support size of a distribution from random samples is a fundamental problem with applications in many domains. In biology, it is used to estimate the number of distinct species from experiments (Fisher et al., 1943); in genomics to estimate the number of distinct protein encoding regions (Zou et al., 2016); in computer systems to approximate the number of distinct blocks on a disk drive (Harnik et al., 2016), etc. The problem also has applications in linguistics, query optimization in databases, and other fields. Because of its wide applicability, the problem has received plenty of attention in multiple fields¹, including statistics and theoretical computer science, starting with the seminal works of Good and Turing (Good, 1953) and Fisher et al. (1943). A more recent line of research pursued over the last decade (Raskhodnikova et al., 2009; Valiant & Valiant, 2011; 2013; Wu & Yang, 2019) focused on the following formulation of the problem: given access to independent samples from a distribution P over a discrete domain {0, ..., n − 1} whose minimum non-zero mass² is at least 1/n, estimate the support size of P up to ±εn. The state of the art estimator, due to Valiant & Valiant (2011); Wu & Yang (2019), solves this problem using only O(n/log n) samples (for a constant ε). Both papers also show that this bound is tight.
*Authors listed in alphabetical order.
¹A partial bibliography from 2007 contains over 900 references. It is available at https://courses.cit.cornell.edu/jab18/bibliography.html.
A more straightforward linear-time algorithm exists, which reports the number of distinct elements seen in a sample of size N = O(n log ε^{−1}) (which is O(n) for constant ε), without accounting for the unseen items. This algorithm succeeds because each element i with non-zero mass (and thus mass at least 1/n) appears in the sample with probability at least 1 − (1 − 1/n)^N > 1 − ε, so in expectation, at most ε · n elements with non-zero mass will not appear in the sample. Thus, in general, the number of samples required by the best possible algorithm (i.e., n/log n) is only logarithmically smaller than the complexity of the straightforward linear-time algorithm. A natural approach to improve over this bound is to leverage the fact that in many applications, the input distribution is not entirely unknown. Indeed, one can often obtain rough approximations of the element frequencies by analyzing different but related distributions. For example, in genomics, frequency estimates can be obtained from the frequencies of genome regions of different species; in linguistics they can be inferred from the statistical properties of the language (e.g., long words are rare), or from a corpus of writings of a different but related author, etc. More generally, such estimates can be learned using modern machine learning techniques, given the true element frequencies in related data sets. The question then becomes whether one can utilize such predictors in support size estimation procedures in order to improve the estimation accuracy.
Our results. In this paper we initiate the study of such “learning-based” methods for support size estimation. Our contributions are both theoretical and empirical. On the theory side, we show that given a “good enough” predictor of the distribution P, one can solve the problem using many fewer than n/log n samples.
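The coverage argument above can be checked numerically; `expected_missed_fraction` is our own helper name:

```python
import math

# Numeric check of the argument above: with N = n * ln(1/eps) samples, an
# element of mass exactly 1/n is missed with probability
# (1 - 1/n)^N <= e^{-N/n} = eps, so at most eps * n support elements are
# missed in expectation (elements of larger mass are missed even less often).

def expected_missed_fraction(n, eps):
    N = math.ceil(n * math.log(1 / eps))
    return (1 - 1 / n) ** N    # miss probability for a mass-1/n element, <= eps
```

For n = 10^6 and ε = 0.1 this evaluates to just under 0.1, matching the bound and showing it is essentially tight for the lightest elements.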
Specifically, suppose that in the input distribution P the probability of element i is p_i, and that we have access to a predictor Π(i) such that Π(i) ≤ p_i ≤ b · Π(i) for some constant approximation factor b ≥ 1.³ Then we give an algorithm that estimates the support size up to ±εn using only log(1/ε) · n^{1−Θ(1/log(1/ε))} samples, assuming the approximation factor b is a constant (see Theorem 1 for a more detailed bound). This improves over the bound of Wu & Yang (2019) for any fixed values of the accuracy parameter ε and predictor quality factor b. Furthermore, we show that this bound is almost tight. Our algorithm is presented in Algorithm 1. On a high level, it partitions the range of probability values into geometrically increasing intervals. We then use the predictor to assign the elements observed in the sample to these intervals, and produce a Wu–Yang-like estimate within each interval. Specifically, our estimator is based on Chebyshev polynomials (as in Valiant & Valiant (2011); Wu & Yang (2019)), but the finer partitioning into intervals allows us to use polynomials with different, carefully chosen parameters. This leads to significantly improved sample complexity if the predictor is sufficiently accurate. On the empirical side, we evaluate the proposed algorithms on a collection of real and synthetic data sets. For the real data sets (network traffic data and AOL query log data) we use neural-network based predictors from Hsu et al. (2019). Although those predictors do not always approximate the true distribution probabilities up to a small factor, our experiments nevertheless demonstrate that the new algorithm offers substantial improvements (up to 3x reduction in relative error) in the estimation accuracy compared to the state of the art algorithm of Wu & Yang (2019).
1.1 RELATED WORK.
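To convey the flavor of the interval-partitioning idea, here is a heavily simplified sketch: it buckets observed elements by predicted probability and applies a naive inverse-coverage correction per bucket, in place of the paper's Chebyshev-polynomial estimator per interval, which we do not attempt to reproduce:

```python
import math

# Simplified predictor-guided support estimator: partition [1/n, 1] into
# geometric intervals, assign each observed distinct element to an interval
# via its predicted mass, and divide each interval's distinct count by the
# probability that such an element appears in N samples at all. The paper's
# per-interval estimator is far more refined; this correction is our own.

def estimate_support(samples, predictor, n, base=2.0):
    N = len(samples)
    buckets = {}                                    # interval index -> distinct count
    for x in set(samples):
        p = max(predictor(x), 1.0 / n)              # predicted mass, clipped to >= 1/n
        k = int(math.log(p * n, base))              # geometric interval index
        buckets[k] = buckets.get(k, 0) + 1
    est = 0.0
    for k, count in buckets.items():
        p_mid = (base ** k) / n * math.sqrt(base)   # geometric midpoint of interval k
        coverage = 1 - (1 - min(p_mid, 1.0)) ** N   # P(such an element is sampled)
        est += count / coverage
    return est
```

On a uniform distribution with a perfect predictor this recovers the support to within a few percent from roughly 3n samples; the polynomial-based estimator is what makes the sublinear sample sizes in Theorem 1 possible.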
Estimating support size As described in the introduction , the problem has been studied extensively in statistics and theoretical computer science . The best known algorithm , due to Wu & Yang 2This constraint is naturally satisfied e.g. , if the distribution P is an empirical distribution over a data set of n items . In fact , in this case all probabilities are multiples of 1/n so the support size is equal to the number of distinct elements in the data set . 3Our results hold without change if we modify the assumption to r ·Π ( i ) ≤ pi ≤ r · b ·Π ( i ) , for any r > 0 . We use r = 1 for simplicity . ( 2019 ) , uses O ( log2 ( 1/ε ) · n/ log n ) samples . Because of the inherent limitations of the model that uses only random samples , Canonne & Rubinfeld ( 2014 ) considered an augmented model where an algorithm has access to the exact probability of any sampled item . The authors show that this augmentation is very powerful , reducing the sampling complexity to only O ( 1/ε2 ) . More recently , Onak & Sun ( 2018 ) proved that Canonne and Rubinfeld ’ s algorithm works as long as the probabilities accessed are accurate up to a ( 1 ± ε3 ) -multiplicative factor . However , this algorithm strongly relies on the probabilities being extremely accurate , and the predicted probabilities even being off by a small constant factor can cause the support size estimate to become massively incorrect . As a result , their algorithm is not robust to mispredicted probabilities , as our experiments show . A different line of research studied streaming algorithms for estimating the number of distinct elements . Such algorithms have access to the whole data set , but must read it in a single pass using limited memory . The best known algorithms for this problem compute a ( 1 + ε ) -approximate estimation to the number of distinct elements using O ( 1/ε2 + log n ) bits of storage ( Kane et al. , 2010 ) . 
See the discussion in that paper for a history of the problem and further references . Learning-based algorithms Over the last few years , there has been a growing interest in using machine learning techniques to improve the performance of “ classical ” algorithms . This methodology found applications in similarity search ( Wang et al. , 2016 ; Sablayrolles et al. , 2019 ; Dong et al. , 2020 ) , graph optimization ( Khalil et al. , 2017 ; Balcan et al. , 2018 ) , data structures ( Kraska et al. , 2018 ; Mitzenmacher , 2018 ) , online algorithms ( Lykouris & Vassilvitskii , 2018 ; Purohit et al. , 2018 ) , compressed sensing ( Mousavi et al. , 2015 ; Baldassarre et al. , 2016 ; Bora et al. , 2017 ) and streaming algorithms ( Hsu et al. , 2019 ; Jiang et al. , 2019 ) . The last two papers are closest to our work , as they solve various computational problems over data streams , including distinct elements estimation in Jiang et al . ( 2019 ) using frequency predictors . Furthermore , in our experiments we are using the neural-network-based predictors developed in Hsu et al . ( 2019 ) . However , our algorithm operates in a fundamentally different model , using a sublinear ( in n ) number of samples of the input , as opposed to accessing the full input via a linear scan . Thus , our algorithms run in sublinear time , in contrast to streaming algorithms that use sublinear space . Distribution property testing This work can be seen more broadly in the context of testing properties of distributions over large discrete domains . Such questions are studied at the crossroads of social networks , statistics , information theory , database algorithms , and machine learning algorithms . Examples of specific properties that have been extensively considered include testing whether the distribution is uniform , Gaussian , high entropy , independent or monotone increasing ( see e.g . Rubinfeld ( 2012 ) ; Canonne ( 2015 ) ; Goldreich ( 2017 ) for surveys on the topic ) . 
2 LEARNING-BASED ALGORITHM . 2.1 PRELIMINARIES . Problem setting and notation . The support estimation problem is formally defined as follows . We are given sample access to an unknown distribution P over a discrete domain of size n. For simplicity , we identify the domain with [ n ] = { 1 , . . . , n } . Let pi denote the probability of element i . Let S ( P ) = { i : pi > 0 } be the support of P . Our goal is to estimate the support size S = |S ( P ) | using as few samples as possible . In particular , given ε > 0 , the goal is to output an estimate S̃ that satisfies S̃ ∈ [ S − εn , S + εn ] . We assume that the minimal non-zero mass of any element is at least 1/n , namely , that pi ≥ 1/n for every i ∈ S ( P ) . This is a standard promise in the support estimation problem ( see , e.g. , Raskhodnikova et al . ( 2009 ) ; Valiant & Valiant ( 2011 ) ; Wu & Yang ( 2019 ) ) , and as mentioned earlier , it naturally holds in the context of counting distinct elements , where pi is defined as the count of element i in the sample divided by n. Furthermore , a lower bound on the minimum non-zero probability is a necessary assumption without which no estimation algorithm is possible , even if the number of samples is allowed to be an arbitrarily large function of n , i.e. , not just sublinear algorithms . The reason is that there could be arbitrarily many elements with exceedingly small probabilities that would never be observed . See for example the discussion in the supplementary Section 5 of Orlitsky et al . ( 2016 ) . In the learning-based setting , we furthermore assume we have a predictor Π that can provide information about pi . In our analysis , we will assume that Π ( i ) is a constant factor approximation of each pi . In order to bound the running time of our algorithms , we assume we are given access to a ready-made predictor and ( as in Canonne & Rubinfeld ( 2014 ) ) that evaluating Π ( i ) takes unit time . 
In our experiments , we use neural network based predictors from Hsu et al . ( 2019 ) . In general , predictors need to be trained ( or otherwise produced ) in advance . This happens in a preprocessing stage , before the input distribution is given to the algorithm , and this stage is not accounted for in the sublinear running time . We also note that training the predictor needs to be done only once for all future inputs ( not once per input ) . The Wu & Yang ( 2019 ) estimator . In the classical setting ( without access to a predictor ) , Wu & Yang ( 2019 ) gave a sample-optimal algorithm based on Chebyshev polynomials . We now describe it briefly , as it forms the basis for our learning-based algorithm . Suppose we draw N samples , and let Ni be the number of times element i is observed . The output estimate of Wu & Yang ( 2019 ) is of the form S̃WY = ∑ i∈ [ n ] ( 1 + f ( Ni ) ) , where f ( Ni ) is a correction term intended to compensate for the fact that some elements in the support do not appear in the sample at all . If pi = 0 , then necessarily Ni = 0 ( as i can not appear in the sample ) . Thus , choosing f ( 0 ) = −1 ensures that unsupported elements contribute nothing to S̃WY . On the other hand , if pi > lognN , then by standard concentration we have Ni > Ω ( log n ) with high probability ; thus choosing f ( Ni ) = 0 for all Ni > L = Ω ( log n ) ensures that high-mass elements are only counted once in S̃WY . It remains to take care of elements i with pi ∈ [ 1n , logn N ] . By a standard Poissonization trick , the expected additive error |S − E [ S̃WY ] | can be bounded by∑ i∈ [ n ] |PL ( pi ) | , where PL is the degree-L polynomial PL ( x ) = L∑ k=0 E [ N ] k k ! · f ( k ) · xk . To make the error as small as possible , we would like to choose f ( 1 ) , . . . , f ( L ) so as to minimize |PL ( pi ) | on the interval pi ∈ [ 1n , logn N ] , under the constraint PL ( 0 ) = −1 ( which is equivalent to f ( 0 ) = −1 ) . 
This is a well-known extremal problem , and its solution is given by Chebyshev polynomials , whose coefficients have a known explicit formula . Indeed , Wu & Yang ( 2019 ) show that choosing f ( 1 ) , . . . , f ( L ) such that E [ N ] k k ! f ( k ) are the coefficients of an ( appropriately shifted and scaled ) Chebyshev polynomial leads to an optimal sample complexity ofO ( log2 ( 1/ε ) ·n/ log n ) .
This paper considers the support size estimation problem given a random sample from the unknown distribution and access to a predictor of the element frequencies. Under that setting, the paper improves on the estimator of Wu & Yang (2019) by refining the approximation interval implied by the predicted frequencies. A theoretical upper bound on the sample complexity of the proposed algorithm is proved in Theorem 1, and it is nearly optimal, as shown by the lower bound in Theorem 2. The algorithm is empirically evaluated on both real and synthetic datasets; its performance improves on the existing algorithms of WY and CR in most cases.
Learning-based Support Estimation in Sublinear Time
log(1/ε) · n^{1−Θ(1/log(1/ε))}. We evaluate the proposed algorithms on a collection of data sets, using the neural-network based estimators from Hsu et al. (2019) as predictors. Our experiments demonstrate substantial (up to 3x) improvements in the estimation accuracy compared to the state of the art algorithm. (∗Authors listed in alphabetical order.) 1 INTRODUCTION . Estimating the support size of a distribution from random samples is a fundamental problem with applications in many domains. In biology, it is used to estimate the number of distinct species from experiments (Fisher et al., 1943); in genomics, to estimate the number of distinct protein encoding regions (Zou et al., 2016); in computer systems, to approximate the number of distinct blocks on a disk drive (Harnik et al., 2016); etc. The problem also has applications in linguistics, query optimization in databases, and other fields. Because of its wide applicability, the problem has received plenty of attention in multiple fields, including statistics and theoretical computer science, starting with the seminal works of Good & Turing (Good, 1953) and Fisher et al. (1943). (A partial bibliography from 2007 contains over 900 references; it is available at https://courses.cit.cornell.edu/jab18/bibliography.html.) A more recent line of research pursued over the last decade (Raskhodnikova et al., 2009; Valiant & Valiant, 2011; 2013; Wu & Yang, 2019) focused on the following formulation of the problem: given access to independent samples from a distribution P over a discrete domain {0, ..., n − 1} whose minimum non-zero mass is at least 1/n, estimate the support size of P up to ±εn. The state of the art estimator, due to Valiant & Valiant (2011); Wu & Yang (2019), solves this problem using only O(n/log n) samples (for a constant ε). Both papers also show that this bound is tight.
A more straightforward linear-time algorithm exists, which reports the number of distinct elements seen in a sample of size N = O(n log(1/ε)) (which is O(n) for constant ε), without accounting for the unseen items. This algorithm succeeds because each element i with non-zero mass (and thus mass at least 1/n) appears in the sample with probability at least 1 − (1 − 1/n)^N > 1 − ε, so in expectation, at most ε · n elements with non-zero mass will not appear in the sample. Thus, in general, the number of samples required by the best possible algorithm (i.e., n/log n) is only logarithmically smaller than the complexity of the straightforward linear-time algorithm. A natural approach to improve over this bound is to leverage the fact that in many applications, the input distribution is not entirely unknown. Indeed, one can often obtain rough approximations of the element frequencies by analyzing different but related distributions. For example, in genomics, frequency estimates can be obtained from the frequencies of genome regions of different species; in linguistics, they can be inferred from the statistical properties of the language (e.g., long words are rare), or from a corpus of writings of a different but related author, etc. More generally, such estimates can be learned using modern machine learning techniques, given the true element frequencies in related data sets. The question then becomes whether one can utilize such predictors in support size estimation procedures in order to improve the estimation accuracy. Our results In this paper we initiate the study of such “learning-based” methods for support size estimation. Our contributions are both theoretical and empirical. On the theory side, we show that given a “good enough” predictor of the distribution P, one can solve the problem using far fewer than n/log n samples.
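The baseline above is easy to make concrete. The sketch below (illustrative only; the paper gives no code) checks the coverage bound (1 − 1/n)^N ≤ ε for N = n·ln(1/ε) and simulates the naive distinct-count estimator on a uniform distribution, where every element has mass exactly 1/n:

```python
import math
import random

def naive_support_estimate(sample):
    """Report the number of distinct elements observed in the sample."""
    return len(set(sample))

# Coverage bound behind the linear-time baseline: with N = n*ln(1/eps)
# samples, an element of mass >= 1/n is missed with probability
# (1 - 1/n)^N <= exp(-N/n) = eps.
n, eps = 10_000, 0.05
N = math.ceil(n * math.log(1 / eps))
miss_prob = (1 - 1 / n) ** N
assert miss_prob <= eps

# Simulation on the uniform distribution over n elements (mass exactly 1/n):
# in expectation at most eps*n supported elements go unseen.
random.seed(0)
est = naive_support_estimate(random.choices(range(n), k=N))
assert n - est <= 2 * eps * n  # generous slack for sampling noise
```

Note that N here is linear in n, which is exactly the gap (a log n factor) that the sample-optimal estimators close.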
Specifically, suppose that in the input distribution P the probability of element i is p_i, and that we have access to a predictor Π(i) such that Π(i) ≤ p_i ≤ b · Π(i) for some constant approximation factor b ≥ 1. Then we give an algorithm that estimates the support size up to ±εn using only log(1/ε) · n^{1−Θ(1/log(1/ε))} samples, assuming the approximation factor b is a constant (see Theorem 1 for a more detailed bound). This improves over the bound of Wu & Yang (2019) for any fixed values of the accuracy parameter ε and predictor quality factor b. Furthermore, we show that this bound is almost tight. Our algorithm is presented in Algorithm 1. On a high level, it partitions the range of probability values into geometrically increasing intervals. We then use the predictor to assign the elements observed in the sample to these intervals, and produce a Wu–Yang-like estimate within each interval. Specifically, our estimator is based on Chebyshev polynomials (as in Valiant & Valiant (2011); Wu & Yang (2019)), but the finer partitioning into intervals allows us to use polynomials with different, carefully chosen parameters. This leads to significantly improved sample complexity if the predictor is sufficiently accurate. On the empirical side, we evaluate the proposed algorithms on a collection of real and synthetic data sets. For the real data sets (network traffic data and AOL query log data) we use neural-network based predictors from Hsu et al. (2019). Although those predictors do not always approximate the true distribution probabilities up to a small factor, our experiments nevertheless demonstrate that the new algorithm offers substantial improvements (up to 3x reduction in relative error) in the estimation accuracy compared to the state of the art algorithm of Wu & Yang (2019). 1.1 RELATED WORK .
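The interval-partitioning step can be pictured with a small sketch. This is not the paper's Algorithm 1 (which additionally runs a Chebyshev-based estimator with tailored parameters inside each interval); it only illustrates how a predictor Π assigns observed elements to geometric probability buckets. The predictor, base, and data are hypothetical:

```python
import math

def predictor_buckets(sample, predictor, n, base=2.0):
    """Assign distinct observed elements to geometric probability intervals
    (base^-(j+1), base^-j] using the predictor, one bucket per interval."""
    num_buckets = math.ceil(math.log(n, base)) + 1
    buckets = [set() for _ in range(num_buckets)]
    for i in set(sample):
        p_hat = max(predictor(i), 1.0 / n)  # promise: true mass >= 1/n
        # small epsilon guards against float error at interval boundaries
        j = min(int(math.log(1.0 / p_hat, base) + 1e-9), num_buckets - 1)
        buckets[j].add(i)
    return buckets

# Toy example with a hypothetical predictor that returns the true masses:
# element 0 has mass 1/2; elements 1..8 have mass 1/16 each.
probs = {0: 0.5, **{i: 1 / 16 for i in range(1, 9)}}
bkts = predictor_buckets([0, 0, 3, 5, 0, 7], probs.get, n=16)
assert 0 in bkts[1] and {3, 5, 7} <= bkts[4]
```

Within each bucket the candidate probabilities span only a constant factor, which is what allows shorter (cheaper) Chebyshev polynomials per interval.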
Estimating support size As described in the introduction, the problem has been studied extensively in statistics and theoretical computer science. The best known algorithm, due to Wu & Yang (2019), uses O(log²(1/ε) · n/log n) samples. (The minimum non-zero mass constraint is naturally satisfied, e.g., if the distribution P is an empirical distribution over a data set of n items; in fact, in this case all probabilities are multiples of 1/n, so the support size is equal to the number of distinct elements in the data set. Also, our results hold without change if we modify the predictor assumption to r · Π(i) ≤ p_i ≤ r · b · Π(i) for any r > 0; we use r = 1 for simplicity.) Because of the inherent limitations of the model that uses only random samples, Canonne & Rubinfeld (2014) considered an augmented model where an algorithm has access to the exact probability of any sampled item. The authors show that this augmentation is very powerful, reducing the sampling complexity to only O(1/ε²). More recently, Onak & Sun (2018) proved that Canonne and Rubinfeld's algorithm works as long as the probabilities accessed are accurate up to a (1 ± ε³)-multiplicative factor. However, this algorithm strongly relies on the probabilities being extremely accurate: predicted probabilities that are off by even a small constant factor can cause the support size estimate to become massively incorrect. As a result, their algorithm is not robust to mispredicted probabilities, as our experiments show. A different line of research studied streaming algorithms for estimating the number of distinct elements. Such algorithms have access to the whole data set, but must read it in a single pass using limited memory. The best known algorithms for this problem compute a (1 + ε)-approximate estimate of the number of distinct elements using O(1/ε² + log n) bits of storage (Kane et al., 2010).
See the discussion in that paper for a history of the problem and further references. Learning-based algorithms Over the last few years, there has been a growing interest in using machine learning techniques to improve the performance of “classical” algorithms. This methodology has found applications in similarity search (Wang et al., 2016; Sablayrolles et al., 2019; Dong et al., 2020), graph optimization (Khalil et al., 2017; Balcan et al., 2018), data structures (Kraska et al., 2018; Mitzenmacher, 2018), online algorithms (Lykouris & Vassilvitskii, 2018; Purohit et al., 2018), compressed sensing (Mousavi et al., 2015; Baldassarre et al., 2016; Bora et al., 2017) and streaming algorithms (Hsu et al., 2019; Jiang et al., 2019). The last two papers are closest to our work, as they solve various computational problems over data streams, including distinct elements estimation (Jiang et al., 2019) using frequency predictors. Furthermore, in our experiments we use the neural-network-based predictors developed in Hsu et al. (2019). However, our algorithm operates in a fundamentally different model, using a sublinear (in n) number of samples of the input, as opposed to accessing the full input via a linear scan. Thus, our algorithms run in sublinear time, in contrast to streaming algorithms that use sublinear space. Distribution property testing This work can be seen more broadly in the context of testing properties of distributions over large discrete domains. Such questions are studied at the crossroads of social networks, statistics, information theory, database algorithms, and machine learning. Examples of specific properties that have been extensively considered include testing whether the distribution is uniform, Gaussian, of high entropy, independent, or monotone increasing (see, e.g., Rubinfeld (2012); Canonne (2015); Goldreich (2017) for surveys on the topic).
2 LEARNING-BASED ALGORITHM . 2.1 PRELIMINARIES . Problem setting and notation. The support estimation problem is formally defined as follows. We are given sample access to an unknown distribution P over a discrete domain of size n. For simplicity, we identify the domain with [n] = {1, ..., n}. Let p_i denote the probability of element i, and let S(P) = {i : p_i > 0} be the support of P. Our goal is to estimate the support size S = |S(P)| using as few samples as possible. In particular, given ε > 0, the goal is to output an estimate S̃ that satisfies S̃ ∈ [S − εn, S + εn]. We assume that the minimal non-zero mass of any element is at least 1/n, namely, that p_i ≥ 1/n for every i ∈ S(P). This is a standard promise in the support estimation problem (see, e.g., Raskhodnikova et al. (2009); Valiant & Valiant (2011); Wu & Yang (2019)), and as mentioned earlier, it naturally holds in the context of counting distinct elements, where p_i is defined as the count of element i in the data set divided by n. Furthermore, a lower bound on the minimum non-zero probability is a necessary assumption without which no estimation algorithm is possible, even if the number of samples is allowed to be an arbitrarily large function of n, i.e., not just for sublinear algorithms. The reason is that there could be arbitrarily many elements with exceedingly small probabilities that would never be observed. See, for example, the discussion in the supplementary Section 5 of Orlitsky et al. (2016). In the learning-based setting, we furthermore assume we have a predictor Π that can provide information about p_i. In our analysis, we will assume that Π(i) is a constant factor approximation of each p_i. In order to bound the running time of our algorithms, we assume we are given access to a ready-made predictor and (as in Canonne & Rubinfeld (2014)) that evaluating Π(i) takes unit time.
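The necessity of the minimum-mass promise can be demonstrated with a tiny simulation (illustrative; the constants are arbitrary). Two distributions with support sizes 1 and n produce, with overwhelming probability, identical samples once the per-element mass of the extra elements is far below 1/n:

```python
import random

# P1 puts all mass on element 0 (support size 1). P2 keeps mass 1 - delta
# on element 0 and spreads a tiny delta over n - 1 other elements
# (support size n), with per-element mass far below 1/n.
random.seed(1)
n, N, delta = 1000, 5000, 1e-9

def sample_p2():
    return [0 if random.random() > delta else random.randrange(1, n)
            for _ in range(N)]

s2 = sample_p2()
# With overwhelming probability the P2 sample is all zeros, i.e. it is
# indistinguishable from a P1 sample, yet the true support sizes differ
# by n - 1. No sample-based algorithm can tell them apart.
assert set(s2) == {0}
```

This is exactly why the promise p_i ≥ 1/n for all supported elements is required even for algorithms with unbounded sample budgets.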
In our experiments, we use neural network based predictors from Hsu et al. (2019). In general, predictors need to be trained (or otherwise produced) in advance. This happens in a preprocessing stage, before the input distribution is given to the algorithm, and this stage is not accounted for in the sublinear running time. We also note that the predictor needs to be trained only once for all future inputs (not once per input).

The Wu & Yang (2019) estimator. In the classical setting (without access to a predictor), Wu & Yang (2019) gave a sample-optimal algorithm based on Chebyshev polynomials. We now describe it briefly, as it forms the basis for our learning-based algorithm. Suppose we draw N samples, and let N_i be the number of times element i is observed. The output estimate of Wu & Yang (2019) is of the form S̃_WY = Σ_{i∈[n]} (1 + f(N_i)), where f(N_i) is a correction term intended to compensate for the fact that some elements in the support do not appear in the sample at all. If p_i = 0, then necessarily N_i = 0 (as i cannot appear in the sample). Thus, choosing f(0) = −1 ensures that unsupported elements contribute nothing to S̃_WY. On the other hand, if p_i > (log n)/N, then by standard concentration we have N_i > Ω(log n) with high probability; thus choosing f(N_i) = 0 for all N_i > L = Ω(log n) ensures that high-mass elements are counted only once in S̃_WY. It remains to take care of elements i with p_i ∈ [1/n, (log n)/N]. By a standard Poissonization trick, the expected additive error |S − E[S̃_WY]| can be bounded by Σ_{i∈[n]} |P_L(p_i)|, where P_L is the degree-L polynomial P_L(x) = Σ_{k=0}^{L} (E[N]^k / k!) · f(k) · x^k. To make the error as small as possible, we would like to choose f(1), ..., f(L) so as to minimize |P_L(p_i)| on the interval p_i ∈ [1/n, (log n)/N], under the constraint P_L(0) = −1 (which is equivalent to f(0) = −1).
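The skeleton of an estimator of this form is a few lines of code. The sketch below (not the authors' implementation) shows that with the trivial choice f(0) = −1 and f(k) = 0 for k ≥ 1, the estimator collapses to the plain distinct count; the whole point of the Chebyshev-based construction is to replace these zeros with correction terms that cancel the bias of low-mass elements:

```python
from collections import Counter

def wy_style_estimate(counts, f, n):
    """Estimator of the form  S~ = sum_{i in [n]} (1 + f(N_i)).
    Unseen elements have N_i = 0, so f(0) = -1 makes them contribute 0."""
    seen = sum(1 + f(c) for c in counts.values())
    unseen = (n - len(counts)) * (1 + f(0))
    return seen + unseen

# Trivial choice: no correction for unseen support -> plain distinct count.
f_naive = lambda k: -1 if k == 0 else 0
counts = Counter([3, 3, 5, 9])          # N_3 = 2, N_5 = N_9 = 1
assert wy_style_estimate(counts, f_naive, n=100) == 3
```

Under Poissonization, the naive choice leaves a per-element bias of e^{−N·p_i} for each supported i, which is what the polynomial bound above quantifies.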
This is a well-known extremal problem, and its solution is given by Chebyshev polynomials, whose coefficients have a known explicit formula. Indeed, Wu & Yang (2019) show that choosing f(1), ..., f(L) such that the (E[N]^k / k!) · f(k) are the coefficients of an (appropriately shifted and scaled) Chebyshev polynomial leads to an optimal sample complexity of O(log²(1/ε) · n/log n).
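The mechanics of reading off f(1), ..., f(L) from a shifted Chebyshev polynomial can be sketched as follows. The interval endpoints, degree, and normalization here are illustrative; the exact shift and scaling used by Wu & Yang (2019) differ:

```python
import math
import numpy as np

def chebyshev_f(L, EN, lo, hi):
    """Correction terms f(0..L) from the Chebyshev polynomial T_L mapped
    to [lo, hi] and normalized so that P_L(0) = -1. Reads off f(k) from
    the power-basis coefficients c_k = (EN^k / k!) * f(k)."""
    T = np.polynomial.chebyshev.Chebyshev.basis(L, domain=[lo, hi])
    P = T / (-T(0))                      # enforce the constraint P_L(0) = -1
    coeffs = P.convert(kind=np.polynomial.Polynomial).coef
    return [c * math.factorial(k) / EN ** k for k, c in enumerate(coeffs)]

# Illustrative parameters (a degree-6 polynomial on a low-mass interval):
f = chebyshev_f(L=6, EN=100.0, lo=0.01, hi=0.2)
assert abs(f[0] + 1.0) < 1e-6            # f(0) = -1 by construction
```

Chebyshev polynomials are the minimizers here because, among degree-L polynomials with a fixed value at 0, they have the smallest maximum absolute value on the target interval.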
Estimation of the size of the support of a distribution over a discrete domain is a fundamental problem. In the standard setting, this problem is theoretically well understood, with matching upper and lower bounds. The authors assume additional access to a constant-factor approximation of the probability mass at each point, and show that this provably reduces the sample complexity. In particular, they offer matching upper and lower bounds in this setting. While the upper bound is a twist on the existing state-of-the-art method, the lower bound seems to deviate from that.
Federated Learning's Blessing: FedAvg has Linear Speedup
1 INTRODUCTION . Federated learning ( FL ) is a machine learning paradigm where many clients ( e.g. , mobile devices or organizations ) collaboratively train a model under the orchestration of a central server ( e.g. , service provider ) , while keeping the training data decentralized ( Smith et al . ( 2017 ) ; Kairouz et al . ( 2019 ) ) . In recent years , FL has swiftly emerged as an important learning paradigm ( McMahan et al . ( 2017 ) ; Li et al . ( 2020a ) ) –one that enjoys widespread success in applications such as personalized recommendation ( Chen et al . ( 2018 ) ) , virtual assistant ( Lam et al . ( 2019 ) ) , and keyboard prediction ( Hard et al . ( 2018 ) ) , to name a few–for at least three reasons : First , the rapid proliferation of smart devices that are equipped with both computing power and data-capturing capabilities provided the infrastructure core for FL . Second , the rising awareness of privacy and the explosive growth of computational power in mobile devices have made it increasingly attractive to push the computation to the edge . Third , the empirical success of communication-efficient FL algorithms has enabled increasingly larger-scale parallel computing and learning with less communication overhead . Despite its promise and broad applicability in our current era , the potential value FL delivers is coupled with the unique challenges it brings forth . In particular , when FL learns a single statistical model using data from across all the devices while keeping each individual device ’ s data isolated ( Kairouz et al . ( 2019 ) ) , it faces two challenges that are absent in centralized optimization and distributed ( stochastic ) optimization ( Zhou & Cong ( 2018 ) ; Stich ( 2019 ) ; Khaled et al . ( 2019 ) ; Liang et al . ( 2019 ) ; Wang & Joshi ( 2018 ) ; Woodworth et al . ( 2018 ) ; Wang et al . ( 2019 ) ; Jiang & Agrawal ( 2018 ) ; Yu et al . ( 2019b ; a ) ; Khaled et al . ( 2020 ) ; Koloskova et al . ( 2020 ) ; Woodworth et al . 
(2020b; a)): 1) Data (statistical) heterogeneity: the data distributions on different devices are different (and data cannot be shared); 2) System heterogeneity: only a subset of devices may access the central server at each time, both because the communication bandwidth profiles vary across devices and because the central server has no control over when a device is active (the presence of “stragglers”). (Here κ1 and κ̃ are condition numbers defined in Section G; since κ1 ≥ κ̃, this implies a speedup factor of √(κ1/κ̃) for accelerated FedAvg.) To address these challenges, Federated Averaging (FedAvg) (McMahan et al. (2017)) was proposed as a particularly effective heuristic, which has enjoyed great empirical success. This success has since motivated a growing line of research efforts into understanding its theoretical convergence guarantees in various settings. For instance, Haddadpour & Mahdavi (2019) analyzed FedAvg (for non-convex smooth problems satisfying PL conditions) under the assumption that each local device's minimizer is the same as the minimizer of the joint problem (if all devices' data is aggregated together), an overly restrictive assumption that limits the extent of data heterogeneity. Very recently, Li et al. (2020b) furthered the progress and established an O(1/T) convergence rate for FedAvg for strongly convex smooth federated learning problems with both data and system heterogeneity. A similar result in the same setting (Karimireddy et al. (2019)) also established an O(1/T) rate that allows for a linear speedup when the number of participating devices is large. At the same time, Huo et al. (2020) studied Nesterov accelerated FedAvg for non-convex smooth problems and established an O(1/√T) convergence rate to stationary points.
However, despite these very recent fruitful pioneering efforts toward understanding the theoretical convergence properties of FedAvg, it remains open how the number of devices, particularly the number of devices that participate in the computation, affects the convergence speed. In particular, is linear speedup of FedAvg a universal phenomenon across different settings and for any number of devices? What about when FedAvg is accelerated with momentum updates? Does the presence of both data and system heterogeneity in FL imply different communication complexities and require technical novelties over results in distributed and decentralized optimization? These aspects are currently unexplored or underexplored in FL. We fill in the gaps here by providing affirmative answers. Our Contributions We provide a comprehensive and unified convergence analysis of FedAvg and its accelerated variants in the presence of both data and system heterogeneity. Our contributions are threefold. First, we establish an O(1/KT) convergence rate under FedAvg for strongly convex and smooth problems and an O(1/√(KT)) convergence rate for convex and smooth problems (where K is the number of participating devices), thereby establishing that FedAvg enjoys the desirable linear speedup property in the FL setup. Prior to our work here, the best and most related convergence analyses are given by Li et al. (2020b) and Karimireddy et al. (2019), which established an O(1/T) convergence rate for strongly convex smooth problems under FedAvg. Our rate matches the same (and optimal) dependence on T, but also completes the picture by establishing the linear dependence on K, for any K ≤ N, where N is the total number of devices, whereas Li et al. (2020b) does not have a linear speedup analysis, and Karimireddy et al. (2019) only allows linear speedup close to full participation (K = Ω(N)).
As for convex and smooth problems, there was no prior work that established the O(1/√T) rate under both system and data heterogeneity. Our unified analysis highlights the common elements and distinctions between the strongly convex and convex settings. Second, we establish the same convergence rates, O(1/KT) for strongly convex and smooth problems and O(1/√(KT)) for convex and smooth problems, for Nesterov accelerated FedAvg. We analyze the accelerated version of FedAvg here because empirically it tends to perform better; yet, its theoretical convergence guarantee is unknown. To the best of our knowledge, these are the first results that provide a linear speedup characterization of Nesterov accelerated FedAvg in those two problem classes (that FedAvg and Nesterov accelerated FedAvg share the same convergence rate is to be expected: this is the case even for centralized stochastic optimization). Prior to our results here, the most relevant results (Yu et al. (2019a); Li et al. (2020a); Huo et al. (2020)) only concern the non-convex setting, where convergence is measured with respect to stationary points (vanishing of gradient norms, rather than optimality gaps). Our unified analysis of Nesterov FedAvg also illustrates the technical similarities and distinctions compared to the original FedAvg algorithm, whereas prior works (in the non-convex setting) were scattered and used different notations. Third, we study a subclass of strongly convex smooth problems where the objective is overparameterized and establish a faster O(exp(−KT/κ)) convergence rate for FedAvg, in contrast to the O(exp(−T/κ)) rate for individual solvers (Ma et al. (2018)). Within this class, we further consider the linear regression problem and establish an even sharper rate under FedAvg.
In addition, we propose a new variant of accelerated FedAvg based on a momentum update of Liu & Belkin (2020), MaSS accelerated FedAvg, and establish a faster convergence rate (compared to the case when no acceleration is used). This stands in contrast to generic (strongly) convex stochastic problems, where theoretically no rate improvement is obtained when one accelerates FedAvg. The detailed convergence results are summarized in Table 1. Connections with Distributed and Decentralized Optimization Federated learning is closely related to distributed and decentralized optimization, and as such it is important to discuss connections and distinctions between our work and related results from that literature. First, when there is neither system heterogeneity, i.e., all devices participate in parameter averaging during a communication round, nor statistical heterogeneity, i.e., all devices have access to a common set of stochastic gradients, FedAvg coincides with the “Local SGD” of Stich (2019), which showed the linear speedup rate O(1/NT) for strongly convex and smooth functions. Woodworth et al. (2020b) and Woodworth et al. (2020a) further improved the communication complexity that guarantees the linear speedup rate. When there is only data heterogeneity, some works have continued to use the term Local SGD to refer to FedAvg, while others subsume it in more general frameworks that include decentralized model averaging based on a network topology or a mixing matrix. These works have provided linear speedup analyses for strongly convex and convex problems, e.g., Khaled et al. (2020); Koloskova et al. (2020), as well as non-convex problems, e.g., Jiang & Agrawal (2018); Yu et al. (2019b); Wang & Joshi (2018). However, these results do not consider system heterogeneity, i.e., the presence of stragglers in the device network.
Even with decentralized model averaging , the assumptions usually imply that model averages over all devices is the same as decentralized model averages based on network topology ( e.g . Koloskova et al . ( 2020 ) Proposition 1 ) , which precludes system heterogeneity as defined in this paper and prevalent in FL problems . For momentum accelerated FedAvg , Yu et al . ( 2019a ) provided linear speedup analysis for non-convex problems , while results for strongly convex and convex settings are entirely lacking , even without system heterogeneity . Karimireddy et al . ( 2019 ) considers both types of heterogeneities for FedAvg , but their rate implies a linear speedup only when the number of stragglers is negligible . In contrast , our linear speedup analyses consider both types of heterogeneity present in the full federated learning setting , and are valid for any number of participating devices . We also highlight a distinction in communication efficiency when system heterogeneity is present . Moreover , our results for Nesterov accelerated FedAvg completes the picture for strongly convex and convex problems . For a detailed comparison with related works , please refer to Table 2 in Appendix Section B . 2 SETUP . In this paper , we study the following federated learning problem : min w { F ( w ) , ∑N k=1 pkFk ( w ) } , ( 1 ) where N is the number of local devices ( users/nodes/workers ) and pk is the k-th device ’ s weight satisfying pk ≥ 0 and ∑N k=1 pk = 1 . In the k-th local device , there are nk data points : x 1 k , x 2 k , . . . , x nk k . The local objective Fk ( · ) is defined as : Fk ( w ) , 1nk ∑nk j=1 ` ( w ; xjk ) , where ` denotes a userspecified loss function . Each device only has access to its local data , which gives rise to its own local objective Fk . Note that we do not make any assumptions on the data distributions of each local device . The local minimum F ∗k = minw Fk ( w ) can be far from the global minimum of Eq ( 1 ) ( data heterogeneity ) .
This paper shows, mainly theoretically, that FedAvg enjoys a linear speedup w.r.t. the number of participating devices, a property largely unaddressed by prior work. The main convergence results cover three cases: a) Strongly Convex + Smooth, b) Convex + Smooth, and c) Strongly Convex + Smooth where zero training loss is achievable (overparameterized). The paper is well written and well motivated, with good discussions of the algorithm and the related works.
SP:ceb502f2595c97afdce83fd3a98bcacbdb98e5c5
Federated Learning's Blessing: FedAvg has Linear Speedup
1 INTRODUCTION . Federated learning ( FL ) is a machine learning paradigm where many clients ( e.g. , mobile devices or organizations ) collaboratively train a model under the orchestration of a central server ( e.g. , service provider ) , while keeping the training data decentralized ( Smith et al . ( 2017 ) ; Kairouz et al . ( 2019 ) ) . In recent years , FL has swiftly emerged as an important learning paradigm ( McMahan et al . ( 2017 ) ; Li et al . ( 2020a ) ) –one that enjoys widespread success in applications such as personalized recommendation ( Chen et al . ( 2018 ) ) , virtual assistant ( Lam et al . ( 2019 ) ) , and keyboard prediction ( Hard et al . ( 2018 ) ) , to name a few–for at least three reasons : First , the rapid proliferation of smart devices that are equipped with both computing power and data-capturing capabilities provided the infrastructure core for FL . Second , the rising awareness of privacy and the explosive growth of computational power in mobile devices have made it increasingly attractive to push the computation to the edge . Third , the empirical success of communication-efficient FL algorithms has enabled increasingly larger-scale parallel computing and learning with less communication overhead . Despite its promise and broad applicability in our current era , the potential value FL delivers is coupled with the unique challenges it brings forth . In particular , when FL learns a single statistical model using data from across all the devices while keeping each individual device ’ s data isolated ( Kairouz et al . ( 2019 ) ) , it faces two challenges that are absent in centralized optimization and distributed ( stochastic ) optimization ( Zhou & Cong ( 2018 ) ; Stich ( 2019 ) ; Khaled et al . ( 2019 ) ; Liang et al . ( 2019 ) ; Wang & Joshi ( 2018 ) ; Woodworth et al . ( 2018 ) ; Wang et al . ( 2019 ) ; Jiang & Agrawal ( 2018 ) ; Yu et al . ( 2019b ; a ) ; Khaled et al . ( 2020 ) ; Koloskova et al . ( 2020 ) ; Woodworth et al . 
( 2020b ; a ) ) : 1 ) Data ( statistical ) heterogeneity : data distributions across devices are different ( and data can not be shared ) ; 2 ) System heterogeneity : only a subset of devices may access the central server at each time , both because the communication bandwidth profiles vary across devices and because there is no central server that has control over when a device is active ( the presence of “ stragglers ” ) . To address these challenges , Federated Averaging ( FedAvg ) McMahan et al . ( 2017 ) was proposed as a particularly effective heuristic , which has enjoyed great empirical success . This success has since motivated a growing line of research efforts into understanding its theoretical convergence guarantees in various settings . For instance , Haddadpour & Mahdavi ( 2019 ) analyzed FedAvg ( for non-convex smooth problems satisfying PL conditions ) under the assumption that each local device 's minimizer is the same as the minimizer of the joint problem ( if all devices ' data were aggregated together ) , an overly restrictive assumption that limits the extent of data heterogeneity . Very recently , Li et al . ( 2020b ) furthered the progress and established an O(1/T) convergence rate for FedAvg for strongly convex smooth federated learning problems with both data and system heterogeneity . A similar result in the same setting , Karimireddy et al . ( 2019 ) , also established an O(1/T) rate that allows for a linear speedup when the number of participating devices is large . At the same time , Huo et al . ( 2020 ) studied Nesterov accelerated FedAvg for non-convex smooth problems and established an O(1/√T) convergence rate to stationary points . ( κ1 and κ̃ are condition numbers defined in Section G ; since κ1 ≥ κ̃ , this implies a speedup factor of √(κ1/κ̃) for accelerated FedAvg . )
However , despite these very recent fruitful pioneering efforts into understanding the theoretical convergence properties of FedAvg , it remains open as to how the number of devices , particularly the number of devices that participate in the computation , affects the convergence speed . In particular , is linear speedup of FedAvg a universal phenomenon across different settings and for any number of devices ? What about when FedAvg is accelerated with momentum updates ? Does the presence of both data and system heterogeneity in FL imply different communication complexities and require technical novelties over results in distributed and decentralized optimization ? These aspects are currently unexplored or underexplored in FL . We fill in the gaps here by providing affirmative answers . Our Contributions We provide a comprehensive and unified convergence analysis of FedAvg and its accelerated variants in the presence of both data and system heterogeneity . Our contributions are threefold . First , we establish an O(1/KT) convergence rate under FedAvg for strongly convex and smooth problems and an O(1/√(KT)) convergence rate for convex and smooth problems ( where K is the number of participating devices ) , thereby establishing that FedAvg enjoys the desirable linear speedup property in the FL setup . Prior to our work here , the best and most related convergence analyses are given by Li et al . ( 2020b ) and Karimireddy et al . ( 2019 ) , which established an O(1/T) convergence rate for strongly convex smooth problems under FedAvg . Our rate matches the same ( and optimal ) dependence on T , but also completes the picture by establishing the linear dependence on K , for any K ≤ N , where N is the total number of devices , whereas Li et al . ( 2020b ) does not have a linear speedup analysis , and Karimireddy et al . ( 2019 ) only allows linear speedup close to full participation ( K = O(N) ) .
As for convex and smooth problems , there was no prior work that established the O(1/√T) rate under both system and data heterogeneity . Our unified analysis highlights the common elements and distinctions between the strongly convex and convex settings . Second , we establish the same convergence rates , O(1/KT) for strongly convex and smooth problems and O(1/√(KT)) for convex and smooth problems , for Nesterov accelerated FedAvg . We analyze the accelerated version of FedAvg here because empirically it tends to perform better ; yet , its theoretical convergence guarantee was previously unknown . To the best of our knowledge , these are the first results that provide a linear speedup characterization of Nesterov accelerated FedAvg in those two problem classes ( that FedAvg and Nesterov accelerated FedAvg share the same convergence rate is to be expected : this is the case even for centralized stochastic optimization ) . Prior to our results here , the most relevant results , Yu et al . ( 2019a ) ; Li et al . ( 2020a ) ; Huo et al . ( 2020 ) , only concern the non-convex setting , where convergence is measured with respect to stationary points ( vanishing of gradient norms , rather than optimality gaps ) . Our unified analysis of Nesterov FedAvg also illustrates the technical similarities and distinctions compared to the original FedAvg algorithm , whereas prior works ( in the non-convex setting ) were scattered and used different notations . Third , we study a subclass of strongly convex smooth problems where the objective is overparameterized and establish a faster O(exp(−KT/κ)) convergence rate for FedAvg , in contrast to the O(exp(−T/κ)) rate for individual solvers Ma et al . ( 2018 ) . Within this class , we further consider the linear regression problem and establish an even sharper rate under FedAvg .
In addition , we propose a new variant of accelerated FedAvg based on a momentum update of Liu & Belkin ( 2020 ) , which we call MaSS accelerated FedAvg , and establish a faster convergence rate ( compared to the unaccelerated variant ) . This stands in contrast to generic ( strongly ) convex stochastic problems , where theoretically no rate improvement is obtained when one accelerates FedAvg . The detailed convergence results are summarized in Table 1 . Connections with Distributed and Decentralized Optimization Federated learning is closely related to distributed and decentralized optimization , and as such it is important to discuss connections and distinctions between our work and related results from that literature . First , when there is neither system heterogeneity , i.e . all devices participate in parameter averaging during a communication round , nor statistical heterogeneity , i.e . all devices have access to a common set of stochastic gradients , FedAvg coincides with the “ Local SGD ” of Stich ( 2019 ) , which showed the linear speedup rate O(1/NT) for strongly convex and smooth functions . Woodworth et al . ( 2020b ) and Woodworth et al . ( 2020a ) further improved the communication complexity that guarantees the linear speedup rate . When there is only data heterogeneity , some works have continued to use the term Local SGD to refer to FedAvg , while others subsume it in more general frameworks that include decentralized model averaging based on a network topology or a mixing matrix . These works have provided linear speedup analyses for strongly convex and convex problems , e.g . Khaled et al . ( 2020 ) ; Koloskova et al . ( 2020 ) , as well as non-convex problems , e.g . Jiang & Agrawal ( 2018 ) ; Yu et al . ( 2019b ) ; Wang & Joshi ( 2018 ) . However , these results do not consider system heterogeneity , i.e . the presence of stragglers in the device network .
Even with decentralized model averaging , the assumptions usually imply that the model average over all devices is the same as the decentralized model average based on the network topology ( e.g . Koloskova et al . ( 2020 ) , Proposition 1 ) , which precludes system heterogeneity as defined in this paper and prevalent in FL problems . For momentum accelerated FedAvg , Yu et al . ( 2019a ) provided a linear speedup analysis for non-convex problems , while results for strongly convex and convex settings are entirely lacking , even without system heterogeneity . Karimireddy et al . ( 2019 ) considers both types of heterogeneity for FedAvg , but their rate implies a linear speedup only when the number of stragglers is negligible . In contrast , our linear speedup analyses consider both types of heterogeneity present in the full federated learning setting , and are valid for any number of participating devices . We also highlight a distinction in communication efficiency when system heterogeneity is present . Moreover , our results for Nesterov accelerated FedAvg complete the picture for strongly convex and convex problems . For a detailed comparison with related works , please refer to Table 2 in Appendix Section B . 2 SETUP . In this paper , we study the following federated learning problem : min_w { F(w) ≜ Σ_{k=1}^N p_k F_k(w) } , ( 1 ) where N is the number of local devices ( users/nodes/workers ) and p_k is the k-th device 's weight , satisfying p_k ≥ 0 and Σ_{k=1}^N p_k = 1 . On the k-th local device , there are n_k data points : x_k^1 , x_k^2 , . . . , x_k^{n_k} . The local objective F_k ( · ) is defined as F_k(w) ≜ (1/n_k) Σ_{j=1}^{n_k} ℓ(w ; x_k^j) , where ℓ denotes a user-specified loss function . Each device only has access to its local data , which gives rise to its own local objective F_k . Note that we do not make any assumptions on the data distributions of each local device . The local minimum F_k^* = min_w F_k(w) can be far from the global minimum of Eq ( 1 ) ( data heterogeneity ) .
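The FedAvg heuristic applied to the objective in Eq. (1) can be illustrated with a tiny simulation. The sketch below is an illustrative toy, not the paper's implementation: the quadratic local objectives, the device counts, the step-size schedule, and all other hyperparameters are assumptions chosen for demonstration. Each round samples K of the N devices (system heterogeneity), runs E local SGD steps on each, and averages the returned models.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, E, d = 10, 4, 5, 3   # total devices, devices per round, local steps, dimension

# Toy heterogeneous local objectives: F_k(w) = 0.5 * ||w - c_k||^2 with uniform
# weights p_k = 1/N, so the global minimizer of F(w) is the mean of the centers c_k.
centers = rng.normal(size=(N, d))
w_star = centers.mean(axis=0)

def global_loss(w):
    return 0.5 * float(np.mean(np.sum((w - centers) ** 2, axis=1)))

w = np.zeros(d)
for t in range(300):                               # communication rounds
    lr = 1.0 / (0.1 * t + 10.0)                    # decaying step size, as in strongly convex analyses
    active = rng.choice(N, size=K, replace=False)  # system heterogeneity: only K of N respond
    local_models = []
    for k in active:
        wk = w.copy()
        for _ in range(E):                         # E local SGD steps before communicating
            wk -= lr * (wk - centers[k])           # exact gradient of the toy quadratic
        local_models.append(wk)
    w = np.mean(local_models, axis=0)              # server averages the K returned models
```

With these heterogeneous quadratics, each device's local minimizer c_k differs from the global minimizer, yet the averaged iterate still approaches the global optimum, matching the qualitative behavior the analysis formalizes.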
This paper gives a convergence analysis for FedAvg and its accelerated version under data heterogeneity and system heterogeneity. The main improvement comes from a more careful analysis of the one-step descent, where the authors make use of the term $\alpha_{t} \sum_{k=1}^{N} p_{k}\left[F_{k}\left(\mathbf{w}^{*}\right)-F_{k}\left(\overline{\mathbf{w}}_{t}\right)\right]$ that is ignored by the previous work Li et al. (2020b). The paper focuses on how FedAvg’s convergence scales with the number of participating devices. It improves previous analyses of FedAvg under more general federated settings and shows that FedAvg has a linear speedup for any number of participating devices.
Error Controlled Actor-Critic Method to Reinforcement Learning
1 INTRODUCTION . Reinforcement learning ( RL ) algorithms are combined with function approximation methods to adapt to application scenarios whose state spaces are combinatorial , large , or even continuous . Many function approximation methods , including the Fourier basis ( Konidaris et al. , 2011 ) , kernel regression ( Xu , 2006 ; Barreto et al. , 2011 ; Bhat et al. , 2012 ) , and neural networks ( Barto et al. , 1982 ; Tesauro , 1992 ; Boyan et al. , 1992 ; Gullapalli , 1992 ) , have been used to learn value functions . In recent years , many deep reinforcement learning ( DRL ) methods were implemented by incorporating deep learning into RL methods . The Deep Q-learning Network ( DQN ) ( Mnih et al. , 2013 ) reported by Mnih in 2013 is a typical work that uses a deep convolutional neural network ( CNN ) to represent an action value function estimating future rewards ( returns ) ; it successfully learned end-to-end control policies for seven Atari 2600 games directly from large state spaces . Thereafter , deep RL methods such as Deep Deterministic Policy Gradient ( DDPG ) ( Lillicrap et al. , 2016 ) , Proximal Policy Optimization ( PPO ) ( Schulman et al. , 2017 ) , Twin Delayed Deep Deterministic policy gradient ( TD3 ) ( Fujimoto et al. , 2018 ) , and Soft Actor-Critic ( SAC ) ( Haarnoja et al. , 2018 ) started to become mainstream in the field of RL . Although function approximation methods have helped reinforcement learning ( RL ) algorithms perform well in complex problems by providing great representation power , they also cause an issue called the overestimation phenomenon that jeopardizes the optimization process of RL algorithms . Thrun & Schwartz ( 1993 ) presented a theoretical analysis of this systematic overestimation phenomenon in Q-learning methods that use function approximation . A similar problem persists in actor-critic methods that employ function approximation .
Thomas ( 2014 ) reported that several natural actor-critic algorithms use biased estimates of the policy gradient to update parameters when function approximation is used to approximate the action value function . Fujimoto et al . ( 2018 ) proved that value estimation in the deterministic policy gradient method also leads to the overestimation problem . In brief , the approximation errors of value functions cause inaccuracy in the estimated values , and such inaccuracy induces overestimation of the value function , so that high values may be assigned to actions with poor performance . As a result , policies with poor performance might be obtained . Previous works attempted to find direct strategies to effectively reduce the overestimation . Hasselt ( 2010 ) proposed Double Q-learning , in which the samples are divided into two sets to train two independent Q-function estimators . To diminish the overestimation , one Q-function estimator is used to select actions , and the other one is applied to estimate their values . Fujimoto et al . ( 2018 ) proposed mechanisms , including clipped double Q-learning and delayed policy updates , to minimize the overestimation . In contrast to these methods , we focus on the actor-critic setting and manage to reduce the approximation error of the value function , which is the source of the overestimation , in an indirect but effective way . We use concepts from domain adaptation ( Ben-David et al. , 2010 ) to derive an upper bound on the approximation error of the Q function approximator . Then , we find that the least upper bound of this error can be obtained by minimizing the Kullback-Leibler divergence ( KL divergence ) between the new policy and its previous one . This means that minimizing the KL divergence when training the policy can stabilize the critic and then confine the approximation error in the Q function . Interestingly , we arrive at a similar conclusion as two literatures , Geist et al . ( 2019 ) ; Vieillard et al .
( 2020 ) , by a somewhat different route . In their works , the authors directly studied the effect of KL and entropy regularization in RL and proved that KL regularization indeed leads to an averaging of the errors made at each iteration of the value function update . Our idea is quite different from theirs : it is impracticable to minimize the approximation error directly , so instead we try to minimize an upper bound of the approximation error . This is similar to the Expectation-Maximization algorithm ( Bishop , 2006 ) , which maximizes a lower bound of the log-likelihood instead of the log-likelihood directly . We derive an upper bound on the approximation error of the Q function approximator in actor-critic methods , and arrive at a more general conclusion : the approximation error can be reduced by keeping the new policy close to the previous one . Note that a KL penalty is an effective way to do this , but not the only way . Furthermore , the mentioned indirect operation ( i.e . the KL penalty ) can work together with the mentioned direct strategies for reducing overestimation , for example , clipped double Q-learning . Then , a new actor-critic method called Error Controlled Actor-Critic ( ECAC ) is established by adopting an effective operation that minimizes the KL divergence to keep the upper bound as low as possible . In other words , this method ensures the similarity between every two consecutive policies in the training process and reduces the optimization difficulty of the value function , so that the error in Q function approximators can be decreased . Ablation studies were performed to examine the effectiveness of our proposed strategy for decreasing the approximation error , and comparative evaluations were conducted to verify that our method can outperform other mainstream RL algorithms .
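The indirect operation described above, penalizing the KL divergence between consecutive policies, can be sketched for a single state with discrete actions. This is an illustrative toy, not the ECAC implementation: the softmax policy, the fixed Q values, the numerical gradient, and the hand-set penalty coefficients beta are all assumptions (ECAC additionally adjusts the KL coefficient automatically).

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions = 4
q_values = rng.normal(size=n_actions)   # critic's Q(s, a) for one state (toy values)
old_logits = np.zeros(n_actions)        # previous policy: uniform over actions

def softmax(z):
    z = z - z.max()                     # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def penalized_loss(logits, beta):
    pi, pi_old = softmax(logits), softmax(old_logits)
    expected_q = np.sum(pi * q_values)                  # actor objective (to maximize)
    kl = np.sum(pi * (np.log(pi) - np.log(pi_old)))     # KL(pi_new || pi_old)
    return -expected_q + beta * kl                      # loss = -J + beta * KL

def update(logits, beta, lr=0.5, steps=200, eps=1e-5):
    # central-difference gradient descent: a stand-in for backprop in a real actor network
    for _ in range(steps):
        grad = np.zeros_like(logits)
        for i in range(len(logits)):
            d = np.zeros_like(logits); d[i] = eps
            grad[i] = (penalized_loss(logits + d, beta)
                       - penalized_loss(logits - d, beta)) / (2 * eps)
        logits = logits - lr * grad
    return softmax(logits)

greedy = update(old_logits.copy(), beta=0.01)    # weak penalty: moves far toward argmax Q
cautious = update(old_logits.copy(), beta=10.0)  # strong penalty: stays near the old policy

kl_to_old = lambda p: float(np.sum(p * np.log(p * n_actions)))  # KL(p || uniform old policy)
# A larger beta keeps the new policy closer to the old one: kl_to_old(cautious) is far
# smaller than kl_to_old(greedy).
```

The coefficient beta thus trades off greedy improvement against policy drift, which is exactly the knob the error bound says should be kept tight.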
The main contributions of this paper are summarized as follows : ( 1 ) We present an upper bound on the approximation error of the Q function approximator ; ( 2 ) We propose a practical actor-critic method , ECAC , which decreases the approximation error by restricting the KL divergence between every two consecutive policies and adopts a mechanism to automatically adjust the coefficient of the KL term . 2 PRELIMINARIES . 2.1 REINFORCEMENT LEARNING . Reinforcement learning ( RL ) algorithms are modeled by a mathematical framework called the Markov Decision Process ( MDP ) . In each time-step of an MDP , an agent generates an action based on the current state of its environment , then receives a reward and a new state from the environment . The environmental state and the agent 's action at time t are denoted as s_t ∈ S and a_t ∈ A , respectively ; S and A denote the state and action spaces , respectively , which may be either discrete or continuous . The environment is described by a reward function , r ( s_t , a_t ) , and a transition probability distribution , Pr ( s_{t+1} = s′ | s_t = s , a_t = a ) , which specifies the probability that the environment will transition to the next state . The initial state distribution is denoted as Pr_0 ( s ) . Let π denote a policy and η ( π ) denote its expected discounted reward : η ( π ) = E_π [ R_1 + γ R_2 + γ^2 R_3 + · · · ] = E_π [ Σ_{t=0}^∞ γ^t R_{t+1} ] , ( 1 ) where γ denotes a discount rate and 0 ≤ γ ≤ 1 . The goal of RL is to find a policy , π* , that maximizes a performance function over policies , J ( π ) , which measures the performance of a policy : π* = argmax_π J ( π ) . ( 2 ) A natural form of J ( π ) is η ( π ) . Different interpretations of this optimization goal lead to different routes to its solution . Almost all reinforcement learning algorithms involve estimating value functions , including state-value and action-value functions .
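The discounted return inside the expectation of Eq. (1) can be computed with a simple backward recursion, and η(π) can then be estimated by averaging over sampled episodes. A minimal sketch; the uniform random rewards below are a placeholder for trajectories generated by an actual environment and policy:

```python
import random

def discounted_return(rewards, gamma):
    # sum_{t=0}^{T-1} gamma^t * R_{t+1}, accumulated backwards: g <- r + gamma * g
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

assert discounted_return([1.0, 1.0, 1.0], 0.5) == 1.75  # 1 + 0.5 + 0.25

# Monte Carlo estimate of eta(pi): average the discounted return over sampled episodes.
random.seed(0)
episodes = [[random.uniform(0, 1) for _ in range(10)] for _ in range(1000)]
eta_hat = sum(discounted_return(ep, 0.9) for ep in episodes) / len(episodes)
# Analytically, E[eta_hat] = 0.5 * (1 - 0.9**10) / (1 - 0.9) ≈ 3.26 for these rewards.
```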
The state-value function , V^π ( s ) , gives the expected sum of discounted rewards when starting in s and following a given policy , π . V^π ( s ) is specified by : V^π ( s ) = E_π [ Σ_{k=0}^∞ γ^k R_{t+k+1} | s_t = s ] . ( 3 ) Similarly , the action-value function , Q^π ( s , a ) , is given by : Q^π ( s , a ) = E_π [ Σ_{k=0}^∞ γ^k R_{t+k+1} | s_t = s , a_t = a ] . ( 4 ) 2.2 ACTOR-CRITIC ARCHITECTURE . To avoid confusion , by default we discuss only RL methods with function approximation in this section . RL methods can be roughly divided into three categories : 1 ) value-based , 2 ) policy-based , and 3 ) actor-critic methods . Value-based methods only learn value functions ( state-value or action-value functions ) , and have the advantage of fast convergence . Policy-based methods primarily learn parameterized policies . A parameterized policy ( with parameter vector θ ) is either a distribution over actions given a state , π_θ ( a | s ) , or a deterministic function , a = π_θ ( s ) . Their basic update is θ_{n+1} = θ_n + α ∇J ( θ_n ) , where α is the learning rate . Policy-based methods show better convergence guarantees but have high variance in gradient estimates . Actor-critic methods learn both value functions and policies , and use value functions to improve policies . In this way , they trade a small bias in gradient estimates for low variance in gradient estimates . The actor-critic architecture ( Peters & Schaal , 2008 ; Degris et al. , 2013 ; Sutton & Barto , 2018 ) consists of two components : the actor and critic modules . The critic module learns a state-value function , V_φ ( s ) , or an action-value function , Q_φ ( s , a ) , or both of them , usually by temporal-difference ( TD ) methods . The actor module learns a stochastic policy , π_θ ( a | s ) , or a deterministic policy , a = π_θ ( s ) , and utilizes the value function to improve the policy . For example , in the actor module of DDPG ( Lillicrap et al.
, 2016 ) , the policy is updated by using the following performance function : J ( θ ) = E_{π_θ} [ Q_φ ( s_t , π_θ ( s_t ) ) ] , ( 5 ) where π_θ ( s_t ) is a deterministic policy . 2.3 DOMAIN ADAPTATION . Domain adaptation is a task which aims at adapting a well-performing model from a source domain to a different target domain . It is used to describe the task of the critic module in Section 3.2 . The learning task of the critic module is viewed as adapting a learned Q function approximator to the next one , and the target error equates to the approximation error at the current iteration of the critic update . Here , we present some concepts in domain adaptation , including domain , source error , and target error . A domain is defined as a pair consisting of a distribution , P , on an input space , X , and a labeling function , f : X → R . In domain adaptation , the source and target domains are denoted as 〈P_S , f_S〉 and 〈P_T , f_T〉 , respectively . A function , h : X → R , is called a hypothesis . The source error is the difference between a hypothesis , h ( x ) , and the labeling function of the source domain , f_S ( x ) , on the source distribution : e_S ( h , f_S ) = E_{x∼P_S} [ | h ( x ) − f_S ( x ) | ] . ( 6 ) The target error is the difference between a hypothesis , h ( x ) , and the labeling function of the target domain , f_T ( x ) , on the target distribution : e_T ( h , f_T ) = E_{x∼P_T} [ | h ( x ) − f_T ( x ) | ] . ( 7 ) For convenience , we use the shorthand e_S ( h ) = e_S ( h , f_S ) and e_T ( h ) = e_T ( h , f_T ) .
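The definitions in Eqs. (6) and (7) can be checked numerically with Monte Carlo estimates. The sketch below is a toy instantiation, where the Gaussian input distributions and linear labeling functions are assumptions for illustration, not the paper's critic setting, for a hypothesis that matches the source labeling function but not the target one:

```python
import numpy as np

rng = np.random.default_rng(2)

f_S = lambda x: 2.0 * x          # source labeling function
f_T = lambda x: 2.0 * x + 1.0    # target labeling function (shifted by 1)
h   = lambda x: 2.0 * x          # hypothesis: identical to f_S

x_src = rng.normal(0.0, 1.0, size=100_000)   # samples from P_S
x_tgt = rng.normal(1.0, 1.0, size=100_000)   # samples from P_T (shifted domain)

e_S = np.mean(np.abs(h(x_src) - f_S(x_src)))   # Eq. (6): source error
e_T = np.mean(np.abs(h(x_tgt) - f_T(x_tgt)))   # Eq. (7): target error

# h agrees with f_S everywhere, so e_S = 0; it misses f_T by exactly 1, so e_T = 1.
```

In the critic analogy, a Q approximator fit well to the previous policy's data (small source error) can still incur a large target error once the policy, and hence the data distribution and labels, moves, which is the gap the KL penalty is meant to control.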
In this paper, the authors study the error introduced by the estimation of the critic function in actor-critic algorithms. The authors then propose an algorithm that utilizes the idea of double Q-learning and uses a KL-divergence regularization to control this error. Experimentally, the proposed algorithm achieves good results compared to the vanilla actor-critic algorithm. This paper shows a successful route from theoretical analysis to a practical algorithm.
SP:343ef3ab797100bbd2d8bc91a6fe9d05a67a897a
Error Controlled Actor-Critic Method to Reinforcement Learning
1 INTRODUCTION . Reinforcement learning ( RL ) algorithms are combined with function approximation methods to adapt to the application scenarios whose state spaces are combinatorial , large , or even continuous . Many function approximation methods RL methods , including the Fourier basis ( Konidaris et al. , 2011 ) , kernel regression ( Xu , 2006 ; Barreto et al. , 2011 ; Bhat et al. , 2012 ) , and neural neworks ( Barto et al. , 1982 ; Tesauro , 1992 ; Boyan et al. , 1992 ; Gullapalli , 1992 ) have been used to learn value functions . In recent years , many deep reinforcement learning ( DRL ) methods were implemented by incorporating deep learning into RL methods . Deep Q-learning Network ( DQN ) ( Mnih et al. , 2013 ) reported by Mnih in 2013 is a typical work that uses a deep convolutional neural network ( CNN ) to represent a suitable action value function estimating future rewards ( returns ) ; it successfully learned end-to-end control policies for seven Atari 2600 games directly from large state spaces . Thereafter , deep RL methods , such as Deep Deterministic Policy Gradient ( DDPG ) ( Lillicrap et al. , 2016 ) , Proximal Policy Optimization ( PPO ) ( Schulman et al. , 2017 ) , Twin Delayed Deep Deterministic policy gradient ( TD3 ) ( Fujimoto et al. , 2018 ) , and Soft Actor-Critic ( SAC ) ( Haarnoja et al. , 2018 ) , started to become mainstream in the field of RL . Althouth function approximation methods have assisted reinforcement learning ( RL ) algorithms to perform well in complex problems by providing great representation power ; however , they also cause an issue called overestimation phenomenon that jeopardize the optimization process of RL algorithms . Thrun & Schwartz ( 1993 ) presented a theoretical analysis of this systematic overestimation phenomenon in Q-learning methods that use function approximation methods . Similar problem persists in the actor-critic methods employed function approximation methods . 
Thomas ( 2014 ) reported that several natural actor-critic algorithms use biased estimates of policy gradient to update parameters when using function approximation to approximate the action value function . Fujimoto et al . ( 2018 ) proved that the value estimation in the deterministic policy gradient method also lead to overestimation problem . In brief , the approximation errors of value functions caused the inaccuracy of estimated values , and such inaccuracy induced the overestimation on value function ; so that poor performances might be assigned to high reward values . As a result , policies with poor performance might be obtained . Previous works attempted to find direct strategies to effectively reduce the overestimation . Hasselt ( 2010 ) proposed Double Q-learning , in which the samples are divided into two sets to train two independent Q-function estimators . To diminish the overestimation , one Q-function estimator is used to select actions , and the other one is applied to estimate its value . Fujimoto et al . ( 2018 ) proposed mechanisms , including clipped double Q-learning and delayed policy updates , to minimize the overestimation . In contrast to these methods , we focus on actor-critic setting and manage to reduce the approximation error of value function , which is the source of the overestimation , in an indirect but effective way . We use the concepts of domain adaptation ( Ben-David et al. , 2010 ) to derive an upper boundary of the approximation error in Q function approximator . Then , we find that the least upper bound of this error can be obtained by minimizing the Kullback-Leibler divergence ( KL divergence ) between new policy and its previous one . This means minimizing the KL divergence when traning policy can stabilize the critic and then confine the approximation error in Q function . Interestingly , we arrive at similar conclusion as two literatures Geist et al . ( 2019 ) ; Vieillard et al . 
( 2020 ) by a somewhat different route . In their works , the authors directly studied the effect of KL and entropy regularization in RL and proved that a KL regularization indeed leads to averaging errors made at each iteration of value function update . While our idea is very different from theirs : It is impracticable to minimize the approximation error directly , so instead we try to minimize an upper bound of approximation error . This is similar to Expectation-Maximization Algorithm ( Bishop , 2006 ) which maximize a lower bound of log-likelihood instead of log-likelihood directly . We derive an upper boundary of approximation error for Q function approximatorin actor-critic methods , and arrive at a more general conclusion : approximation error can be reduced by keep new policy close to the previous one . Note that KL penalty is a effective way , but not the only way . Furthermore , the mentioned indirect operation ( i.e . the KL penalty ) can work together with the mentioned direct strategies for reducing overestimation , for example , clipped double Q-learning . Then , a new actor-critic method called Error Controlled Actor-critic ( ECAC ) is established by adopting an effective operation that minimizes the KL divergence to keep the upper bound as low as possible . In other words , this method ensures the similarity between every two consecutive polices in training process and reduces the optimization difficulty of value function , so that the error in Q function approximators can be decreased . Ablation studies were performed to examine the effectiveness of our proposed strategy for decreasing the approximation error , and comparative evaluations were conducted to verify that our method can outperform other mainstream RL algorithms . 
The main contributions of this paper are summarized as follows : ( 1 ) we present an upper bound on the approximation error of the Q-function approximator ; ( 2 ) we propose a practical actor-critic method—ECAC—which decreases the approximation error by restricting the KL divergence between every two consecutive policies and adopts a mechanism to automatically adjust the coefficient of the KL term . 2 PRELIMINARIES . 2.1 REINFORCEMENT LEARNING . Reinforcement learning ( RL ) algorithms are modeled within a mathematical framework called the Markov Decision Process ( MDP ) . In each time step of an MDP , an agent generates an action based on the current state of its environment , then receives a reward and a new state from the environment . The environmental state and the agent 's action at time t are denoted s_t ∈ S and a_t ∈ A , respectively ; S and A denote the state and action spaces , which may be either discrete or continuous . The environment is described by a reward function , r ( s_t , a_t ) , and a transition probability distribution , Pr ( s_{t+1} = s′ | s_t = s , a_t = a ) , which specifies the probability that the environment transitions to the next state . The initial state distribution is denoted Pr_0 ( s ) . Let π denote a policy and η ( π ) its expected discounted reward : η ( π ) = E_π [ R_1 + γR_2 + γ²R_3 + · · · ] = E_π [ ∑_{t=0}^∞ γ^t R_{t+1} ] , ( 1 ) where γ is a discount rate with 0 ≤ γ ≤ 1 . The goal of RL is to find a policy , π* , that maximizes a performance function , J ( π ) , which measures the performance of a policy : π* = argmax_π J ( π ) . ( 2 ) A natural form of J ( π ) is η ( π ) . Different interpretations of this optimization goal lead to different routes to its solution . Almost all reinforcement learning algorithms involve estimating value functions , including state-value and action-value functions .
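The discounted return in Eq. (1) can be computed for every time step with a single backward scan, using the recursion G_t = R_{t+1} + γG_{t+1} (a minimal sketch):

```python
def discounted_returns(rewards, gamma):
    """Compute G_t = sum_k gamma^k * R_{t+k+1} for every t in one
    backward pass over the reward sequence."""
    returns = [0.0] * len(rewards)
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g  # G_t = R_{t+1} + gamma * G_{t+1}
        returns[t] = g
    return returns
```

For example, with rewards [1, 1, 1] and γ = 0.5 this yields [1.75, 1.5, 1.0].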
The state-value function , V^π ( s ) , gives the expected sum of discounted rewards when starting in s and following a given policy π . It is specified by : V^π ( s ) = E_π [ ∑_{k=0}^∞ γ^k R_{t+k+1} | s_t = s ] . ( 3 ) Similarly , the action-value function , Q^π ( s , a ) , is given by : Q^π ( s , a ) = E_π [ ∑_{k=0}^∞ γ^k R_{t+k+1} | s_t = s , a_t = a ] . ( 4 ) 2.2 ACTOR-CRITIC ARCHITECTURE . To avoid confusion , by default we discuss only RL methods with function approximation in this section . RL methods can be roughly divided into three categories : 1 ) value-based , 2 ) policy-based , and 3 ) actor-critic methods . Value-based methods only learn value functions ( state-value or action-value functions ) and have the advantage of fast convergence . Policy-based methods primarily learn parameterized policies . A parameterized policy ( with parameter vector θ ) is either a distribution over actions given a state , π_θ ( a|s ) , or a deterministic function , a = π_θ ( s ) . Their basic update is θ_{n+1} = θ_n + α∇J ( θ_n ) , where α is the learning rate . Policy-based methods offer better convergence guarantees but have high variance in their gradient estimates . Actor-critic methods learn both value functions and policies and use the value functions to improve the policies . In this way , they trade a small bias in the gradient estimates for low variance . The actor-critic architecture ( Peters & Schaal , 2008 ; Degris et al. , 2013 ; Sutton & Barto , 2018 ) consists of two components : an actor module and a critic module . The critic module learns a state-value function , V_φ ( s ) , an action-value function , Q_φ ( s , a ) , or both , usually by temporal-difference ( TD ) methods . The actor module learns a stochastic policy , π_θ ( a|s ) , or a deterministic policy , a = π_θ ( s ) , and utilizes the value function to improve the policy . For example , in the actor module of DDPG ( Lillicrap et al.
, 2016 ) , the policy is updated using the following performance function : J ( θ ) = E_{π_θ} [ Q_φ ( s_t , π_θ ( s_t ) ) ] , ( 5 ) where π_θ ( s_t ) is a deterministic policy . 2.3 DOMAIN ADAPTATION . Domain adaptation is a task that aims at adapting a well-performing model from a source domain to a different target domain . It is used to describe the task of the critic module in Section 3.2 : the learning task of the critic module is viewed as adapting a learned Q-function approximator to the next one , and the target error equates to the approximation error at the current iteration of the critic update . Here , we present some concepts from domain adaptation , including domain , source error , and target error . A domain is defined as a pair consisting of a distribution , P , on an input space , X , and a labeling function , f : X → R . In domain adaptation , the source and target domains are denoted ⟨P_S , f_S⟩ and ⟨P_T , f_T⟩ , respectively . A function , h : X → R , is called a hypothesis . The source error is the difference between a hypothesis , h ( x ) , and the labeling function of the source domain , f_S ( x ) , on the source distribution : e_S ( h , f_S ) = E_{x∼P_S} [ |h ( x ) − f_S ( x ) | ] . ( 6 ) The target error is the difference between a hypothesis , h ( x ) , and the labeling function of the target domain , f_T ( x ) , on the target distribution : e_T ( h , f_T ) = E_{x∼P_T} [ |h ( x ) − f_T ( x ) | ] . ( 7 ) For convenience , we use the shorthand e_S ( h ) = e_S ( h , f_S ) and e_T ( h ) = e_T ( h , f_T ) .
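The source and target errors in Eqs. (6) and (7) are expectations of an absolute difference, so each can be estimated by a Monte-Carlo average over samples drawn from the corresponding distribution (a minimal sketch; the function name is illustrative):

```python
import numpy as np

def empirical_error(h, f, xs):
    """Monte-Carlo estimate of e(h, f) = E_{x ~ P}[ |h(x) - f(x)| ],
    where xs are samples drawn from the domain's distribution P.
    Passing samples from P_S with f = f_S gives e_S(h); samples from
    P_T with f = f_T give e_T(h)."""
    return float(np.mean([abs(h(x) - f(x)) for x in xs]))
```

The same estimator serves both domains; only the sample set and labeling function change.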
The authors investigate the effect of approximation error in actor-critic methods. They derive an upper bound on the approximation error and show that minimizing the KL divergence between two consecutive policies drives this upper bound down. Based on this finding they introduce the Error Controlled Actor-Critic (ECAC) algorithm. They run an ablation study showing the positive impact of minimizing the KL divergence. Furthermore, they compare ECAC against 4 state-of-the-art techniques, showing its advantage on 4 out of 5 MuJoCo domains.
SP:343ef3ab797100bbd2d8bc91a6fe9d05a67a897a
Open-world Semi-supervised Learning
1 INTRODUCTION . With the advent of deep learning , remarkable breakthroughs have been achieved , and current machine learning systems excel on tasks with large quantities of labeled data ( Hinton et al. , 2012 ; LeCun et al. , 2015 ; Silver et al. , 2016 ; Esteva et al. , 2017 ) . Despite these strengths , the vast majority of models are designed for the closed-world setting , rooted in the assumption that training and test data come from the same set of predefined classes ( Bendale & Boult , 2015 ; Boult et al. , 2019 ) . This assumption , however , rarely holds for data in the wild , as labeling data depends on having complete knowledge of a given domain , which is rarely the case in practice . For example , biologists may prelabel some of the known cell types ( seen classes ) , and then want to train and apply the model to a new tissue to identify known cell types but also to discover novel , previously unknown cell types ( unseen classes ) . Similarly , in social networks one may want to classify users into predefined interest groups while also discovering new unknown/unlabeled interests of users . Thus , in contrast to the commonly assumed closed world , many real-world problems are inherently open-world : new classes can emerge in the test data that may have never been seen ( and labeled ) during training . Here we introduce the open-world semi-supervised learning ( open-world SSL ) setting , which generalizes semi-supervised learning and novel class discovery . Under open-world SSL , we are given a labeled training dataset as well as an unlabeled dataset . The labeled dataset contains instances that belong to a set of seen classes , while instances in the unlabeled/test dataset belong both to the seen classes and to an unknown number of unseen classes ( Figure 1 ) . Under this setting , the model needs to simultaneously classify instances into previously seen classes and also discover new classes and assign instances to them .
In other words , open-world SSL is a transductive learning setting under class distribution mismatch , in which the unlabeled test set may contain classes that have never been labeled during training , i.e. , are not part of the labeled training set . Given the unlabeled test set , the model needs to either assign instances to one of the classes previously seen in the labeled set , or form a novel class and assign instances to it . Open-world SSL is fundamentally different from , but closely related to , two recent lines of work : robust semi-supervised learning ( SSL ) and novel class discovery . Robust SSL ( Oliver et al. , 2018 ; Guo et al. , 2020 ; Chen et al. , 2020b ; Yu et al. , 2020 ) assumes a class distribution mismatch between labeled and unlabeled data , but in this setting the model only needs to recognize ( reject ) instances belonging to novel classes in the unlabeled data as out-of-distribution instances . In contrast , instead of rejecting instances belonging to novel classes , open-world SSL aims at discovering individual novel classes and then assigning instances to them . Novel class discovery ( Hsu et al. , 2018 ; 2019 ; Han et al. , 2019 ; 2020 ; Zhong et al. , 2021 ) is a clustering problem in which one assumes the unlabeled data is composed only of novel classes . In contrast , open-world SSL is more general , as instances in the unlabeled data can come from seen as well as from novel classes . To apply robust SSL and novel class discovery methods to open-world SSL , one could in principle adopt a multi-step approach : first use robust SSL to reject examples from novel classes , and then apply a novel class discovery method on the rejected instances to discover novel classes . Alternatively , one could treat all classes as “ novel ” , apply novel class discovery methods , and then match some of the classes back to the seen classes in the labeled dataset .
However , our experiments show that such ad hoc approaches do not perform well in practice . Therefore , it is necessary to design a method that can solve this practical problem in an end-to-end framework . In this paper we propose ORCA ( Open-woRld with unCertainty based Adaptive margin ) , which operates under the novel open-world SSL setting . ORCA effectively assigns examples from the unlabeled data to previously seen classes , or forms novel classes by grouping similar instances . ORCA is an end-to-end deep learning framework whose key component is a novel uncertainty-adaptive margin mechanism that gradually decreases plasticity and increases discriminability of the model during training . This mechanism effectively reduces an undesired gap between the intra-class variance of seen and novel classes , caused by learning seen classes faster than novel ones , which we show is a critical difficulty in this setting . We then develop a special model training procedure that learns to classify data points into a set of previously seen classes while also learning to use an additional classification head for each newly discovered class . Classification heads for seen classes are used to assign the unlabeled examples to classes from the labeled set , while activating additional classification heads allows ORCA to form a novel class . ORCA does not need to know the number of novel classes ahead of time and can automatically discover them at deployment time . We evaluate ORCA on three benchmark image classification datasets adapted for open-world SSL and on a single-cell dataset from the biology domain . Since no existing methods can operate under the open-world SSL setting , we first extend existing state-of-the-art SSL , open-set recognition , and novel class discovery methods to the open-world SSL setting and then compare them to ORCA .
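The head layout described above—classification heads for seen classes plus additional heads whose activation forms novel classes—can be sketched as a simple assignment rule over per-head logits (a minimal illustration under assumed conventions; this is not ORCA's actual implementation, and the head ordering is a hypothetical choice for the sketch):

```python
import numpy as np

def assign_classes(logits, n_seen):
    """Given per-instance logits over n_seen seen-class heads followed by
    extra heads reserved for novel classes, return (head_index, is_novel)
    for each instance. A head index >= n_seen means the instance was
    assigned to a newly formed novel class."""
    preds = np.argmax(logits, axis=-1)
    return [(int(p), bool(p >= n_seen)) for p in preds]
```

Under this convention, an instance whose strongest activation falls on one of the first n_seen heads is labeled with a seen class; otherwise it is grouped into a novel class.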
Experimental results demonstrate that ORCA effectively addresses the challenges of open-world SSL and consistently outperforms all baselines by a large margin . Specifically , ORCA achieves 25 % and 96 % improvements on seen and novel classes of the ImageNet dataset . Moreover , we show that ORCA is robust to an unknown number of novel classes , different distributions of seen and novel classes , unbalanced data distributions , pretraining strategies , and a small number of labeled examples . 2 RELATED WORK . We summarize similarities and differences between open-world SSL and related settings . Additional related work is given in Appendix A . Novel class discovery . In novel class discovery ( Hsu et al. , 2018 ; Han et al. , 2020 ; Brbic et al. , 2020 ; Zhong et al. , 2021 ) , the task is to cluster an unlabeled dataset consisting of classes similar to , but completely disjoint from , those present in the labeled dataset , which is utilized to learn a better representation for clustering . These methods assume that at test time all the classes are novel . While these methods are able to discover novel classes , they do not recognize the seen/known classes . In contrast , our open-world SSL is more general because the unlabeled test set consists of novel classes but also classes previously seen in the labeled data that need to be identified . In principle , one could extend novel class discovery methods by treating all classes as “ novel ” at test time and then matching some of them to the known classes from the labeled dataset . We adopt such approaches as our baselines , but our experiments show that they do not perform well in practice . Semi-supervised learning ( SSL ) . SSL methods ( Chapelle et al. , 2009 ; Kingma et al. , 2014 ; Laine & Aila , 2017 ; Zhai et al. , 2019 ; Lee , 2013 ; Xie et al. , 2020 ; Berthelot et al. , 2019 ; 2020 ; Sohn et al. , 2020 ) assume the closed-world setting , in which labeled and unlabeled data come from the same set of classes .
Robust SSL methods ( Oliver et al. , 2018 ; Chen et al. , 2020b ; Guo et al. , 2020 ; Yu et al. , 2020 ) relax the SSL assumption by allowing instances from novel classes to appear in the unlabeled test set . The goal in robust SSL is to reject instances from novel classes , which are treated as out-of-distribution instances . Instead of rejecting instances from novel classes , in open-world SSL the goal is to discover individual novel classes and then assign datapoints to them . To extend robust SSL to open-world SSL , one could take the discarded points and then apply clustering/novel class discovery . Early work ( Miller & Browning , 2003 ) considered solving the problem in this way using an extension of the EM algorithm . However , our experiments show that , by discarding these instances , the embedding learned by these methods does not allow for accurate discovery of novel classes . Open-set and open-world recognition . Open-set recognition ( Scheirer et al. , 2012 ; Geng et al. , 2020 ; Bendale & Boult , 2016 ; Ge et al. , 2017 ; Sun et al. , 2020a ) considers the inductive setting in which novel classes can appear during testing , and the model needs to reject instances from novel classes . To extend these methods to the open-world setting , we include a baseline that discovers classes on the rejected instances . However , results show that such approaches cannot effectively address the challenges of open-world SSL . Similarly , open-world recognition approaches ( Bendale & Boult , 2015 ; Rudd et al. , 2017 ; Boult et al. , 2019 ) require the system to incrementally learn and extend the set of known classes with novel classes . These methods incrementally label novel classes via a human-in-the-loop . In contrast , open-world SSL leverages unlabeled data in the learning stage and does not require a human-in-the-loop . Generalized zero-shot learning ( GZSL ) . Like open-world SSL , GZSL ( Xian et al. , 2017 ; Liu et al. , 2018 ; Chao et al.
, 2016 ) assumes that classes seen in the labeled set and novel classes are both present at test time . However , GZSL imposes an additional assumption about the availability of prior knowledge given as auxiliary attributes that uniquely describe each individual class , including the novel classes . This restrictive assumption severely limits the application of GZSL methods in practice . In contrast , open-world SSL is more general as it does not assume any prior information about classes . 3 PROPOSED APPROACH . In this section , we first define the open-world SSL setting . We follow with an overview of the ORCA framework and then introduce each of its components in detail . 3.1 OPEN-WORLD SEMI-SUPERVISED LEARNING SETTING . In open-world SSL , we assume a transductive learning setting in which a labeled part of the dataset , D_l = { ( x_i , y_i ) }_{i=1}^m , and an unlabeled part of the dataset , D_u = { x_i }_{i=1}^n , are given at the input . We denote the set of classes seen in the labeled data as C_l and the set of classes in the unlabeled test data as C_u . In open-world SSL , we assume a category/class shift , i.e. , C_l ∩ C_u ≠ ∅ and C_l ≠ C_u . We consider C_s = C_l ∩ C_u the set of seen classes and C_n = C_u \ C_l the set of novel classes . Definition 1 ( Open-world SSL ) . In open-world SSL , the model needs to assign instances from D_u either to previously seen classes C_s , or form a novel class c ∈ C_n and assign datapoints to it . Note that open-world SSL generalizes novel class discovery and traditional ( closed-world ) SSL . Novel class discovery assumes that classes in labeled and unlabeled data are disjoint , i.e. , C_l ∩ C_u = ∅ , while ( closed-world ) SSL assumes the same classes in labeled and unlabeled data , i.e. , C_l = C_u .
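The class-set definitions in Section 3.1 translate directly into set operations (a minimal sketch; the function and variable names are illustrative):

```python
def split_classes(labeled_classes, unlabeled_classes):
    """Given the classes in the labeled data (Cl) and in the unlabeled
    data (Cu), return the seen classes Cs = Cl ∩ Cu and the novel
    classes Cn = Cu \ Cl, per the open-world SSL setting."""
    cl, cu = set(labeled_classes), set(unlabeled_classes)
    return cl & cu, cu - cl
```

Novel class discovery corresponds to the special case where the intersection is empty, and closed-world SSL to the case where the two sets are equal.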
The paper considers the open-world SSL setting, in which the model recognizes previously seen classes and detects novel classes that are not present in the labeled dataset. The method uses three losses to train a model in this setting: a) a supervised loss on labeled data, b) an unsupervised loss on unlabeled data from pseudo-labels obtained from confident pairwise similarities, and c) a regularization term that avoids assigning all the unlabeled samples to the same class. The paper then evaluates the effectiveness of the method on the CIFAR-10/100 and ImageNet-100 datasets.
SP:281bc59d639aa76d84921b3ec4ce1ee8f1ba5b51
The authors propose a method to tackle a new semi-supervised learning setting, called open-world semi-supervised learning, where the model is required to accurately discriminate known-class data as well as to appropriately discover unknown classes contained in an unlabeled dataset. The objective function minimized by the proposed method comprises three terms: an unsupervised loss, a supervised loss with an uncertainty-based adaptive margin, and an entropy regularization. Experimental results on several datasets validate the advantage of the proposed method.
SP:281bc59d639aa76d84921b3ec4ce1ee8f1ba5b51
Learning with Instance-Dependent Label Noise: A Sample Sieve Approach
1 INTRODUCTION . Deep neural networks ( DNNs ) have gained popularity in a wide range of applications . The remarkable success of DNNs often relies on the availability of large-scale datasets . However , data annotation inevitably introduces label noise , and it is extremely expensive and time-consuming to clean up the corrupted labels . The existence of label noise can weaken the true correlation between features and labels as well as introduce artificial correlation patterns . Thus , mitigating the effects of noisy labels becomes a critical issue that needs careful treatment . It is challenging to avoid overfitting to noisy labels , especially when the noise depends on both the true labels Y and the features X . Unfortunately , this tends to be the case when human annotations are prone to different levels of error for tasks with varying difficulty . Recent work has also shown that the presence of instance-dependent noisy labels imposes additional challenges and cautions for training in this scenario ( Liu , 2021 ) . For such instance-dependent ( or feature-dependent , instance-based ) label noise settings , theory-supported works usually focus on loss correction , which requires estimating noise rates ( Xia et al. , 2020 ; Berthon et al. , 2020 ) . Recent work by Cheng et al . ( 2020 ) addresses bounded instance-based noise by first learning the noisy distribution and then distilling examples according to some thresholds.1 However , with a limited dataset size , learning an accurate noisy distribution for each example is a non-trivial task . Additionally , the size and quality of the distilled examples are sensitive to the thresholds used for distillation . ∗Equal contributions in alphabetical ordering . Hao leads experiments and Zhaowei leads theories . †Corresponding authors : Y. Liu and Z. Zhu { yangliu , zwzhu } @ ucsc.edu . 1The proposed solution is primarily studied for the binary case in Cheng et al . ( 2020 ) .
Departing from the above line of work, we design a sample sieve with theoretical guarantees that provides a high-quality split of clean and corrupted examples without the need to estimate noise rates. Instead of learning the noisy distributions or noise rates, we focus on learning the underlying clean distribution and design a regularization term that improves the confidence of the learned classifier, which is proven to help safely sieve out corrupted examples. With the division between "clean" and "corrupted" examples, our training enjoys performance improvements by treating the clean examples (using a standard loss) and the corrupted ones (using an unsupervised consistency loss) separately. We summarize our main contributions: 1) We propose to train a classifier using a novel confidence regularization (CR) term and theoretically guarantee that, under mild assumptions, minimizing the confidence-regularized cross-entropy (CE) loss on the instance-based noisy distribution is equivalent to minimizing the pure CE loss on the corresponding "unobservable" clean distribution. This classifier is also shown to be helpful for evaluating each example to build our sample sieve. 2) We provide a theoretically sound sample sieve that simply compares an example's regularized loss with a closed-form threshold explicitly determined by the predictions of the above model trained with our confidence-regularized loss, without any extra estimates. 3) To the best of our knowledge, the proposed CORES² (COnfidence REgularized Sample Sieve) is the first method that is thoroughly studied for a multi-class classification problem, has theoretical guarantees against overfitting to instance-dependent label noise, and provides a high-quality division without knowing or estimating noise rates.
4) By decoupling the regularized loss into separate additive terms, we also provide a novel and promising mechanism for understanding and controlling the effects of general instance-dependent label noise. 5) CORES² achieves competitive performance on multiple datasets, including CIFAR-10, CIFAR-100, and Clothing1M, under different label noise settings. Other related works In addition to the recent works by Xia et al. (2020), Berthon et al. (2020), and Cheng et al. (2020), we briefly overview the other most relevant references; detailed related work is left to Appendix A. Making the loss function robust to label noise is important for building a robust machine learning model (Zhang et al., 2016). One popular direction is loss correction, which first estimates the transition matrix (Patrini et al., 2017; Vahdat, 2017; Xiao et al., 2015; Zhu et al., 2021b; Yao et al., 2020b), and then performs correction/reweighting via forward or backward propagation, or further revises the estimated transition matrix with controllable variations (Xia et al., 2019). The other line of work focuses on designing specific losses without estimating transition matrices (Natarajan et al., 2013; Xu et al., 2019; Liu & Guo, 2020; Wei & Liu, 2021). However, these works assume the label noise is instance-independent, which limits their applicability. Another approach is sample selection (Jiang et al., 2017; Han et al., 2018; Yu et al., 2019; Northcutt et al., 2019; Yao et al., 2020a; Wei et al., 2020; Zhang et al., 2020a), which selects the "small-loss" examples as clean ones. However, we find this approach only works well under instance-independent label noise. Approaches such as label correction (Veit et al., 2017; Li et al., 2017; Han et al., 2019) or semi-supervised learning (Li et al., 2020; Nguyen et al., 2019) also lack guarantees for instance-based label noise.
2 CORES² : CONFIDENCE REGULARIZED SAMPLE SIEVE Consider a classification problem on a set of $N$ training examples denoted by $D := \{(x_n, y_n)\}_{n \in [N]}$, where $[N] := \{1, 2, \cdots, N\}$ is the set of example indices. Examples $(x_n, y_n)$ are drawn according to random variables $(X, Y) \in \mathcal{X} \times \mathcal{Y}$ from a joint distribution $\mathcal{D}$. Let $\mathcal{D}_X$ and $\mathcal{D}_Y$ be the marginal distributions of $X$ and $Y$. The classification task aims to identify a classifier $f : \mathcal{X} \to \mathcal{Y}$ that maps $X$ to $Y$ accurately. One common approach is minimizing the empirical risk using DNNs with respect to the cross-entropy loss defined as $\ell(f(x), y) = -\ln(f_x[y])$, $y \in [K]$, where $f_x[y]$ denotes the $y$-th component of $f(x)$ and $K$ is the number of classes. In real-world applications, such as human-annotated images (Krizhevsky et al., 2012; Zhang et al., 2017) and medical diagnosis (Agarwal et al., 2016), the learner can only observe a set of noisy labels. For instance, human annotators may wrongly label some images containing cats as ones that contain dogs, accidentally or irresponsibly. The label noise of each instance is characterized by a noise transition matrix $T(X)$, where each element $T_{ij}(X) := \mathbb{P}(\tilde{Y} = j \mid Y = i, X)$. The corresponding noisy dataset² and distribution are denoted by $\tilde{D} := \{(x_n, \tilde{y}_n)\}_{n \in [N]}$ and $\tilde{\mathcal{D}}$. Let $\mathbb{1}(\cdot)$ be the indicator function taking value 1 when the specified condition is satisfied and 0 otherwise. Similar to the goals of the surrogate loss (Natarajan et al., 2013), $L_{\mathrm{DMI}}$ (Xu et al., 2019), and peer loss (Liu & Guo, 2020), we aim to learn a classifier $f$ from the noisy distribution $\tilde{\mathcal{D}}$ which also minimizes $\mathbb{P}(f(X) \neq Y)$, $(X, Y) \sim \mathcal{D}$. ²In this paper, the noisy dataset refers to a dataset with noisy examples. A noisy example is either a clean example (whose label is true) or a corrupted example (whose label is wrong).
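To make the setup concrete, here is a minimal NumPy sketch (all names and values hypothetical) of the per-example CE loss $\ell(f(x), y) = -\ln(f_x[y])$ and of drawing a noisy label from an instance-dependent transition matrix $T(x)$:

```python
import numpy as np

def cross_entropy(probs, y):
    """Per-example CE loss: l(f(x), y) = -ln(f_x[y])."""
    return -np.log(probs[y])

def sample_noisy_label(y, T_x, rng):
    """Draw a noisy label from row y of the instance-dependent transition
    matrix T(x), where T_ij(x) = P(Y_tilde = j | Y = i, X = x)."""
    return rng.choice(len(T_x), p=T_x[y])

rng = np.random.default_rng(0)
probs = np.array([0.7, 0.2, 0.1])    # hypothetical prediction f(x), K = 3
y_clean = 0
T_x = np.array([[0.8, 0.1, 0.1],     # hypothetical T(x); each row sums to 1
                [0.1, 0.8, 0.1],
                [0.1, 0.1, 0.8]])
y_noisy = sample_noisy_label(y_clean, T_x, rng)
loss = cross_entropy(probs, y_noisy)
```

In the instance-dependent setting, a different matrix `T_x` would be attached to each feature vector $x$, which is exactly what makes the noise rates hard to estimate.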
Beyond their results, we propose a theoretically sound approach that addresses a general instance-based noise regime without knowing or estimating noise rates. 2.1 CONFIDENCE REGULARIZATION. In this section, we present a new confidence regularizer (CR). Our design of the CR is mainly motivated by a recently proposed robust loss function called peer loss (Liu & Guo, 2020). For each example $(x_n, \tilde{y}_n)$, peer loss has the following form: $\ell_{\mathrm{PL}}(f(x_n), \tilde{y}_n) := \ell(f(x_n), \tilde{y}_n) - \ell(f(x_{n_1}), \tilde{y}_{n_2})$, where $(x_{n_1}, \tilde{y}_{n_1})$ and $(x_{n_2}, \tilde{y}_{n_2})$ are two peer examples randomly sampled and paired (with replacement) for $n$. Let $X_{n_1}$ and $\tilde{Y}_{n_2}$ be the corresponding random variables. Note that $X_{n_1}$ and $\tilde{Y}_{n_2}$ are two independent and uniform random variables taking each value $x_{n'}, n' \in [N]$ and $\tilde{y}_{n'}, n' \in [N]$ with probability $\frac{1}{N}$, respectively: $\mathbb{P}(X_{n_1} = x_{n'} \mid \tilde{D}) = \mathbb{P}(\tilde{Y}_{n_2} = \tilde{y}_{n'} \mid \tilde{D}) = \frac{1}{N}, \forall n' \in [N]$. Let $\mathcal{D}_{\tilde{Y}|\tilde{D}}$ be the distribution of $\tilde{Y}_{n_2}$ given the dataset $\tilde{D}$. Peer loss then has the following equivalent form in expectation:
$$\frac{1}{N}\sum_{n \in [N]} \mathbb{E}_{X_{n_1}, \tilde{Y}_{n_2} \mid \tilde{D}}\left[\ell(f(x_n), \tilde{y}_n) - \ell(f(X_{n_1}), \tilde{Y}_{n_2})\right] = \frac{1}{N}\sum_{n \in [N]} \left[\ell(f(x_n), \tilde{y}_n) - \sum_{n' \in [N]} \mathbb{P}(X_{n_1} = x_{n'} \mid \tilde{D})\, \mathbb{E}_{\mathcal{D}_{\tilde{Y}|\tilde{D}}}\left[\ell(f(x_{n'}), \tilde{Y})\right]\right] = \frac{1}{N}\sum_{n \in [N]} \left[\ell(f(x_n), \tilde{y}_n) - \mathbb{E}_{\mathcal{D}_{\tilde{Y}|\tilde{D}}}\left[\ell(f(x_n), \tilde{Y})\right]\right].$$
This result characterizes a new loss denoted by $\ell_{\mathrm{CA}}$:
$$\ell_{\mathrm{CA}}(f(x_n), \tilde{y}_n) := \ell(f(x_n), \tilde{y}_n) - \mathbb{E}_{\mathcal{D}_{\tilde{Y}|\tilde{D}}}\left[\ell(f(x_n), \tilde{Y})\right]. \quad (1)$$
Though not studied rigorously by Liu & Guo (2020), we show, under conditions³, that $\ell_{\mathrm{CA}}$ defined in Eqn. (1) encourages confident predictions⁴ from $f$ by analyzing the gradients: Theorem 1. For $\ell_{\mathrm{CA}}(\cdot)$, solutions satisfying $f_{x_n}[i] > 0, \forall i \in [K]$ are not locally optimal at $(x_n, \tilde{y}_n)$. See Appendix B.2 for the proof. In particular, in binary cases, we have the constraint $f(x_n)[0] + f(x_n)[1] = 1$.
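The expectation in Eqn. (1) can be estimated directly from the empirical noisy-label prior. A minimal NumPy sketch of $\ell_{\mathrm{CA}}$ under that assumption (all names hypothetical):

```python
import numpy as np

def ce(probs, y):
    return -np.log(probs[y])

def l_ca(probs, y_noisy, label_prior):
    """l_CA(f(x), y~) = CE on the observed noisy label minus the expected CE
    over a label drawn from the empirical noisy-label prior P(Y~ | D~)."""
    expected_ce = -np.sum(label_prior * np.log(probs))
    return ce(probs, y_noisy) - expected_ce

# Toy noisy dataset: the prior is just the empirical label frequency.
noisy_labels = np.array([0, 0, 1, 2, 1, 0])
K = 3
label_prior = np.bincount(noisy_labels, minlength=K) / len(noisy_labels)

uniform = np.full(K, 1.0 / K)            # uninformative prediction
confident = np.array([0.98, 0.01, 0.01]) # confident prediction
```

With a uniform prediction the two terms cancel and $\ell_{\mathrm{CA}} = 0$; a confident prediction drives the loss below zero, consistent with Theorem 1's claim that interior (non-confident) solutions are not locally optimal.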
Following Theorem 1, minimizing $\ell_{\mathrm{CA}}(f(x_n), \tilde{y}_n)$ w.r.t. $f$ under this constraint leads to either $f(x_n)[0] \to 1$ or $f(x_n)[1] \to 1$, indicating confident predictions. Therefore, the addition of the term $-\mathbb{E}_{\mathcal{D}_{\tilde{Y}|\tilde{D}}}[\ell(f(x_n), \tilde{Y})]$ helps improve the confidence of the learned classifier. Inspired by the above observation, we define the following confidence regularizer:
$$\text{Confidence Regularizer:}\quad \ell_{\mathrm{CR}}(f(x_n)) := -\beta \cdot \mathbb{E}_{\mathcal{D}_{\tilde{Y}|\tilde{D}}}\left[\ell(f(x_n), \tilde{Y})\right],$$
where $\beta$ is positive and $\ell(\cdot)$ refers to the CE loss. The prior probability $\mathbb{P}(\tilde{Y} \mid \tilde{D})$ is counted directly from the noisy dataset. In the remainder of this paper, $\ell(\cdot)$ denotes the CE loss by default. Why are confident predictions important? Intuitively, when the model fits the label noise, its predictions often become less confident, since the noise usually corrupts the signal encoded in the clean data. From this perspective, encouraging confident predictions works against fitting label noise. Compared to instance-independent noise, the difficulty of estimating instance-dependent noise rates largely prevents us from applying existing techniques. In addition, as shown by Manwani & Sastry (2013), the 0-1 loss function is more robust to instance-based noise but hard to optimize. To a certain degree, pushing toward confident predictions yields a differentiable loss function that approximates the 0-1 loss, and therefore restores the robustness property. Besides, as observed by Chatterjee (2020) and Zielinski et al. (2020), gradients from similar examples reinforce each other. When the overall label information is dominantly informative, i.e., $T_{ii}(X) > T_{ij}(X)$, DNNs will statistically receive more correct information. ³Detailed conditions for Theorem 1 are specified at the end of our main contents. ⁴Our observation can also help partially explain the robustness property of peer loss (Liu & Guo, 2020).
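A sketch of the regularizer itself, assuming the expectation is taken under the empirical prior $\mathbb{P}(\tilde{Y} \mid \tilde{D})$ and using an illustrative value of $\beta$ (the paper treats $\beta$ as a tunable hyperparameter):

```python
import numpy as np

BETA = 2.0  # hypothetical regularization weight; any beta > 0 is valid here

def ce(probs, y):
    return -np.log(probs[y])

def l_cr(probs, label_prior, beta=BETA):
    """Confidence regularizer: l_CR(f(x)) = -beta * E_{Y~ ~ P(Y~|D~)}[CE(f(x), Y~)].
    The expected CE enters with a NEGATIVE sign, so confident predictions
    (which inflate the expected CE over the non-predicted classes) are rewarded."""
    expected_ce = -np.sum(label_prior * np.log(probs))
    return -beta * expected_ce

def regularized_loss(probs, y_noisy, label_prior, beta=BETA):
    """CE on the observed noisy label plus the confidence regularizer."""
    return ce(probs, y_noisy) + l_cr(probs, label_prior, beta)
```

Because $\ell_{\mathrm{CR}}$ decreases as predictions sharpen, the regularized loss prefers a confident (correct) prediction over an uninformative one.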
Encouraging confident predictions discourages memorization of the noisy examples (it makes it harder for noisy labels to reduce the confidence of predictions), and therefore further helps DNNs learn the (clean) dominant information. $\ell_{\mathrm{CR}}$ is NOT entropy regularization Entropy regularization (ER) is a popular choice for improving the confidence of trained classifiers in the literature (Tanaka et al., 2018; Yi & Wu, 2019). Given a particular prediction probability $p$ for a class, the ER term is based on the function $-p \ln p$, whereas our $\ell_{\mathrm{CR}}$ is built on $\ln p$. We later show that $\ell_{\mathrm{CR}}$ offers favorable theoretical guarantees for training with instance-dependent label noise, while ER does not. In Appendix C.1, we present both theoretical and experimental evidence that $\ell_{\mathrm{CR}}$ serves as a better regularizer than ER.
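The distinction between the two functional forms can be seen numerically: $-p \ln p$ flattens near both $p \to 0$ and $p \to 1$, while $\ln p$, on which $\ell_{\mathrm{CR}}$ is built, is monotone and diverges as $p \to 0$. A small NumPy illustration:

```python
import numpy as np

p = np.linspace(0.01, 0.99, 99)  # prediction probability for one class
er_term = -p * np.log(p)         # entropy regularization builds on -p ln p
cr_term = np.log(p)              # l_CR builds on ln p (prior-weighted)
# -p ln p peaks at p = 1/e and vanishes at both ends, so its gradient fades
# exactly where predictions become confident or collapse; ln p is strictly
# increasing and diverges to -inf as p -> 0, so the (negatively weighted)
# CR term keeps exerting pressure toward confident predictions.
```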
The paper introduces a noise-robust loss function, CORES², motivated by peer loss. The novel loss adds a regularization term that promotes confident predictions and pushes the model's predictions away from the prior of the noisy labels. Using this loss function, the authors propose a dynamic sample sieve that separates the clean data from the corrupted data on the fly based on the magnitude of the CORES² loss: samples whose losses exceed an adaptive threshold are ruled out. Importantly, the sieving process successfully sieves out corrupted samples, both in theory (for a classifier better than random guessing) and in practice. The authors then show that the proposed CORES² loss can be decoupled under the instance-dependent noise setting, and prove that CORES² is noise-robust, meaning that minimizing it is equivalent to minimizing the original cross-entropy loss on the clean distribution. They also give a principled approach for choosing the hyperparameter $\beta$. Further, a consistency loss is applied to the corrupted samples after the sample sieve. The authors conduct extensive experiments on CIFAR-10, CIFAR-100, and Clothing1M under different noise settings; CORES² achieves SOTA results in all of them.
SP:b4e5b4a3546fdec14a958bbe0d387bce946396b0
The authors of the paper propose a new method, CORES² (COnfidence REgularized Sample Sieve), to tackle the important problem of learning under instance-dependent label noise. The proposed method, in essence, combines a confidence regularization term that encourages more confident predictions with a sieving process that removes the samples with large losses. Theoretical justification and empirical experiments are provided to demonstrate the effectiveness of the proposed method.
Cross-Modal Retrieval Augmentation for Multi-Modal Classification
1 INTRODUCTION . Neural networks augmented with non-parametric retrieval components have recently shown impressive results in NLP (Khandelwal et al., 2019; Guu et al., 2020; Lewis et al., 2020; Izacard & Grave, 2020). In this work, we train a state-of-the-art image-caption alignment model and utilize it in various retrieval-augmented multi-modal transformer architectures, achieving a significant improvement on visual question answering (VQA) over the baselines, including the winner of the VQA 2.0 2020 challenge. Retrieval components are promising because they allow for easy revision and expansion of their memory, as compared to their parametric, pre-training counterparts. They provide more interpretability, as well as direct factual consistency with trusted knowledge sources. In the multi-modal setting, retrieval augmentation allows for leveraging the strengths of text-based models (as evidenced by the strong performance of BERT-based models in vision-and-language (Lu et al., 2019; Li et al., 2019b; Kiela et al., 2019)) via cross-modal translation from images to text. Being able to seamlessly "hot swap" knowledge sources without the need to re-train the model affords a unique scalability not typically seen in the traditional deep learning literature. Nearest-neighbor methods are known to be strong baselines in the vision and language domain (Devlin et al., 2015). We introduce a simple yet effective novel dense cross-modal alignment architecture called DXR (Dense X-modal Retriever). DXR achieves state-of-the-art performance on both COCO (Chen et al., 2015) and Flickr30k (Young et al., 2014) image-caption retrieval, with respect to similar methods. We subsequently use DXR to augment several multi-modal transformer architectures with a retrieval component.
We show that retrieval augmentation yields impressive results for a variety of well-known multi-modal transformer architectures, ranging from VisualBERT (Li et al., 2019b) and ViLBERT (Lu et al., 2019), which use bounding-box features, to Movie+MCAN (Jiang et al., 2020), which uses grid features. We name our overall method XTRA, for X-modal Transformer Retrieval Augmentation. Specifically, our contributions are as follows:
• We introduce a novel image-caption retrieval architecture, DXR, that achieves state-of-the-art performance on COCO and Flickr30k, with respect to similar methods.
• We introduce a new retrieval-augmented multi-modal transformer architecture, XTRA, that achieves a significant improvement on VQA over the baselines. To our knowledge, this is the first work to showcase the promise of hybrid parametric and non-parametric models for the vision and language domain.
• We conduct extensive experiments to shed light on this novel approach. We explore different datasets for training the alignment model, as well as the effect of in-domain versus out-of-domain retrieval indices, the index size, and inference-time applications.
Our experiments show that our proposed method significantly improves over a variety of strong multi-modal baselines, and demonstrates superior results over pre-training. 2 RELATED WORK . Cross-Modal Retrieval Prior work in cross-modal retrieval can be divided into two primary categories: (i) methods that use grid features and/or vector representations of the embedding space, and (ii) methods that use detection features, sequence representations, or share information between the two modalities when computing the similarity metric. The first category consists of methods such as RRF (Liu et al., 2017) and DPC (Zheng et al., 2017), which use two convolutional network branches for image and text.
CMPM by Zhang & Lu (2018) introduced a pre-trained image backbone with a bi-directional LSTM to learn image and text embeddings. The most relevant work in this category is VSE++ (Faghri et al., 2017), which focuses on hard negative mining and a ranking loss. The second category generally exploits detection features, which introduces additional complexity. Methods such as TERN (Messina et al., 2020b), TERAN (Messina et al., 2020a), SAEM (Wu et al., 2019), and MMCA (Wei et al., 2020) use transformer modules to obtain modality-specific embeddings. TERAN, as well as SCAN (Lee et al., 2018), utilizes sequence similarities. SCO (Huang et al., 2018) and VSRN (Li et al., 2019a) learn, in addition to image-text alignment, to generate the caption from the image embedding. MMCA, as well as CAMP (Wang et al., 2019), fuses image and text information to obtain the final embeddings. Other methods, such as Unicoder-VL (Li et al., 2020a), Oscar (Li et al., 2020b), and UNITER (Chen et al., 2020), learn to align image and text by using positive and negative tuples of images and captions. While these models perform well, they suffer from high computational complexity, as we discuss in Sec. 3.4. External Knowledge Source Methods The use of an external knowledge source (KS) has gained much attention in the field of natural language processing (NLP), such as the work of Verga et al. (2020). Our work is inspired by that of Lewis et al. (2020), which introduced RAG, a generic approach for a variety of downstream NLP tasks that uses a learned retriever (DPR by Karpukhin et al. (2020)) to augment the inputs by marginalizing across several phrases retrieved from Wikipedia. In the multi-modal domain, previous efforts have focused on building different types of KS, such as the work of Zhu et al. (2014), Chen et al. (2013), Divvala et al. (2014), Sadeghi et al.
(2015), and Zhu et al. (2015), which use web information for the construction of the KS. Methods that use an external KS for a downstream task use a structured KS, such as the work of Narasimhan et al. (2018), Narasimhan & Schwing (2018), Wang et al. (2015), Wang et al. (2018), and Zhu et al. (2017). Zhu et al. (2017) introduced an iterative method for VQA tasks. Marino et al. (2019) introduced OK-VQA, a novel VQA dataset that requires the use of an external KS. Fan et al. (2020) applied a KS to multi-modal dialogue. In our work, we focus on a more natural KS, such as images and captions, which better reflect the data generated in newspapers and social media. Multi-modal Classification In this work, we investigate the potential advantages of using an external KS for the popular and challenging VQA domain, a multi-modal classification task. Current methods for VQA use pre-training on different datasets in order to gain better performance. In our experiments, we show performance for common methods such as VisualBERT (Li et al., 2019b), which concatenates the text and image modalities as an input to a pre-trained BERT (Devlin et al., 2018) model. ViLBERT (Lu et al., 2019) fuses the text and image modalities using co-attentional transformer layers. Other methods, such as Pythia (Jiang et al., 2018), VLBERT (Su et al., 2019), and MMBT (Kiela et al., 2019), can benefit from our method, as can more recent work such as Oscar (Li et al., 2020b) and UNITER (Chen et al., 2020), which use the alignment task for pre-training their models. In this paper, we choose to show our performance on the two common VisualBERT and ViLBERT models, and on the winner of the VQA 2.0 2020 challenge, Movie+MCAN (Jiang et al., 2020), which uses grid features instead of detection features, a modulated convolutional bottleneck for the image backbone, and MCAN (Yu et al., 2019) for fusion.
A similar method was introduced by Nguyen et al . ( 2020 ) . Our method is also applicable to methods such as Pythia ( Jiang et al. , 2018 ) and MMBT ( Kiela et al. , 2019 ) . 3 METHOD . Our methodology is composed of two disjoint parts : ( i ) for a given external knowledge source K , consisting of m modalities , we train a model ( i.e. , the Retriever ) to align between the different modalities . ( ii ) Given a knowledge source K and an alignment model , we train a downstream model ( i.e. , the Reader ) by augmenting its inputs with extra data from K . 3.1 CROSS-MODAL ALIGNMENT . LetK be a knowledge source consisting ofmmodalities , where each sample si = ( s0i , . . . , smi ) ∈ K is a tuple of m elements , corresponding to different modalities . Our alignment model encompasses m encoders Em , each composed of a feature-extraction module Fm , projection layer Pm , shared Transformer encoding layer T with attention pooling , and a normalization layer N : Em ( x ) = N ( T ( Pm ( Fm ( x ) ) ) ) ( 1 ) From this point , we will consider the two-modality case of images and captions as illustrated in Fig . 1 . For text and image feature extractors , F1 andF2 , we use a pre-trained BERT masked language model Devlin et al . ( 2018 ) , and a pre-trained ResNet152 CNN backbone on ImageNet , respectively . The images are represented with convolutional grid features , chosen for robustness and speed , and these are flattened across the spatial dimension . The projection layers Pm project each modality to a constant dimension d. The projected sequences are then forwarded to a shared Transformerencoding layer , and aggregated by an attention pooling layer , resulting in a vector representation for each modality . Finally , we normalize each vector using an L2 normalizer , projecting all embeddings to the unit-sphere . Following Faghri et al . ( 2017 ) , we only normalize the text embeddings because of image-caption imbalance ( see Sec . 4.1 ) . 
We train our dense cross-modal retriever ( DXR ) using a contrastive loss , specifically using an inbatch hinge penalty with hard negatives ( Faghri et al. , 2017 ) . Given a batch , consisting of b samples , s1 . . . sb , for each sample si , let s1i and s 2 i be the positive pairs and s 1 i and s 2 j 6=i the negative pairs . We compute the pair-wise similarity between the two modalities , using a dot product , denoted by π ( s1i , s 2 j ) = 〈s1i , s2j 〉 . The hard-negative in-batch hinge loss is then defined as : s2i ′ = max j 6=i π ( s1i , s 2 j ) ( 2 ) s 1 i ′ = max j 6=i π ( s1j , s 2 i ) ( 3 ) Lhard = ∑ i [ α+ π ( s1i , s 2 i ′ ) − π ( s1i , s2i ) ] + ∑ i [ α+ π ( s1i ′ , s2i ) − π ( s1i , s2i ) ] ( 4 ) where s1i ′ and s2i ′ are the hardest samples inside the batch , and α is the margin constant .
This paper explores a new direction: utilizing retrieved results (image-caption pairs) to improve downstream multi-modal learning tasks. The authors first pre-train a cross-modal model using contrastive learning on an image-caption dataset. They then use the pre-trained model to retrieve relevant items for an image or text input, and augment the input with the retrieved results for the downstream multi-modal tasks.
SP:fe52a638fdb309b8fcb1232b0f23e08c96965721
Cross-Modal Retrieval Augmentation for Multi-Modal Classification
1 INTRODUCTION. Neural networks augmented with non-parametric retrieval components have recently shown impressive results in NLP (Khandelwal et al., 2019; Guu et al., 2020; Lewis et al., 2020; Izacard & Grave, 2020). In this work, we train a state-of-the-art image-caption alignment model and utilize it in various retrieval-augmented multi-modal transformer architectures, achieving significant improvement over the baselines on visual question answering (VQA), including over the winner of the VQA 2.0 2020 challenge. Retrieval components are promising because they allow for easy revision and expansion of their memory, as compared to their parametric, pre-training counterparts. They also provide more interpretability, as well as direct factual consistency with trusted knowledge sources. In the multi-modal setting, retrieval augmentation allows for leveraging the strengths of text-based models, as evidenced by the strong performance of BERT-based models in vision-and-language (Lu et al., 2019; Li et al., 2019b; Kiela et al., 2019), via cross-modal translation from images to text. Being able to seamlessly "hot swap" knowledge sources without re-training the model affords a unique scalability not typically seen in the traditional deep learning literature. Nearest neighbor methods are known to be strong baselines in the vision and language domain (Devlin et al., 2015). We introduce a simple yet effective dense cross-modal alignment architecture called DXR (Dense X-modal Retriever). DXR achieves state-of-the-art performance on both COCO (Chen et al., 2015) and Flickr30k (Young et al., 2014) image-caption retrieval, with respect to similar methods. We subsequently use DXR to augment several multi-modal transformer architectures with a retrieval component.
We show that retrieval augmentation yields impressive results for a variety of well-known multi-modal transformer architectures, ranging from VisualBERT (Li et al., 2019b) and ViLBERT (Lu et al., 2019), which use bounding-box features, to Movie+MCAN (Jiang et al., 2020), which uses grid features. We name our overall method XTRA, for X-modal Transformer Retrieval Augmentation. Specifically, our contributions are as follows:
• We introduce a novel image-caption retrieval architecture, DXR, that achieves state-of-the-art performance on COCO and Flickr30k, with respect to similar methods.
• We introduce a new retrieval-augmented multi-modal transformer architecture, XTRA, that achieves significant improvement on VQA over the baselines. To our knowledge, this is the first work to showcase the promise of hybrid parametric and non-parametric models for the vision and language domain.
• We conduct extensive experiments to shed light on this novel approach. We explore different datasets for training the alignment model, as well as the effect of in-domain versus out-of-domain retrieval indices, the index size, and inference-time applications.
Our experiments show that our proposed method significantly improves over a variety of strong multi-modal baselines, and demonstrates superior results over pre-training.
2 RELATED WORK. Cross-Modal Retrieval Prior work in cross-modal retrieval can be divided into two primary categories: (i) methods that use grid features and/or vector representations of the embedding space, and (ii) methods that use detection features, sequence representations, or share information between the two modalities for computing the similarity metric. The first category consists of methods such as RRF (Liu et al., 2017) and DPC (Zheng et al., 2017), which use two convolutional network branches for image and text.
CMPM by Zhang & Lu (2018) introduced a pre-trained image backbone with a bi-directional LSTM to learn image and text embeddings. The most relevant work in this category is VSE++ (Faghri et al., 2017), which focuses on hard negative mining and a ranking loss. The second category generally exploits detection features, which adds complexity. Methods such as TERN (Messina et al., 2020b), TERAN (Messina et al., 2020a), SAEM (Wu et al., 2019) and MMCA (Wei et al., 2020) use transformer modules to obtain modality-specific embeddings. TERAN, as well as SCAN (Lee et al., 2018), utilizes sequence similarities. SCO (Huang et al., 2018) and VSRN (Li et al., 2019a) learn, in addition to image-text alignment, to generate the caption from the image embedding. MMCA, as well as CAMP (Wang et al., 2019), fuses image and text information to obtain the final embeddings. Other methods, such as Unicoder-VL (Li et al., 2020a), Oscar (Li et al., 2020b) and UNITER (Chen et al., 2020), learn to align image and text using positive and negative tuples of images and captions. While these models perform well, they suffer from high computational complexity, as we discuss in Sec. 3.4.
External Knowledge Source Methods The use of an external knowledge source (KS) has gained much attention in natural language processing (NLP), such as the work of Verga et al. (2020). Our work is inspired by that of Lewis et al. (2020), which introduced RAG, a generic approach for a variety of downstream NLP tasks that uses a learned retriever (DPR by Karpukhin et al. (2020)) to augment the inputs by marginalizing across several phrases retrieved from Wikipedia. In the multi-modal domain, previous efforts have focused on building different types of KS, such as the work of Zhu et al. (2014), Chen et al. (2013), Divvala et al. (2014), Sadeghi et al.
(2015) and Zhu et al. (2015), which use web information for the construction of the KS. Methods that use an external KS for a downstream task typically use a structured KS, such as the work of Narasimhan et al. (2018), Narasimhan & Schwing (2018), Wang et al. (2015), Wang et al. (2018) and Zhu et al. (2017). Zhu et al. (2017) introduced an iterative method for VQA tasks. Marino et al. (2019) introduced OK-VQA, a novel VQA dataset that requires the use of an external KS. Fan et al. (2020) applied a KS to multi-modal dialogue. In our work, we focus on a more natural KS, namely images and captions, which better reflects the data generated in newspapers and social media.
Multi-modal Classification In this work, we investigate the potential advantages of using an external KS for the popular and challenging VQA domain, a multi-modal classification task. Current methods for VQA use pre-training on different datasets in order to gain better performance. In our experiments, we show performance for common methods such as VisualBERT (Li et al., 2019b), which concatenates the text and image modalities as input to a pre-trained BERT (Devlin et al., 2018) model, and ViLBERT (Lu et al., 2019), which fuses text and image modalities using co-attentional transformer layers. Other methods such as Pythia (Jiang et al., 2018), VLBERT (Su et al., 2019) and MMBT (Kiela et al., 2019) can benefit from our method, as can more recent work such as Oscar (Li et al., 2020b) and UNITER (Chen et al., 2020), which use the alignment task for pre-training their models. In this paper, we choose to show our performance on the two common VisualBERT and ViLBERT models, and on the winner of the VQA 2.0 2020 challenge, Movie+MCAN (Jiang et al., 2020), which uses grid features instead of detection features, a modulated convolutional bottleneck for the image backbone, and MCAN (Yu et al., 2019) for fusion.
A similar method was introduced by Nguyen et al. (2020). Our method is also applicable to methods such as Pythia (Jiang et al., 2018) and MMBT (Kiela et al., 2019).
3 METHOD. Our methodology is composed of two disjoint parts: (i) for a given external knowledge source $K$, consisting of $m$ modalities, we train a model (i.e., the Retriever) to align the different modalities; (ii) given a knowledge source $K$ and an alignment model, we train a downstream model (i.e., the Reader) by augmenting its inputs with extra data from $K$.
3.1 CROSS-MODAL ALIGNMENT. Let $K$ be a knowledge source consisting of $m$ modalities, where each sample $s_i = (s_i^1, \dots, s_i^m) \in K$ is a tuple of $m$ elements, corresponding to the different modalities. Our alignment model comprises $m$ encoders $E_m$, each composed of a feature-extraction module $F_m$, a projection layer $P_m$, a shared Transformer encoding layer $T$ with attention pooling, and a normalization layer $N$:
$$E_m(x) = N(T(P_m(F_m(x)))) \quad (1)$$
From this point, we consider the two-modality case of images and captions, as illustrated in Fig. 1. For the text and image feature extractors, $F_1$ and $F_2$, we use a pre-trained BERT masked language model (Devlin et al., 2018) and a ResNet152 CNN backbone pre-trained on ImageNet, respectively. The images are represented with convolutional grid features, chosen for robustness and speed, which are flattened across the spatial dimension. The projection layers $P_m$ project each modality to a constant dimension $d$. The projected sequences are then forwarded to the shared Transformer encoding layer and aggregated by an attention pooling layer, resulting in a vector representation for each modality. Finally, we normalize each vector using an L2 normalizer, projecting all embeddings onto the unit sphere. Following Faghri et al. (2017), we only normalize the text embeddings because of image-caption imbalance (see Sec. 4.1).
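To make the encoder composition in Eq. (1) concrete, here is a minimal numpy sketch under simplifying assumptions: the pre-trained BERT/ResNet feature extractors $F_m$ are replaced by random per-token features, the shared Transformer layer $T$ is stubbed to the identity, and the attention-pooling query vector is random. Only the projection, pooling, and unit-sphere normalization steps are exercised.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # shared embedding dimension

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_pool(seq, w):
    """Aggregate a (length, d) sequence into one d-vector via attention
    scores computed against an (illustrative, random) query vector w."""
    scores = softmax(seq @ w)      # (length,)
    return scores @ seq            # (d,)

def encode(features, proj, attn_w):
    """Sketch of E(x) = N(T(P(F(x)))): project the per-token features F(x)
    to dimension d (P), pool with attention, and L2-normalize (N).
    The shared Transformer layer T is stubbed out (identity) for brevity."""
    projected = features @ proj                 # (length, d)
    pooled = attention_pool(projected, attn_w)  # (d,)
    return pooled / np.linalg.norm(pooled)      # unit-sphere projection

# Toy inputs: 5 text-token features (dim 16) and 4 image grid cells (dim 32).
text_feats = rng.normal(size=(5, 16))
img_feats = rng.normal(size=(4, 32))
text_emb = encode(text_feats, rng.normal(size=(16, d)), rng.normal(size=d))
img_emb = encode(img_feats, rng.normal(size=(32, d)), rng.normal(size=d))
print(text_emb.shape, img_emb.shape)  # both (8,): comparable via dot product
```

Because both embeddings land on the unit sphere in the same $d$-dimensional space, their dot product is a cosine similarity, which is what the retrieval step scores.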
We train our dense cross-modal retriever (DXR) using a contrastive loss, specifically an in-batch hinge penalty with hard negatives (Faghri et al., 2017). Given a batch consisting of $b$ samples $s_1, \dots, s_b$, for each sample $s_i$ let $(s_i^1, s_i^2)$ be the positive pair and $(s_i^1, s_{j \neq i}^2)$ the negative pairs. We compute the pair-wise similarity between the two modalities using a dot product, denoted by $\pi(s_i^1, s_j^2) = \langle s_i^1, s_j^2 \rangle$. The hard-negative in-batch hinge loss is then defined as:
$$s_i^{2\prime} = \arg\max_{j \neq i} \pi(s_i^1, s_j^2) \quad (2)$$
$$s_i^{1\prime} = \arg\max_{j \neq i} \pi(s_j^1, s_i^2) \quad (3)$$
$$\mathcal{L}_{\text{hard}} = \sum_i \left[ \alpha + \pi(s_i^1, s_i^{2\prime}) - \pi(s_i^1, s_i^2) \right]_+ + \sum_i \left[ \alpha + \pi(s_i^{1\prime}, s_i^2) - \pi(s_i^1, s_i^2) \right]_+ \quad (4)$$
where $s_i^{1\prime}$ and $s_i^{2\prime}$ are the hardest negative samples inside the batch, $[\cdot]_+ = \max(0, \cdot)$, and $\alpha$ is the margin constant.
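A minimal numpy implementation of this hard-negative in-batch hinge loss (in the spirit of VSE++; the toy embeddings and margin value below are illustrative, not taken from the paper):

```python
import numpy as np

def hard_negative_hinge_loss(t_embs, v_embs, alpha=0.2):
    """In-batch hinge loss with hard negatives. t_embs, v_embs: (b, d)
    text/image embeddings; row i of each matrix forms the positive pair,
    every cross-row combination is a negative."""
    sim = t_embs @ v_embs.T                # pi(s_i^1, s_j^2) for all i, j
    pos = np.diag(sim)                     # pi(s_i^1, s_i^2)
    off = ~np.eye(len(sim), dtype=bool)    # mask out the positives
    hardest_img = np.where(off, sim, -np.inf).max(axis=1)  # per text row
    hardest_txt = np.where(off, sim, -np.inf).max(axis=0)  # per image col
    return (np.maximum(0.0, alpha + hardest_img - pos).sum()
            + np.maximum(0.0, alpha + hardest_txt - pos).sum())

# A perfectly aligned toy batch satisfies the 0.2 margin, giving zero loss;
# a batch whose pairs are swapped is penalized.
aligned = np.eye(3)
print(hard_negative_hinge_loss(aligned, aligned))            # 0.0
swapped = np.eye(2)[::-1]
print(hard_negative_hinge_loss(np.eye(2), swapped) > 0.0)    # True
```

The two sums mirror Eq. (4): one hinge per text against its hardest in-batch image negative, and one per image against its hardest text negative.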
This paper proposes a cross-modal retrieval augmentation for multi-modal classification tasks (VQA). The authors first introduce a transformer-based image-caption retrieval architecture that achieves strong performance. Then, they propose to use the retrieval model to retrieve relevant visual and textual information as augmentation. The proposed method is evaluated on three existing methods (VisualBERT, ViLBERT, and Movie+MCAN) and shows good improvements over the baseline models.
SP:fe52a638fdb309b8fcb1232b0f23e08c96965721
XMixup: Efficient Transfer Learning with Auxiliary Samples by Cross-Domain Mixup
1 INTRODUCTION. The performance of deep learning algorithms in real-world applications is often limited by the size of the training datasets. Training a deep neural network (DNN) model with a small number of training samples usually leads to over-fitting with poor generalization performance. A common yet effective solution is to train DNN models under transfer learning (Pan et al., 2010) settings using large source datasets. The knowledge transfer from the source domain helps DNNs learn better features and achieve higher generalization performance for pattern recognition in the target domain (Donahue et al., 2014; Yim et al., 2017).
Backgrounds. For example, the paradigm of Donahue et al. (2014) proposes to first train a DNN model using the large (and possibly irrelevant) source dataset (e.g., ImageNet), then use the weights of the pre-trained model as the starting point of optimization and fine-tune the model using the target dataset. In this way, blessed by the power of large source datasets, the fine-tuned model is usually capable of handling the target task with better generalization performance. Furthermore, the authors of Yim et al. (2017) and Li et al. (2018; 2019) propose transfer learning algorithms that regularize the training procedure using the pre-trained models, so as to constrain the divergence of the weights and feature maps between the pre-trained and fine-tuned DNN models. Later, Chen et al. (2019) and Wan et al. (2019) introduce new algorithms that prevent such regularization from hurting transfer learning, where Chen et al. (2019) propose to truncate the tail spectrum of the batch of gradients while Wan et al. (2019) propose to truncate the ill-posed directions of the aggregated gradients.
In addition to the aforementioned strategies, a great number of methods have been proposed to transfer knowledge from a multi-task learning perspective, such as Ge & Yu (2017b) and Cui et al. (2018). More specifically, Seq-Train (Cui et al., 2018) proposes a two-phase approach, where the algorithm first picks auxiliary samples from the source dataset with respect to the target task, then pre-trains a model with the auxiliary samples and fine-tunes the model using the target dataset. Moreover, Co-Train (Ge & Yu, 2017b) adopts a multi-task co-training approach that simultaneously trains a shared backbone network using both source and target datasets with their corresponding separate fully-connected (FC) layers. While all the above algorithms enable knowledge transfer from source datasets to target tasks, they sometimes perform poorly due to the following critical technical issues.
• Catastrophic Forgetting and Negative Transfer. Most transfer learning algorithms (Donahue et al., 2014; Yim et al., 2017; Li et al., 2018; 2019) consist of two steps: pre-training and fine-tuning. Given the features that have been learned by the pre-trained models, either forgetting some good features during the fine-tuning process (catastrophic forgetting) (Chen et al., 2019) or preserving inappropriate features/filters that reject knowledge from the target domain (negative transfer) (Li et al., 2019; Wan et al., 2019) would hurt the performance of transfer learning. There is thus a need for a way to make compromises between the features learned from the source and target domains during the fine-tuning process, where multi-task learning with Seq-Train Cui et al.
(2018) and Co-Train Ge & Yu (2017b) might suggest feasible solutions to balance the knowledge learned from the source and target domains, by fine-tuning the model with a selected set of auxiliary samples (rather than the whole source dataset) (Cui et al., 2018) or by alternately learning features from both domains during fine-tuning (Ge & Yu, 2017b).
• Gradient Complexity for Seq-Train and Co-Train. Deep transfer learning algorithms based on multi-task learning are computationally inefficient. Though pre-trained models based on some key datasets, such as ImageNet, are ubiquitously available for free, multi-tasking algorithms usually need additional steps for knowledge transfer. Prior to the fine-tuning procedure on the target dataset, Seq-Train requires an additional step to select auxiliary samples and "mid-tunes" the pre-trained model using the selected auxiliary samples (Cui et al., 2018). Furthermore, Co-Train (Ge & Yu, 2017b) incurs additional backpropagation cost as the two datasets are combined. There is thus a need for a deep transfer learning algorithm that does not require an explicit "mid-tuning" procedure or additional backpropagation to learn from the source dataset.
Our Work. With both technical issues in mind, we aim to study efficient and effective deep transfer learning algorithms with low computational complexity from the multi-task learning perspective. We propose XMixup, namely Cross-domain Mixup, a novel deep transfer learning algorithm that enables knowledge transfer from source to target domains through low-cost mixup (Zhang et al., 2018b).
More specifically, given the source and target datasets for image classification tasks, XMixup runs deep transfer learning in two steps: (1) Auxiliary sample selection: XMixup pairs every class from the target dataset with a dedicated class in the source dataset, where the samples in the source class are considered auxiliary samples for the target class; then (2) Mixup with auxiliary samples and fine-tuning: XMixup randomly combines the samples from the paired classes of the two domains using the mixup strategy (Zhang et al., 2018a), and performs the fine-tuning process over the mixed-up data. To the best of our knowledge, this work makes three sets of contributions, as follows.
1. We study the problem of cross-domain deep transfer learning for DNN classifiers from the multi-task learning perspective, where the knowledge transfer from the source to the target task is considered a co-training procedure of the shared DNN layers using the target dataset and auxiliary samples (Ge & Yu, 2017b; Cui et al., 2018). We review the existing solutions (Donahue et al., 2014; Yim et al., 2017; Li et al., 2018; 2019), summarize the technical limitations of these algorithms, and particularly address the issues of catastrophic forgetting (Chen et al., 2019), negative transfer (Wan et al., 2019), and computational complexity.
2. In terms of methodology, we extend the use of mixup (Zhang et al., 2018b) to cross-domain knowledge transfer, where the source and target datasets have different sets of classes and the aim of transfer learning is to adapt to the classes in the target domain. While vanilla mixup augments the training data with rich features and regularizes stochastic training beyond empirical risk minimization (ERM), the proposed XMixup algorithm uses mixup to fuse samples from the source and target domains.
In this way, the catastrophic forgetting issue can be solved in part, as the model keeps learning from both domains, but at a lower cost compared to Chen et al. (2019). To control the effects of knowledge transfer, XMixup also offers a tuning parameter to trade off between the two domains in the mixup of samples (Zhang et al., 2018b).
3. We carry out extensive experiments using a wide range of source and target datasets, and compare the results of XMixup with a number of baseline algorithms, including fine-tuning with weight decay (L2) (Donahue et al., 2014), fine-tuning with L2-regularization on the starting point (L2-SP) (Li et al., 2018), Batch Singular Shrinkage (BSS) (Chen et al., 2019), Seq-Train (Cui et al., 2018), and Co-Train (Ge & Yu, 2017b). The experimental results show that XMixup outperforms all these algorithms with significant improvements in both efficiency and effectiveness.
Organization of the Paper The rest of this paper is organized as follows. In Section 2, we review the relations of our work to existing algorithms, where the most relevant studies are discussed. We then present the algorithm design in Section 3, and the experiments with overall comparison results in Section 4. We discuss the details of the algorithm with case studies and ablation studies in Section 5, and conclude the paper in Section 6.
2 RELATED WORK. The studies most relevant to our algorithm are Donahue et al. (2014), Chen et al. (2019), Cui et al. (2018), Ge & Yu (2017b), Zhang et al. (2018b), and Xu et al. (2020). All these algorithms, as well as the proposed XMixup algorithm, start transfer learning from a pre-trained model that has been well-trained on the source dataset. However, XMixup makes unique technical contributions in comparison to these works. Compared to Donahue et al.
(2014), which fine-tunes the pre-trained model using the target set only and might cause the so-called catastrophic forgetting effect, XMixup proposes to fine-tune the pre-trained model using mixup data from both domains. Compared to Chen et al. (2019), which uses a computationally expensive singular value decomposition (SVD) on the batch gradients to avoid catastrophic forgetting and negative transfer effects, XMixup employs a low-cost mixup strategy to achieve similar goals. Compared to Cui et al. (2018), the proposed XMixup algorithm adopts a similar procedure (pairing the classes in the source and target domains) to pick auxiliary samples from the source domain for knowledge transfer. However, XMixup further mixes up the target training set with the auxiliary samples and fine-tunes the pre-trained model on this data in an end-to-end manner, rather than using a two-step approach for fine-tuning (Cui et al., 2018). Compared to Ge & Yu (2017b), which combines the source and target tasks to fine-tune the shared DNN backbone, the proposed algorithm mixes up data from the two domains and boosts performance through a simple fine-tuning process over the mixup data with low computational cost. Finally, we extend the vanilla mixup strategy (Zhang et al., 2018b) to transfer learning applications, where in terms of methodology we propose to pair the classes of the two domains and perform mixup over the selected auxiliary samples for improved performance. Mixup strategies have also been used in Xu et al. (2020) for unsupervised domain adaptation. Since the target task is assumed to share the same set of classes as the source domain in Xu et al. (2020), selecting auxiliary samples or pairing source classes to fit the classes in the target domain is not required.
3 XMIXUP: CROSS-DOMAIN MIXUP FOR DEEP TRANSFER LEARNING.
Given the source and target datasets and a pre-trained model (that has been well-trained on the source dataset), XMixup performs deep transfer learning in the following two steps.
Auxiliary Sample Selection Given a source dataset $S$ with $m$ classes and a target training dataset $T$ with $n$ classes, XMixup assumes the source domain usually has more classes than the target one (i.e., $m > n$), and it intends to pair every class in the target training dataset with a unique and dedicated class in the source dataset (a one-to-one pairing from target to source classes). More specifically, given a pre-trained model, XMixup first passes every sample from the two datasets through the pre-trained model and obtains the features extracted from the last layer of the feature extractor. Then, XMixup groups the features of the samples according to the ground-truth classes in their datasets, and estimates the centroid of the features for every class in both datasets. Thus, for every class $c$ in the source or target dataset, XMixup represents the class as the centroid of the features obtained from the pre-trained model $\Theta_{\text{pretrain}}$ over the samples $x_i$ in the class $c$, i.e.,
$$\mathrm{centroid}(c) = \frac{1}{|c|} \sum_{x_i \in c} \Phi(x_i, \Theta_{\text{pretrain}}), \quad \text{for } c \in S \text{ or } c \in T. \quad (1)$$
Given two classes $c_s$ and $c_t$ in the source and target domains respectively, we consider the similarity between the two classes as the potential for knowledge transfer, where XMixup measures the similarity using the cosine measure between the centroids of the two classes, such that $\mathrm{dist}(c_s, c_t) = \mathrm{cosine}(\mathrm{centroid}(c_s), \mathrm{centroid}(c_t))$. In this way, auxiliary sample selection reduces to searching for the optimal transport between the class sets of $S$ and $T$ under the pre-defined distance measure.
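The centroid and cosine-similarity computation of Eq. (1) can be sketched in a few lines of numpy. The 2-d "penultimate-layer" features and class labels below are toy values for illustration, not real extracted features:

```python
import numpy as np

def class_centroids(features, labels):
    """centroid(c) = (1/|c|) * sum of the penultimate-layer features
    of the samples belonging to class c."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def cosine(u, v):
    """Cosine similarity between two centroid vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy features: two source classes and two target classes in 2-d.
src_feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
src_labels = np.array([0, 0, 1, 1])
tgt_feats = np.array([[0.8, 0.2], [0.2, 0.8]])
tgt_labels = np.array([0, 1])

src_c = class_centroids(src_feats, src_labels)
tgt_c = class_centroids(tgt_feats, tgt_labels)
# Score every (source, target) class pair by centroid cosine similarity.
sims = {(s, t): cosine(src_c[s], tgt_c[t]) for s in src_c for t in tgt_c}
best_for_t0 = max(src_c, key=lambda s: sims[(s, 0)])
print(best_for_t0)  # source class 0 is most similar to target class 0
```

These pairwise similarities are exactly the inputs the class-pairing step needs: the higher the centroid cosine, the stronger the candidate source class for knowledge transfer.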
Hereby XMixup intends to find a one-to-one mapping $P^* : T \to S$, such that
$$P^* \leftarrow \arg\min_{P \subset (S \times T) \cap \mathrm{O2O}} \sum_{c_t \in T} \mathrm{dist}(c_t, P(c_t)), \quad (2)$$
where $S \times T$ refers to the Cartesian product of the target and source class sets, $\mathrm{O2O}$ refers to the constraint of one-to-one mappings, and $P(c_t)$ maps the target class to a unique class from the source domain. Note that $P^*$ refers to the optimal mapping that minimizes the overall distance, while XMixup solves the optimization problem using a simple greedy search (Cui et al., 2018) to pursue a robust solution, denoted $P_{\text{greedy}}$, with low complexity. Compared to XMixup, the Seq-Train algorithm (Cui et al., 2018) uses a greedy algorithm to pair the source/target classes via the Earth Mover's Distance (EMD), which might be inappropriate in our transfer learning setting.
Cross-domain Mixup with Auxiliary Samples and Fine-tuning Given the one-to-one pairing $P_{\text{greedy}}$ from target to source classes, XMixup carries out the fine-tuning process over the two datasets. In every iteration of fine-tuning, XMixup first picks a mini-batch of training samples $B$ drawn from the target dataset $T$; then, for every sample $x_t$ in the batch $B$, the algorithm retrieves the class of $x_t$ as $x_t.\mathrm{class}$ and randomly draws one sample $x_s$ from the paired class of $x_t.\mathrm{class}$, such that
$$x_s \overset{\text{i.i.d.}}{\sim} P_{\text{greedy}}(x_t.\mathrm{class}), \quad \forall x_t \in B. \quad (3)$$
We consider $x_s$ an auxiliary sample of $x_t$ in the current iteration of fine-tuning. XMixup then mixes up the two samples, as well as their labels, through a linear combination with a trade-off parameter $\lambda$ drawn from the Beta distribution $\mathrm{Beta}(\alpha, \beta)$, such that
$$x = \lambda x_t + (1 - \lambda) x_s, \quad y = \lambda y_t + (1 - \lambda) y_s, \quad \text{and} \quad \lambda \overset{\text{i.i.d.}}{\sim} \mathrm{Beta}(\alpha, \beta). \quad (4)$$
In this way, XMixup augments the original training sample $(x_t, y_t)$ from the target domain using the auxiliary sample $(x_s, y_s)$ from the paired source class, for knowledge transfer purposes.
XMixup fine-tunes the pre-trained model Θpretrain using the mixup samples accordingly .
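Putting the two steps together, the greedy class pairing and the per-batch mixup of Eqs. (3)-(4) can be sketched in pure Python. Everything here is illustrative: scalar "images", hypothetical class names, and a simple ascending-distance greedy rule standing in for the paper's greedy search:

```python
import random

def greedy_pairing(dist):
    """Greedily assign each target class a unique source class by ascending
    centroid distance -- a low-cost stand-in for the optimal mapping P*.
    dist: dict mapping (target_class, source_class) -> distance."""
    pairs, used = {}, set()
    for (t, s), _ in sorted(dist.items(), key=lambda kv: kv[1]):
        if t not in pairs and s not in used:
            pairs[t] = s
            used.add(s)
    return pairs

def xmixup_batch(batch, source_by_class, pairing, a=2.0, b=2.0):
    """Mix every target sample with a random auxiliary sample drawn from
    its paired source class: x = lam*x_t + (1-lam)*x_s, labels likewise."""
    mixed = []
    for x_t, y_t in batch:
        x_s, y_s = random.choice(source_by_class[pairing[y_t]])
        lam = random.betavariate(a, b)   # lambda ~ Beta(alpha, beta)
        mixed.append((lam * x_t + (1 - lam) * x_s,
                      {y_t: lam, y_s: 1 - lam}))  # soft label
    return mixed

# Hypothetical classes: target {cat, car} paired against source {lynx, truck}.
pairing = greedy_pairing({("cat", "lynx"): 0.1, ("cat", "truck"): 0.9,
                          ("car", "lynx"): 0.8, ("car", "truck"): 0.2})
print(pairing)  # → {'cat': 'lynx', 'car': 'truck'}

random.seed(0)
out = xmixup_batch([(1.0, "cat")], {"lynx": [(5.0, "lynx")]}, pairing)
x, y = out[0]
print(1.0 < x < 5.0, abs(sum(y.values()) - 1.0) < 1e-9)  # → True True
```

The mixed sample interpolates between the target image and its auxiliary source image, and the soft label keeps both class weights summing to one, matching Eq. (4).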
This paper proposes XMixup, a strategy for improving transfer learning in neural networks. Specifically, XMixup applies mixup between each target sample and a source sample from the class pre-determined to be closest to the target sample's class. Experiments on transfer learning from ImageNet-pre-trained models to six smaller image classification datasets demonstrate that XMixup outperforms the baseline approaches.
SP:7cef694906438e793f2303852173109b603e0dd5
XMixup: Efficient Transfer Learning with Auxiliary Samples by Cross-Domain Mixup
1 INTRODUCTION . Performance of deep learning algorithms in real-world applications is often limited by the size of training datasets . Training a deep neural network ( DNN ) model with a small number of training samples usually leads to the over-fitting issue with poor generalization performance . A common yet effective solution is to train DNN models under transfer learning Pan et al . ( 2010 ) settings using large source datasets . The knowledge transfer from the source domain helps DNNs learn better features and acquire higher generalization performance for the pattern recognition in the target domain Donahue et al . ( 2014 ) ; Yim et al . ( 2017 ) . Backgrounds . For example , the paradigm Donahue et al . ( 2014 ) proposes to first train a DNN model using the large ( and possibly irrelevant ) source dataset ( e.g . ImageNet ) , then uses the weights of the pre-trained model as the starting point of optimization and fine-tunes the model using the target dataset . In this way , blessed by the power of large source datasets , the fine-tuned model is usually capable of handling the target task with better generalization performance . Furthermore , authors in Yim et al . ( 2017 ) ; Li et al . ( 2018 ; 2019 ) propose transfer learning algorithms that regularize the training procedure using the pre-trained models , so as to constrain the divergence of the weights and feature maps between the pre-trained and fine-tuned DNN models . Later , the work Chen et al . ( 2019 ) ; Wan et al . ( 2019 ) introduces new algorithms that prevent the regularization from the hurts to transfer learning , where Chen et al . ( 2019 ) proposes to truncate the tail spectrum of the batch of gradients while Wan et al . ( 2019 ) proposes to truncate the ill-posed direction of the aggregated gradients . 
In addition to the aforementioned strategies , a great number of methods have been proposed to transfer knowledge from the multi-task learning perspectives , such as Ge & Yu ( 2017b ) ; Cui et al . ( 2018 ) . More specifically , Seq-Train Cui et al . ( 2018 ) proposes a two phase approach , where the algorithm first picks up auxiliary samples from the source datasets with respect to the target task , then pre-train a model with the auxiliary samples and fine-tune the model using the target dataset . Moreover , Co-Train Ge & Yu ( 2017b ) adopts a multi-task co-training approach to simultaneously train a shared backbone network using both source and target datasets and their corresponding separate Fully-Connected ( FC ) layers . While all above algorithms enable knowledge transfer from source datasets to target tasks , they unfortunately perform poorly , sometimes , due to the critical technical issues as follows . • Catastrophic Forgetting and Negative Transfer . Most transfer learning algorithms Donahue et al . ( 2014 ) ; Yim et al . ( 2017 ) ; Li et al . ( 2018 ; 2019 ) consist of two steps – pre-training and fine-tuning . Given the features that have been learned in the pre-trained models , either forgetting some good features during the fine-tuning process ( catastrophic forgetting ) Chen et al . ( 2019 ) or preserving the inappropriate features/filters to reject the knowledge from the target domain ( negative transfer ) Li et al . ( 2019 ) ; Wan et al . ( 2019 ) would hurt the performance of transfer learning . In this way , there might need a way to make compromises between the features learned from both source/target domains during the fine-tuning process , where multi-task learning with Seq-Train Cui et al . 
(2018) and Co-Train Ge & Yu (2017b) might suggest feasible solutions for balancing the knowledge learned from the two domains, either by fine-tuning the model with a selected set of auxiliary samples (rather than the whole source dataset) Cui et al. (2018) or by alternately learning features from both domains during fine-tuning Ge & Yu (2017b). • Gradient Complexity for Seq-Train and Co-Train. Deep transfer learning algorithms based on multi-task learning are computationally inefficient. Though pre-trained models on key datasets such as ImageNet are ubiquitously available for free, multi-task algorithms usually need additional steps for knowledge transfer. Prior to fine-tuning on the target dataset, Seq-Train requires an additional step that selects auxiliary samples and "mid-tunes" the pre-trained model on them Cui et al. (2018). Furthermore, Co-Train Ge & Yu (2017b) incurs additional backpropagation cost in situ, as the two datasets are combined. A deep transfer learning algorithm is therefore desirable that requires neither an explicit "mid-tuning" procedure nor additional backpropagation to learn from the source dataset. Our Work. With both technical issues in mind, we aim to design efficient and effective deep transfer learning algorithms with low computational complexity from the multi-task learning perspective. We propose XMixup, namely Cross-domain Mixup, a novel deep transfer learning algorithm that enables knowledge transfer from source to target domains through low-cost Mixup Zhang et al. (2018b).
More specifically, given the source and target datasets for image classification tasks, XMixup runs deep transfer learning in two steps: (1) Auxiliary sample selection: XMixup pairs every class from the target dataset with a dedicated class in the source dataset, where the samples in the source class serve as auxiliary samples for the target class; then (2) Mixup with auxiliary samples and fine-tuning: XMixup randomly combines samples from the paired classes of the two domains using the mixup strategy Zhang et al. (2018a) and performs fine-tuning over the mixed data. To the best of our knowledge, this work makes three sets of contributions, as follows. 1. We study cross-domain deep transfer learning for DNN classifiers from the multi-task learning perspective, where knowledge transfer from the source to the target task is treated as a co-training procedure for the shared DNN layers using the target dataset and auxiliary samples Ge & Yu (2017b); Cui et al. (2018). We review the existing solutions Donahue et al. (2014); Yim et al. (2017); Li et al. (2018; 2019), summarize their technical limitations, and pay particular attention to catastrophic forgetting Chen et al. (2019), negative transfer Wan et al. (2019), and computational complexity. 2. In terms of methodology, we extend Mixup Zhang et al. (2018b) to cross-domain knowledge transfer, where the source and target datasets have different sets of classes and the aim of transfer learning is to adapt to the classes of the target domain. While vanilla mixup augments the training data with rich features and regularizes stochastic training beyond empirical risk minimization (ERM), the proposed XMixup algorithm uses mixup to fuse samples from the source and target domains.
In this way, the catastrophic forgetting issue can be alleviated, as the model keeps learning from both domains, at lower cost than Chen et al. (2019). To control the effect of knowledge transfer, XMixup also offers a tuning parameter that trades off the two domains in the mixup of samples Zhang et al. (2018b). 3. We carry out extensive experiments on a wide range of source and target datasets, comparing XMixup with a number of baseline algorithms, including fine-tuning with weight decay (L2) Donahue et al. (2014), fine-tuning with L2 regularization toward the starting point (L2-SP) Li et al. (2018), Batch Singular Shrinkage (BSS) Chen et al. (2019), Seq-Train Cui et al. (2018), and Co-Train Ge & Yu (2017b). The experimental results show that XMixup outperforms all these algorithms, with significant improvements in both efficiency and effectiveness. Organization of the Paper. The rest of this paper is organized as follows. Section 2 reviews the relations between our work and existing algorithms, discussing the most relevant studies. We present the algorithm design in Section 3 and the experiments with overall comparison results in Section 4. We discuss details of the algorithm with case studies and ablation studies in Section 5, and conclude the paper in Section 6. 2 RELATED WORK. The studies most relevant to our algorithm are Donahue et al. (2014); Chen et al. (2019); Cui et al. (2018); Ge & Yu (2017b); Zhang et al. (2018b); Xu et al. (2020). All these algorithms, as well as the proposed XMixup algorithm, start transfer learning from a pre-trained model that has been well trained on the source dataset. However, XMixup makes unique technical contributions in comparison to these works. Compared to Donahue et al.
(2014), which fine-tunes the pre-trained model using the target set only and may cause catastrophic forgetting, XMixup fine-tunes the pre-trained model on mixed data from both domains. Compared to Chen et al. (2019), which applies a computationally expensive singular value decomposition (SVD) to the batch gradients to avoid catastrophic forgetting and negative transfer, XMixup employs the low-cost mixup strategy to achieve similar goals. Compared to Cui et al. (2018), XMixup adopts a similar procedure (pairing the classes of the source and target domains) to pick auxiliary samples from the source domain for knowledge transfer; XMixup, however, further mixes the target training set with the auxiliary samples and fine-tunes the pre-trained model on the mixed data in an end-to-end manner, rather than using a two-step fine-tuning approach Cui et al. (2018). Compared to Ge & Yu (2017b), which combines the source and target tasks to fine-tune a shared DNN backbone, the proposed algorithm mixes data from the two domains and boosts performance through a simple fine-tuning process over the mixed data at low computational cost. Finally, we extend the vanilla mixup strategy Zhang et al. (2018b) to transfer learning: in terms of methodology, we propose to pair the classes of the two domains and perform mixup over the selected auxiliary samples for improved performance. Mixup strategies have also been used in Xu et al. (2020) for unsupervised domain adaptation; since the target task there is assumed to share the same set of classes as the source domain, selecting auxiliary samples or pairing source classes to target classes is not required. 3 XMIXUP: CROSS-DOMAIN MIXUP FOR DEEP TRANSFER LEARNING.
Given the source and target datasets and a model pre-trained on the source dataset, XMixup performs deep transfer learning in the following two steps. Auxiliary Sample Selection. Given a source dataset S with m classes and a target training dataset T with n classes, XMixup assumes the source domain usually has more classes than the target one (i.e., m > n), and pairs every class in the target training dataset with a unique, dedicated class in the source dataset (a one-to-one pairing from target to source classes). More specifically, given a pre-trained model, XMixup first passes every sample from the two datasets through the model and obtains the features extracted from the last layer of the feature extractor. It then groups the features by ground-truth class and estimates the centroid of the features for every class in both datasets. That is, for every class c in the source or target dataset, XMixup represents the class by the centroid of the features Φ(x_i, Θ_pretrain) over all samples x_i in c:

centroid(c) = (1/|c|) ∑_{x_i ∈ c} Φ(x_i, Θ_pretrain), for c ∈ S or c ∈ T. (1)

Given two classes c_s and c_t in the source and target domains respectively, we regard the similarity between the two classes as the potential for knowledge transfer, and XMixup measures this similarity by the cosine between the two centroids: dist(c_s, c_t) = cosine⟨centroid(c_s), centroid(c_t)⟩. In this way, auxiliary sample selection reduces to searching for the optimal transport between the class sets of S and T under this pre-defined distance measure.
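Equation (1) is a per-class mean of feature vectors. The following is a minimal sketch of that step (the function name `class_centroids` is ours, and plain Python lists stand in for the extracted features Φ(x_i, Θ_pretrain)):

```python
def class_centroids(samples):
    """samples: list of (feature_vector, class_label) pairs.
    Returns the per-class mean feature vector (Eq. 1)."""
    sums, counts = {}, {}
    for feats, label in samples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for k, v in enumerate(feats):
            acc[k] += v
        counts[label] = counts.get(label, 0) + 1
    # divide each accumulated sum by the class size |c|
    return {c: [v / counts[c] for v in acc] for c, acc in sums.items()}
```

In the actual algorithm the features would come from a forward pass of the pre-trained feature extractor over each dataset.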
Hereby XMixup seeks a one-to-one mapping P*: T → S such that

P* ← argmin_{P ⊂ (S × T) ∩ O2O} ∑_{c_t ∈ T} dist(c_t, P(c_t)), (2)

where S × T is the Cartesian product of the source and target class sets, O2O denotes the one-to-one constraint, and P(c_t) maps the target class to a unique class from the source domain. Note that P* is the optimal mapping that minimizes the overall distance, while XMixup solves the optimization problem with a simple greedy search Cui et al. (2018) to obtain a robust solution, denoted P_greedy, at low complexity. In contrast, the Seq-Train algorithm Cui et al. (2018) uses a greedy algorithm to pair the source and target classes via the Earth Mover's Distance (EMD), which may be inappropriate in our transfer learning setting. Cross-domain Mixup with Auxiliary Samples and Fine-tuning. Given the one-to-one pairing P_greedy from target to source classes, XMixup carries out fine-tuning over the two datasets. In every iteration, XMixup first draws a mini-batch B of training samples from the target dataset T; then, for every sample x_t in B, the algorithm retrieves the class of x_t, denoted x_t.class, and randomly draws one sample x_s from the paired source class:

x_s ~ P_greedy(x_t.class), for every x_t ∈ B. (3)

We regard x_s as an auxiliary sample for x_t in the current iteration of fine-tuning. XMixup then mixes the two samples, as well as their labels, through a linear combination with a trade-off parameter λ drawn from the Beta distribution Beta(α, β):

x = λ x_t + (1 − λ) x_s,  y = λ y_t + (1 − λ) y_s,  λ ~ Beta(α, β). (4)

In this way, XMixup augments the original training sample (x_t, y_t) from the target domain with the auxiliary sample (x_s, y_s) from the paired source class, for knowledge transfer.
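The greedy surrogate for the optimal one-to-one mapping P* can be sketched as below. This is our own minimal version (names `cosine` and `greedy_pairing` are assumptions, not the paper's code), treating a larger cosine similarity between centroids as a shorter class distance, i.e. higher transfer potential:

```python
def cosine(u, v):
    # cosine similarity between two feature centroids
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def greedy_pairing(target_centroids, source_centroids):
    """Pair each target class with a distinct source class,
    most-similar-first, as a greedy stand-in for P*."""
    candidates = sorted(
        ((cosine(tc, sc), t, s)
         for t, tc in target_centroids.items()
         for s, sc in source_centroids.items()),
        reverse=True)
    used_sources, pairing = set(), {}
    for _sim, t, s in candidates:
        if t not in pairing and s not in used_sources:
            pairing[t] = s
            used_sources.add(s)
    return pairing
```

An exact minimizer of Eq. (2) would instead solve a bipartite assignment problem (e.g. with the Hungarian algorithm); the greedy pass is the low-complexity alternative the paper opts for.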
XMixup then fine-tunes the pre-trained model Θ_pretrain using the mixed samples.
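The data preparation of one fine-tuning iteration (Eqs. 3 and 4) can be sketched as follows, assuming toy list-valued inputs and one-hot labels in place of image tensors; the names `xmixup_batch`, `source_by_class`, and `pairing` are ours:

```python
import random

def mix(u, v, lam):
    # convex combination lam*u + (1 - lam)*v, element-wise
    return [lam * a + (1 - lam) * b for a, b in zip(u, v)]

def xmixup_batch(batch, source_by_class, pairing, alpha=2.0, beta=2.0):
    """batch: list of (x_t, y_t, class_t) target samples.
    For each target sample, draw a random auxiliary sample from its
    paired source class and mix inputs and labels with
    lambda ~ Beta(alpha, beta), as in Eq. (4)."""
    mixed = []
    for x_t, y_t, c_t in batch:
        x_s, y_s = random.choice(source_by_class[pairing[c_t]])  # Eq. (3)
        lam = random.betavariate(alpha, beta)
        mixed.append((mix(x_t, x_s, lam), mix(y_t, y_s, lam)))
    return mixed
```

In the real algorithm the mixed batch would then be fed through the network and the model updated by backpropagation, exactly as in ordinary mixup training.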
This paper proposes a simple variant of the mixup training mechanism for transfer learning problems: cross-domain mixup (XMixup). The key idea is to mix up training samples from both domains, where the cross-domain pairs are generated by a nearest-centroid assignment between classes. Experiments on several datasets show its effectiveness in transfer learning compared to some SOTA methods.
Autoencoder Image Interpolation by Shaping the Latent Space
Autoencoders represent an effective approach for computing the underlying factors characterizing datasets of different types. The latent representations of autoencoders have been studied in the context of enabling interpolation between data points by decoding convex combinations of latent vectors. This interpolation, however, often leads to artifacts or produces unrealistic results during reconstruction. We argue that these incongruities are due to the structure of the latent space and to the fact that such naively interpolated latent vectors deviate from the data manifold. In this paper, we propose a regularization technique that shapes the latent representation to follow a manifold that is consistent with the training images and that drives the manifold to be smooth and locally convex. This regularization not only enables faithful interpolation between data points, as we show herein, but can also be used as a general regularization technique to avoid overfitting or to produce new samples for data augmentation. 1 INTRODUCTION. Given a set of data points, data interpolation or extrapolation aims at predicting novel data points between given samples (interpolation) or outside the sample range (extrapolation). Faithful interpolation between sampled data can be seen as a measure of the generalization capacity of a learning system (Berthelot et al., 2018). In the context of computer vision and computer graphics, data interpolation may refer to generating novel views of an object between two given views or predicting in-between animation frames from key frames. Interpolation that produces novel views of a scene requires input such as the geometric and photometric parameters of existing objects, camera parameters, and additional scene components, such as lighting and the reflective characteristics of nearby objects.
Unfortunately, these characteristics are not always available or are difficult to extract in real-world scenarios. Thus, in such cases, we can apply data-driven interpolation, deduced from a sampled dataset drawn from the scene under various acquisition parameters. The task of data interpolation is to extract new samples (possibly continuously) between known data samples. Clearly, linear interpolation between two images in the input (image) domain does not work, as it produces a cross-dissolve effect between the intensities of the two images. Adopting the manifold view of data (Goodfellow et al., 2016; Verma et al., 2018; Bengio et al., 2013), this task can be seen as sampling new data points along the geodesic path between the given points. The problem is that this manifold is unknown in advance and has to be approximated from the given data. Alternatively, adopting the probabilistic perspective, interpolation can be viewed as drawing samples from highly probable areas of the data space. One fascinating property of unsupervised learning is the network's ability to reveal the underlying factors controlling a given dataset. Autoencoders (Doersch, 2016; Kingma & Welling, 2013; Goodfellow et al., 2016; Kramer, 1991; Vincent et al., 2010) represent an effective approach for exposing these factors. Researchers have demonstrated the ability to interpolate between data points by decoding a convex sum of latent vectors (Shu et al., 2018; Mathieu et al., 2016); however, this interpolation often incorporates visible artifacts during reconstruction. To illustrate the problem, consider the following example: a scene is composed of a vertical pole at the center of a flat plane (Figure 1, left). A single light source illuminates the scene and, accordingly, the pole projects a shadow onto the plane. The position of the light source can vary along the upper hemisphere.
Hence, the underlying parameters controlling the generated scene are (θ, φ), the elevation and azimuth, respectively. The interaction between the light and the pole produces a cast shadow whose direction and length are determined by the light direction. A set of images of this scene is acquired from a fixed viewing position (from above) with various lighting directions. Our goal in this example is to train a model that is capable of interpolating between two given images. Figure 1, top row, depicts a set of interpolated images between the source image (left) and the target image (right), where the interpolation is performed in the input domain. As illustrated, the interpolation is not natural, as it produces cross-dissolve effects in image intensities. Training a standard autoencoder and applying linear interpolation in its latent space generates images that are much more realistic (Figure 1, bottom row). Nevertheless, this interpolation is not perfect, as visible artifacts occur in the interpolated images. The source of these artifacts can be investigated by closely inspecting the 2D manifold embedded in the latent space. Figure 2 shows two manifolds embedded in latent spaces, one with data embedded in a 2D latent space (left plot) and one with data embedded in a 3D latent space (second plot from the left). In both cases, the manifolds are 2D and are generated by vanilla autoencoders. The grid lines represent the (θ, φ) parameterization. It can be seen that the encoders produce non-smooth and non-convex surfaces in 2D as well as in 3D. Thus, linear interpolation between two data points inevitably produces in-between points outside of the manifold. In practice, the decoded images of such points are unpredictable and may contain unrealistic artifacts. This issue is demonstrated in the two right images in Figure 2.
When the interpolated point is on the manifold (the empty circle denoted 'A'), a faithful image is generated by the decoder (second image from the right). When the interpolated point departs from the manifold (the circle denoted 'B'), the resulting image is unpredictable (right image). In this paper, we argue that the common statistical view of autoencoders is not appropriate when dealing with data generated from continuous factors. Instead, the manifold structure of continuous data must be considered, taking into account the geometry and shape of the manifold. Accordingly, we propose a new interpolation regularization mechanism consisting of an adversarial loss, a cycle-consistency loss, and a smoothness loss. The adversarial loss drives the interpolated points to look realistic, as it is optimized against a discriminator that learns to tell real data points apart from interpolated ones. The cycle-consistency and smoothness losses encourage smooth interpolations between data points. We show empirically that these combined losses prompt the autoencoder to produce realistic and smooth interpolations while providing a convex latent manifold with a bijective mapping between the input and latent manifolds. This regularization mechanism not only enables faithful interpolation between data points, but can also be used as a general regularization technique to avoid overfitting or to produce new samples for data augmentation, as suggested, among others, by Zhang et al. (2018). To conclude, the contributions of this paper are: I. We define what constitutes an admissible interpolation between two data points on a continuous manifold. In particular, we add the cycle-consistency and smoothness terms and show their importance in generating admissible interpolations. II.
We empirically demonstrate how the combination of the four losses, the reconstruction, adversarial, cycle-consistency, and smoothness losses, contributes to admissible interpolations and produces state-of-the-art results. 2 MANIFOLD DATA INTERPOLATION. Before presenting the proposed approach, we define what constitutes a proper interpolation between two data points. There are many possible paths between two points on the manifold; even if we require the interpolation to lie on a geodesic path, there might be infinitely many such paths between two points. We therefore relax the geodesic requirement and define less restrictive conditions. Formally, assume we are given a dataset sampled from a target domain X, and we wish to interpolate between two data points x_i and x_j from X. Let the interpolated points be x̂_{i→j}(α) for α ∈ [0, 1], and let P(x) be the probability that a data point x belongs to X. We define an interpolation to be admissible if x̂_{i→j}(α) satisfies the following conditions:

1. Boundary conditions: x̂_{i→j}(0) = x_i and x̂_{i→j}(1) = x_j.

2. Monotonicity: under some distance d(x, x′) defined on the manifold, the interpolated points depart from x_i and approach x_j as the parameterization α goes from 0 to 1; namely, for all α′ ≥ α, d(x̂_{i→j}(α), x_i) ≤ d(x̂_{i→j}(α′), x_i) and, similarly, d(x̂_{i→j}(α′), x_j) ≤ d(x̂_{i→j}(α), x_j).

3. Smoothness: the interpolation function x̂_{i→j}(α) is Lipschitz continuous with a constant K: ‖x̂_{i→j}(α) − x̂_{i→j}(α + t)‖ ≤ K|t|.

4. Credibility: for all α ∈ [0, 1], it is highly probable that the interpolated image x̂_{i→j}(α) belongs to X; namely, P(x̂_{i→j}(α)) ≥ 1 − β, for some constant β ≥ 0.

2.1 PROPOSED APPROACH.
Following the above definition of an admissible interpolation, we propose a new approach, called Autoencoder Adversarial Interpolation (AEAI), which shapes the latent space according to the above requirements. The general architecture comprises a standard autoencoder with an encoder z = f(x) and a decoder x̂ = g(z). We also train a discriminator D(x) to differentiate between real and interpolated data points. For pairs of input data points x_i, x_j, we linearly interpolate between them in the latent space: z_{i→j}(α) = (1 − α) z_i + α z_j, where α ∈ [0, 1]. The first requirement is that x̂_{i→j}(α) = g(z_{i→j}(α)) should look real and fool the discriminator D. Additionally, we add a cycle-consistency loss that encourages the latent representation of x̂_{i→j}(α) to be mapped back to z_{i→j}(α); namely, ẑ_{i→j}(α) = f(g(z_{i→j}(α))) should be similar to z_{i→j}(α). Finally, we add a smoothness loss that drives the linear parameterization to form a smooth interpolation. Putting everything together, we define the loss L^{i→j} between the pair x_i and x_j as

L^{i→j} = L_R^{i→j} + λ_A L_A^{i→j} + λ_C L_C^{i→j} + λ_S L_S^{i→j}, (1)

where L_R, L_A, L_C, L_S are the reconstruction, adversarial, cycle-consistency, and smoothness losses, respectively. The first term L_R is a standard reconstruction loss, calculated at the two endpoints x_i and x_j:

L_R^{i→j} = L(x_i, x̂_i) + L(x_j, x̂_j),

where L(·, ·) is some loss function between two images (we used the L2 distance or the perceptual loss (Johnson et al., 2016)) and x̂_k = g(f(x_k)). L_A is the adversarial loss that encourages the network to fool the discriminator, so that interpolated images are indistinguishable from data in the target domain X:

L_A^{i→j} = ∑_{n=0}^{M} −log D(x̂_{i→j}(n/M)),

where D(x) ∈ [0, 1] is a discriminator trying to distinguish between images in the training set and the interpolated images.
The cycle-consistency loss L_C encourages the encoder and decoder to form a bijective mapping:

L_C^{i→j} = ∑_{n=0}^{M} ‖z_{i→j}(n/M) − ẑ_{i→j}(n/M)‖²,

where ẑ_{i→j}(α) = f(g(z_{i→j}(α))). The last term L_S is the smoothness loss, encouraging x̂(α) to vary smoothly between x_i and x_j:

L_S^{i→j} = ∑_{n=0}^{M} ‖∂x̂_{i→j}(α)/∂α‖²|_{α=n/M},

where the derivative is evaluated at α = n/M. The three losses L_A, L_C, and L_S are accumulated over M + 1 sampled points, from α = 0 to α = 1 in steps of 1/M. Finally, we sum the L^{i→j} loss over many sampled pairs. In the next section, we explain the motivation for each of the four losses comprising L^{i→j} in Equation 1 and describe how these losses promote the four conditions defined in Section 2.
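The cycle-consistency and smoothness terms can be sketched numerically as below. This is our own minimal version, not the paper's implementation: the encoder f and decoder g are passed in as plain functions over list-valued vectors, the derivative in L_S is approximated by a forward difference over the M + 1 sample points, and the adversarial term is omitted since it requires a trained discriminator:

```python
def lerp(a, b, t):
    # linear interpolation (1 - t)*a + t*b between two latent vectors
    return [(1 - t) * u + t * v for u, v in zip(a, b)]

def interp_losses(f, g, x_i, x_j, M=4):
    """Cycle-consistency (L_C) and smoothness (L_S) terms accumulated
    over the M + 1 interpolation points alpha = n/M."""
    z_i, z_j = f(x_i), f(x_j)
    l_c, l_s, prev = 0.0, 0.0, None
    for n in range(M + 1):
        z = lerp(z_i, z_j, n / M)
        x_hat = g(z)          # decoded interpolated point
        z_hat = f(x_hat)      # re-encoded latent
        l_c += sum((a - b) ** 2 for a, b in zip(z, z_hat))
        if prev is not None:
            # forward difference (x_hat(n/M) - x_hat((n-1)/M)) * M
            # approximates the derivative d x_hat / d alpha
            l_s += sum(((a - b) * M) ** 2 for a, b in zip(x_hat, prev))
        prev = x_hat
    return l_c, l_s
```

With a perfect encoder-decoder pair the cycle term vanishes, and a linearly varying decoded path yields a constant smoothness penalty, which matches the intent of the two losses.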
This paper introduces several autoencoder (AE) regularization terms that aim at reproducing continuous, realistic deformations by interpolating latent codes of images. The authors assume a continuous process generates the data and introduce three novel loss terms (in addition to the standard AE reconstruction loss). The first is a GAN loss on decoded interpolated latents $\hat{x}(\alpha)$, which ensures the interpolated latents are decoded to images similar to the training images. The second, called cycle-consistency, enforces injectivity of the decoder. The last enforces smoothness of the decoded image as a function of the interpolated latent. Combining these three losses with the original one leads to natural interpolations of latents that enjoy both smoothness and realism. The method is tested on a synthetic "pole shadow" example and COIL-100, and seems to improve upon several baselines on these datasets.
SP:924990a4586c7570f0b6a9d4f58d94ad8f4f5cc4
Autoencoder Image Interpolation by Shaping the Latent Space
Autoencoders represent an effective approach for computing the underlying factors characterizing datasets of different types . The latent representation of autoencoders have been studied in the context of enabling interpolation between data points by decoding convex combinations of latent vectors . This interpolation , however , often leads to artifacts or produces unrealistic results during reconstruction . We argue that these incongruities are due to the structure of the latent space and because such naively interpolated latent vectors deviate from the data manifold . In this paper , we propose a regularization technique that shapes the latent representation to follow a manifold that is consistent with the training images and that drives the manifold to be smooth and locally convex . This regularization not only enables faithful interpolation between data points , as we show herein , but can also be used as a general regularization technique to avoid overfitting or to produce new samples for data augmentation . 1 INTRODUCTION . Given a set of data points , data interpolation or extrapolation aims at predicting novel data points between given samples ( interpolation ) or predicting novel data outside the sample range ( extrapolation ) . Faithful data interpolation between sampled data can be seen as a measure of the generalization capacity of a learning system ( Berthelot et al. , 2018 ) . In the context of computer vision and computer graphics , data interpolation may refer to generating novel views of an object between two given views or predicting in-between animated frames from key frames . Interpolation that produces novel views of a scene requires input such as the geometric and photometric parameters of existing objects , camera parameters and additional scene components , such as lighting and the reflective characteristics of nearby objects . 
Unfortunately , these characteristics are not always available or are difficult to extract in real-world scenarios . Thus , in such cases , we can apply data-driven interpolation that is deduced based on a sampled dataset drawn from the scene taken under various acquisition parameters . The task of data interpolation is to extract new samples ( possibly continuous ) between known data samples . Clearly , linear interpolation between two images in the input ( image ) domain does not work as it produces a cross-dissolve effect between the intensities of the two images . Adopting the manifold view of data ( Goodfellow et al. , 2016 ; Verma et al. , 2018 ; Bengio et al. , 2013 ) , this task can be seen as sampling new data points along the geodesic path between the given points . The problem is that this manifold is unknown in advance and one has to approximate it from the given data . Alternatively , adopting the probabilistic perspective , interpolation can be viewed as drawing samples from highly probable areas in the data space . One fascinating property of unsupervised learning is the network ’ s ability to reveal the underlying factors controlling a given dataset . Autoencoders ( Doersch , 2016 ; Kingma & Welling , 2013 ; Goodfellow et al. , 2016 ; Kramer , 1991 ; Vincent et al. , 2010 ) represent an effective approach for exposing these factors . Researchers have demonstrated the ability to interpolate between data points by decoding a convex sum of latent vectors ( Shu et al. , 2018 ; Mathieu et al. , 2016 ) ; however , this interpolation often incorporates visible artifacts during reconstruction . To illustrate the problem , consider the following example : A scene is composed of a vertical pole at the center of a flat plane ( Figure 1-left ) . A single light source illuminates the scene and accordingly , the pole projects a shadow onto the plane . The position of the light source can vary along the upper hemisphere . 
Hence , the underlying parameters controlling the generated scene are ( θ , φ ) , the elevation and azimuth , respectively . The interaction between the light and the pole produces a cast shadow whose direction and length are determined by the light direction . A set of images of this scene is acquired from a fixed viewing position ( from above ) with various lighting directions . Our goal in this example is to train a model that is capable of interpolating between two given images . Figure 1 , top row , depicts a set of interpolated images , between the source image ( left image ) and the target image ( right image ) , where the interpolation is performed in the input domain . As illustrated , the interpolation is not natural as it produces cross-dissolve effects in image intensities . Training a standard autoencoder and applying linear interpolation in its latent space generates images that are much more realistic ( Figure 1 , bottom row ) . Nevertheless , this interpolation is not perfect as visible artifacts occur in the interpolated images . The source of these artifacts can be investigated by closely inspecting the 2D manifold embedded in the latent space . Figure 2 shows two manifolds embedded in latent spaces , one with data embedded in 2D latent space ( left plot ) and one with data embedded in 3D latent space ( 2nd plot from the left ) . In both cases , the manifolds are 2D and are generated using vanilla autoencoders . The grid lines represent the ( θ , φ ) parameterization . It can be seen that the encoders produce non-smooth and non-convex surfaces in 2D as well as in 3D . Thus , linear interpolation between two data points inevitably produces in-between points outside of the manifold . In practice , the decoded images of such points are unpredictable and may produce non-realistic artifacts . This issue is demonstrated in the two right images in Figure 2 . 
When the interpolated point is on the manifold ( an empty circle denoted ‘ A ’ ) , a faithful image is generated by the decoder ( 2nd image from the right ) . When the interpolated point departs from the manifold ( the circle denoted ‘ B ’ ) , the resulting image is unpredictable ( right image ) . In this paper , we argue that the common statistical view of autoencoders is not appropriate when dealing with data that have been generated from continuous factors . Alternatively , the manifold structure of continuous data must be considered , taking into account the geometry and shape of the manifold . Accordingly , we propose a new interpolation regularization mechanism consisting of an adversarial loss , a cycle-consistency loss , and a smoothness loss . The adversarial loss drives the interpolated points to look realistic as it is optimized against a discriminator that learns to tell apart real from interpolated data points . The cycle-consistency and the smoothness losses encourage smooth interpolations between data points . We show empirically that these combined losses prompt the autoencoder to produce realistic and smooth interpolations while providing a convex latent manifold with a bijective mapping between the input and the latent manifolds . This regularization mechanism not only enables faithful interpolation between data points , but can also be used as a general regularization technique to avoid overfitting or to produce new samples for data augmentation , as suggested , among others , by Zhang et al . ( 2018 ) . To conclude , the contributions of the papers are : I . We define what constitutes an admissible interpolation between two data points on a continuous manifold . In particular we added the cycle-consistency and the smoothness terms and show their importance in generating admissible interpolations . II . 
We empirically demonstrate how the combination of the four losses ( the reconstruction , adversarial , cycle-consistency , and smoothness losses ) contributes to admissible interpolations and produces state-of-the-art results . 2 MANIFOLD DATA INTERPOLATION . Before presenting the proposed approach we would like to define what constitutes a proper interpolation between two data points . There are many possible paths between two points on the manifold . Even if we require the interpolations to be on a geodesic path , there might be infinitely many such paths between two points . Therefore , we relax the geodesic requirement and define less restrictive conditions . Formally , assume we are given a dataset sampled from a target domain X . We are interested in interpolating between two data points xi and xj from X . Let the interpolated points be x̂i→j ( α ) for α ∈ [ 0 , 1 ] and let P ( x ) be the probability that a data point x belongs to X . We define an interpolation to be an admissible interpolation if x̂i→j ( α ) satisfies the following conditions :
1 . Boundary conditions : x̂i→j ( 0 ) = xi and x̂i→j ( 1 ) = xj .
2 . Monotonicity : we require that under some distance d ( x , x′ ) defined on the manifold , the interpolated points depart from xi and approach xj as the parameterization α goes from 0 to 1 . Namely , ∀α′ ≥ α , d ( x̂i→j ( α ) , xi ) ≤ d ( x̂i→j ( α′ ) , xi ) and similarly d ( x̂i→j ( α′ ) , xj ) ≤ d ( x̂i→j ( α ) , xj ) .
3 . Smoothness : the interpolation function x̂i→j ( α ) is Lipschitz continuous with a constant K : ‖x̂i→j ( α ) − x̂i→j ( α + t ) ‖ ≤ K |t| .
4 . Credibility : for all α ∈ [ 0 , 1 ] , we require that it is highly probable that the interpolated images x̂i→j ( α ) belong to X . Namely , P ( x̂i→j ( α ) ) ≥ 1 − β for some constant β ≥ 0 .
2.1 PROPOSED APPROACH .
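As a rough numerical illustration (not from the paper), the first three admissibility conditions can be verified on a sampled interpolation path. The distance function, tolerances, and Lipschitz constant below are illustrative assumptions; the credibility condition would require a density model and is omitted.

```python
import numpy as np

def is_admissible(path, x_i, x_j, K=10.0, dist=None):
    """path: array of shape (M+1, d) sampling x_hat_{i->j}(alpha) at alpha = n/M."""
    if dist is None:
        dist = lambda a, b: np.linalg.norm(a - b)
    M = len(path) - 1
    # 1. Boundary conditions
    if dist(path[0], x_i) > 1e-6 or dist(path[-1], x_j) > 1e-6:
        return False
    # 2. Monotonicity: depart from x_i and approach x_j as alpha grows
    d_i = [dist(p, x_i) for p in path]
    d_j = [dist(p, x_j) for p in path]
    if any(d_i[n + 1] < d_i[n] - 1e-9 for n in range(M)):
        return False
    if any(d_j[n + 1] > d_j[n] + 1e-9 for n in range(M)):
        return False
    # 3. Smoothness: discrete Lipschitz bound with constant K and step t = 1/M
    if any(dist(path[n + 1], path[n]) > K / M + 1e-9 for n in range(M)):
        return False
    # 4. Credibility (P(x) >= 1 - beta) would need a density model; omitted here.
    return True

x_i, x_j = np.zeros(2), np.ones(2)
alphas = np.linspace(0.0, 1.0, 11)
line = (1 - alphas)[:, None] * x_i + alphas[:, None] * x_j
print(is_admissible(line, x_i, x_j))  # True: a straight line satisfies 1-3
```

A path with a large jump off the segment violates both monotonicity and the Lipschitz bound, which is exactly the failure mode the interpolated point ‘B’ in Figure 2 exhibits in latent space.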
Following the above definitions for an admissible interpolation , we propose a new approach , called Autoencoder Adversarial Interpolation ( AEAI ) , which shapes the latent space according to the above requirements . The general architecture comprises a standard autoencoder with an encoder , z = f ( x ) , and a decoder x̂ = g ( z ) . We also train a discriminator D ( x ) to differentiate between real and interpolated data points . For pairs of input data points xi , xj , we linearly interpolate between them in the latent space : zi→j ( α ) = ( 1 − α ) zi + α zj , where α ∈ [ 0 , 1 ] . The first requirement is that x̂i→j ( α ) = g ( zi→j ( α ) ) should look real and fool the discriminator D . Additionally , we add a cycle-consistency loss that encourages the latent representation of x̂i→j ( α ) to be mapped back to zi→j ( α ) ; namely , ẑi→j ( α ) = f ( g ( zi→j ( α ) ) ) should be similar to zi→j ( α ) . Finally , we add a smoothness loss that drives the linear parameterization to form a smooth interpolation . Putting everything together , we define the loss between pairs xi and xj as follows : $L^{i \to j} = L_R^{i \to j} + \lambda_A L_A^{i \to j} + \lambda_C L_C^{i \to j} + \lambda_S L_S^{i \to j}$ ( 1 ) where $L_R , L_A , L_C , L_S$ are the reconstruction , adversarial , cycle-consistency , and smoothness losses , respectively . The first term $L_R$ is a standard reconstruction loss , calculated at the two endpoints xi and xj : $L_R^{i \to j} = L ( x_i , \hat{x}_i ) + L ( x_j , \hat{x}_j )$ , where L ( · , · ) is some loss function between two images ( we used the L2 distance or the perceptual loss ( Johnson et al . , 2016 ) ) and x̂k = g ( f ( xk ) ) . $L_A$ is the adversarial loss that encourages the network to fool the discriminator , so that interpolated images are indistinguishable from the data in the target domain X : $L_A^{i \to j} = \sum_{n=0}^{M} - \log D ( \hat{x}_{i \to j} ( n/M ) )$ , where D ( x ) ∈ [ 0 , 1 ] is a discriminator trying to distinguish between images in the training set and the interpolated images .
The cycle-consistency loss $L_C$ encourages the encoder and the decoder to produce a bijective mapping : $L_C^{i \to j} = \sum_{n=0}^{M} \| z_{i \to j} ( n/M ) - \hat{z}_{i \to j} ( n/M ) \|^2$ , where ẑi→j ( α ) = f ( g ( zi→j ( α ) ) ) . The last term $L_S$ is the smoothness loss encouraging x̂ ( α ) to produce smoothly varying interpolated points between xi and xj : $L_S^{i \to j} = \sum_{n=0}^{M} \left\| \frac{\partial \hat{x}_{i \to j} ( \alpha )}{\partial \alpha} \right\|^2_{\alpha = n/M}$ , where the notation $\| \partial \hat{x}_{i \to j} ( \alpha ) / \partial \alpha \|^2_{\alpha = \alpha_0}$ means that the derivative is taken at α = α0 . The three losses $L_A$ , $L_C$ and $L_S$ are accumulated over M + 1 sampled points , from α = 0/M up to α = M/M . Finally , we sum the $L^{i \to j}$ loss over many sampled pairs . In the next section , we explain the motivation for each of the four losses comprising $L^{i \to j}$ in Equation 1 and describe how these losses promote the four conditions defined in Section 2 .
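A minimal NumPy sketch of the combined loss in Equation 1 for one pair (xi, xj) is shown below. The toy linear maps stand in for trained networks, and M, the lambda weights, and the finite-difference surrogate for the derivative in α are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W_f = rng.normal(size=(16, 4)) * 0.1   # toy encoder weights, z = f(x)
W_g = rng.normal(size=(4, 16)) * 0.1   # toy decoder weights, x_hat = g(z)
w_D = rng.normal(size=16) * 0.1        # toy discriminator weights

f = lambda x: x @ W_f
g = lambda z: z @ W_g
D = lambda x: 1.0 / (1.0 + np.exp(-(x @ w_D)))   # sigmoid score in (0, 1)

def aeai_loss(x_i, x_j, M=8, lam_A=1.0, lam_C=1.0, lam_S=0.1):
    z_i, z_j = f(x_i), f(x_j)
    # L_R: reconstruction at the two endpoints only
    L_R = np.sum((g(z_i) - x_i) ** 2) + np.sum((g(z_j) - x_j) ** 2)
    alphas = np.linspace(0.0, 1.0, M + 1)
    z_path = [(1 - a) * z_i + a * z_j for a in alphas]   # linear latent path
    x_path = [g(z) for z in z_path]
    # L_A: interpolated images should fool the discriminator
    L_A = sum(-np.log(D(x)) for x in x_path)
    # L_C: f(g(z)) should map back to z (cycle-consistency)
    L_C = sum(np.sum((f(x) - z) ** 2) for x, z in zip(x_path, z_path))
    # L_S: finite differences approximate || d x_hat / d alpha ||^2
    L_S = sum(np.sum(((x_path[n + 1] - x_path[n]) * M) ** 2) for n in range(M))
    return L_R + lam_A * L_A + lam_C * L_C + lam_S * L_S

x_i, x_j = rng.normal(size=16), rng.normal(size=16)
print(aeai_loss(x_i, x_j))
```

In training, this scalar would be minimized over the encoder and decoder parameters while the discriminator is trained adversarially on the opposite objective.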
This paper focuses on developing a new regularization technique for autoencoders, which shapes the latent representation to follow a manifold that is consistent with the training images and drives the manifold to be smooth and locally convex. The authors suggest that the manifold structure of continuous data must be considered, including the geometry and shape of the manifold. The new interpolation regularization mechanism consists of an adversarial loss, a cycle-consistency loss, and a smoothness loss. The architecture of the proposed model thus includes a standard autoencoder, a discriminator, and the losses mentioned above.
SP:924990a4586c7570f0b6a9d4f58d94ad8f4f5cc4
Sharpness-aware Minimization for Efficiently Improving Generalization
1 INTRODUCTION . Modern machine learning ’ s success in achieving ever better performance on a wide range of tasks has relied in significant part on ever heavier overparameterization , in conjunction with developing ever more effective training algorithms that are able to find parameters that generalize well . Indeed , many modern neural networks can easily memorize the training data and have the capacity to readily overfit ( Zhang et al. , 2016 ) . Such heavy overparameterization is currently required to achieve state-of-the-art results in a variety of domains ( Tan & Le , 2019 ; Kolesnikov et al. , 2020 ; Huang et al. , 2018 ) . In turn , it is essential that such models be trained using procedures that ensure that the parameters actually selected do in fact generalize beyond the training set . Unfortunately , simply minimizing commonly used loss functions ( e.g. , cross-entropy ) on the training set is typically not sufficient to achieve satisfactory generalization . The training loss landscapes of today ’ s models are commonly complex and non-convex , with a multiplicity of local and global minima , and with different global minima yielding models with different generalization abilities ( Shirish Keskar et al. , 2016 ) . As a result , the choice of optimizer ( and associated optimizer settings ) from among the many available ( e.g. , stochastic gradient descent ( Nesterov , 1983 ) , Adam ( Kingma & Ba , 2014 ) , RMSProp ( Hinton et al . ) , and others ( Duchi et al. , 2011 ; Dozat , 2016 ; Martens & Grosse , 2015 ) ) has become an important design choice , though understanding of its relationship to model generalization remains nascent ( Shirish Keskar et al. , 2016 ; Wilson et al. , 2017 ; Shirish Keskar & Socher , 2017 ; Agarwal et al. , 2020 ; Jacot et al. , 2018 ) . Relatedly , a panoply of methods for modifying the training process have been proposed , including dropout ( Srivastava et al. , 2014 ) , ∗Work done as part of the Google AI Residency program .
batch normalization ( Ioffe & Szegedy , 2015 ) , stochastic depth ( Huang et al. , 2016 ) , data augmentation ( Cubuk et al. , 2018 ) , and mixed sample augmentations ( Zhang et al. , 2017 ; Harris et al. , 2020 ) . The connection between the geometry of the loss landscape—in particular , the flatness of minima— and generalization has been studied extensively from both theoretical and empirical perspectives ( Shirish Keskar et al. , 2016 ; Dziugaite & Roy , 2017 ; Jiang et al. , 2019 ) . While this connection has held the promise of enabling new approaches to model training that yield better generalization , practical efficient algorithms that specifically seek out flatter minima and furthermore effectively improve generalization on a range of state-of-the-art models have thus far been elusive ( e.g. , see ( Chaudhari et al. , 2016 ; Izmailov et al. , 2018 ) ; we include a more detailed discussion of prior work in Section 5 ) . We present here a new efficient , scalable , and effective approach to improving model generalization ability that directly leverages the geometry of the loss landscape and its connection to generalization , and is powerfully complementary to existing techniques . In particular , we make the following contributions : • We introduce Sharpness-Aware Minimization ( SAM ) , a novel procedure that improves model generalization by simultaneously minimizing loss value and loss sharpness . SAM functions by seeking parameters that lie in neighborhoods having uniformly low loss value ( rather than parameters that only themselves have low loss value , as illustrated in the middle and righthand images of Figure 1 ) , and can be implemented efficiently and easily . • We show via a rigorous empirical study that using SAM improves model generalization ability across a range of widely studied computer vision tasks ( e.g. , CIFAR- { 10 , 100 } , ImageNet , finetuning tasks ) and models , as summarized in the lefthand plot of Figure 1 . 
For example , applying SAM yields novel state-of-the-art performance for a number of already intensely studied tasks , such as ImageNet , CIFAR- { 10 , 100 } , SVHN , Fashion-MNIST , and the standard set of image classification finetuning tasks ( e.g. , Flowers , Stanford Cars , Oxford Pets , etc. ) . • We show that SAM furthermore provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels . • Through the lens provided by SAM , we further elucidate the connection between loss sharpness and generalization by surfacing a promising new notion of sharpness , which we term m-sharpness . Section 2 below derives the SAM procedure and presents the resulting algorithm in full detail . Section 3 evaluates SAM empirically , and Section 4 further analyzes the connection between loss sharpness and generalization through the lens of SAM . Finally , we conclude with an overview of related work and a discussion of conclusions and future work in Sections 5 and 6 , respectively . 2 SHARPNESS-AWARE MINIMIZATION ( SAM ) . Throughout the paper , we denote scalars as $a$ , vectors as $\boldsymbol{a}$ , matrices as $A$ , sets as $\mathcal{A}$ , and equality by definition as $\triangleq$ . Given a training dataset $S \triangleq \cup_{i=1}^{n} \{ ( x_i , y_i ) \}$ drawn i.i.d . from distribution $\mathcal{D}$ , we seek to learn a model that generalizes well . In particular , consider a family of models parameterized by $w \in \mathcal{W} \subseteq \mathbb{R}^d$ ; given a per-data-point loss function $l : \mathcal{W} \times \mathcal{X} \times \mathcal{Y} \to \mathbb{R}_+$ , we define the training set loss $L_S ( w ) \triangleq \frac{1}{n} \sum_{i=1}^{n} l ( w , x_i , y_i )$ and the population loss $L_{\mathcal{D}} ( w ) \triangleq \mathbb{E}_{ ( x , y ) \sim \mathcal{D} } [ l ( w , x , y ) ]$ . Having observed only $S$ , the goal of model training is to select model parameters $w$ having low population loss $L_{\mathcal{D}} ( w )$ . Utilizing $L_S ( w )$ as an estimate of $L_{\mathcal{D}} ( w )$ motivates the standard approach of selecting parameters $w$ by solving $\min_w L_S ( w )$ ( possibly in conjunction with a regularizer on $w$ ) using an optimization procedure such as SGD or Adam .
Unfortunately , however , for modern overparameterized models such as deep neural networks , typical optimization approaches can easily result in suboptimal performance at test time . In particular , for modern models , $L_S ( w )$ is typically non-convex in $w$ , with multiple local and even global minima that may yield similar values of $L_S ( w )$ while having significantly different generalization performance ( i.e. , significantly different values of $L_{\mathcal{D}} ( w )$ ) . Motivated by the connection between sharpness of the loss landscape and generalization , we propose a different approach : rather than seeking out parameter values $w$ that simply have low training loss value $L_S ( w )$ , we seek out parameter values whose entire neighborhoods have uniformly low training loss value ( equivalently , neighborhoods having both low loss and low curvature ) . The following theorem illustrates the motivation for this approach by bounding generalization ability in terms of neighborhood-wise training loss ( full theorem statement and proof in Appendix A ) : Theorem 1 ( stated informally ) . For any $\rho > 0$ , with high probability over training set $S$ generated from distribution $\mathcal{D}$ , $L_{\mathcal{D}} ( w ) \le \max_{ \| \epsilon \|_2 \le \rho } L_S ( w + \epsilon ) + h ( \| w \|_2^2 / \rho^2 )$ , where $h : \mathbb{R}_+ \to \mathbb{R}_+$ is a strictly increasing function ( under some technical conditions on $L_{\mathcal{D}} ( w )$ ) . To make explicit our sharpness term , we can rewrite the right-hand side of the inequality above as $[ \max_{ \| \epsilon \|_2 \le \rho } L_S ( w + \epsilon ) - L_S ( w ) ] + L_S ( w ) + h ( \| w \|_2^2 / \rho^2 )$ . The term in square brackets captures the sharpness of $L_S$ at $w$ by measuring how quickly the training loss can be increased by moving from $w$ to a nearby parameter value ; this sharpness term is then summed with the training loss value itself and a regularizer on the magnitude of $w$ . Given that the specific function $h$ is heavily influenced by the details of the proof , we substitute the second term with $\lambda \| w \|_2^2$ for a hyperparameter $\lambda$ , yielding a standard L2 regularization term .
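As a rough numerical illustration (not from the paper) of the bracketed sharpness term, consider a 1-D toy loss with one sharp and one flat minimum, both attaining loss 0: the neighborhood maximum separates them even though the loss values at the minima are identical. The toy loss, ρ, and the sampling grid are illustrative assumptions.

```python
import numpy as np

def sharpness(L, w, rho, n=1001):
    # Monte-Carlo-free estimate of max_{|eps| <= rho} L(w + eps) - L(w) on a grid
    eps = np.linspace(-rho, rho, n)
    return np.max(L(w + eps)) - L(w)

# Toy loss: sharp minimum at w = 1 (curvature 100), flat minimum at w = -1 (curvature 1)
L = lambda w: np.minimum(50.0 * (w - 1.0) ** 2, 0.5 * (w + 1.0) ** 2)

print(sharpness(L, 1.0, rho=0.1))   # 0.5   : sharp minimum
print(sharpness(L, -1.0, rho=0.1))  # 0.005 : flat minimum
```

Both minima have identical training loss, but only the flat one keeps the bracketed term (and hence the bound on the population loss) small.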
Thus , inspired by the terms from the bound , we propose to select parameter values by solving the following Sharpness-Aware Minimization ( SAM ) problem : $\min_w L_S^{SAM} ( w ) + \lambda \| w \|_2^2$ where $L_S^{SAM} ( w ) \triangleq \max_{ \| \epsilon \|_p \le \rho } L_S ( w + \epsilon )$ , ( 1 ) where $\rho \ge 0$ is a hyperparameter and $p \in [ 1 , \infty ]$ ( we have generalized slightly from an L2-norm to a p-norm in the maximization over $\epsilon$ , though we show empirically in Appendix C.5 that $p = 2$ is typically optimal ) . Figure 1 shows¹ the loss landscape for a model that converged to minima found by minimizing either $L_S ( w )$ or $L_S^{SAM} ( w )$ , illustrating that the sharpness-aware loss prevents the model from converging to a sharp minimum . In order to minimize $L_S^{SAM} ( w )$ , we derive an efficient and effective approximation to $\nabla_w L_S^{SAM} ( w )$ by differentiating through the inner maximization , which in turn enables us to apply stochastic gradient descent directly to the SAM objective . Proceeding down this path , we first approximate the inner maximization problem via a first-order Taylor expansion of $L_S ( w + \epsilon )$ with respect to $\epsilon$ around 0 , obtaining $\epsilon^* ( w ) \triangleq \arg\max_{ \| \epsilon \|_p \le \rho } L_S ( w + \epsilon ) \approx \arg\max_{ \| \epsilon \|_p \le \rho } L_S ( w ) + \epsilon^T \nabla_w L_S ( w ) = \arg\max_{ \| \epsilon \|_p \le \rho } \epsilon^T \nabla_w L_S ( w )$ . ¹Figure 1 was generated following Li et al . ( 2017 ) with the provided ResNet56 ( no residual connections ) checkpoint , and training the same model with SAM . In turn , the value $\hat\epsilon ( w )$ that solves this approximation is given by the solution to a classical dual norm problem ( $| \cdot |^{q-1}$ denotes elementwise absolute value and power ) ² : $\hat\epsilon ( w ) = \rho \, \mathrm{sign} ( \nabla_w L_S ( w ) ) \, | \nabla_w L_S ( w ) |^{q-1} / \left( \| \nabla_w L_S ( w ) \|_q^q \right)^{1/p}$ ( 2 ) where $1/p + 1/q = 1$ . Substituting back into equation ( 1 ) and differentiating , we then have $\nabla_w L_S^{SAM} ( w ) \approx \nabla_w L_S ( w + \hat\epsilon ( w ) ) = \frac{ d ( w + \hat\epsilon ( w ) ) }{ dw } \nabla_w L_S ( w ) \big|_{ w + \hat\epsilon ( w ) } = \nabla_w L_S ( w ) \big|_{ w + \hat\epsilon ( w ) } + \frac{ d \hat\epsilon ( w ) }{ dw } \nabla_w L_S ( w ) \big|_{ w + \hat\epsilon ( w ) }$ .
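The closed form in equation (2) can be checked numerically; this sketch (an illustration, not the paper's code) also confirms that for p = 2 it reduces to the familiar normalized-gradient step ρ g / ‖g‖₂. It assumes p > 1 so the dual exponent q is finite.

```python
import numpy as np

def eps_hat(g, rho, p=2.0):
    # Closed-form maximizer of the linearized inner problem, equation (2); p > 1.
    q = p / (p - 1.0)  # dual exponent, 1/p + 1/q = 1
    num = rho * np.sign(g) * np.abs(g) ** (q - 1)
    den = (np.sum(np.abs(g) ** q)) ** (1.0 / p)
    return num / den

g = np.array([3.0, -4.0])
e = eps_hat(g, rho=0.1, p=2.0)
# For p = 2: e = rho * g / ||g||_2 = 0.1 * [0.6, -0.8]
print(e)  # [ 0.06 -0.08]
```

Note that the perturbation saturates the constraint: for p = 2 its L2 norm equals ρ exactly.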
This approximation to $\nabla_w L_S^{SAM} ( w )$ can be straightforwardly computed via automatic differentiation , as implemented in common libraries such as JAX , TensorFlow , and PyTorch . Though this computation implicitly depends on the Hessian of $L_S ( w )$ because $\hat\epsilon ( w )$ is itself a function of $\nabla_w L_S ( w )$ , the Hessian enters only via Hessian-vector products , which can be computed tractably without materializing the Hessian matrix . Nonetheless , to further accelerate the computation , we drop the second-order terms , obtaining our final gradient approximation : $\nabla_w L_S^{SAM} ( w ) \approx \nabla_w L_S ( w ) \big|_{ w + \hat\epsilon ( w ) }$ . ( 3 ) As shown by the results in Section 3 , this approximation ( without the second-order terms ) yields an effective algorithm . In Appendix C.4 , we additionally investigate the effect of instead including the second-order terms ; in that initial experiment , including them surprisingly degrades performance , and further investigating these terms ’ effect should be a priority in future work . We obtain the final SAM algorithm by applying a standard numerical optimizer such as stochastic gradient descent ( SGD ) to the SAM objective $L_S^{SAM} ( w )$ , using equation 3 to compute the requisite objective function gradients . Algorithm 1 gives pseudo-code for the full SAM algorithm , using SGD as the base optimizer , and Figure 2 schematically illustrates a single SAM parameter update . Algorithm 1 : SAM algorithm ( full pseudo-code listing omitted here ) .
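A single SAM update under equation (3) can be sketched in NumPy on a toy loss with an analytic gradient; real implementations perform the same two gradient passes with autodiff. The toy loss, learning rate, and ρ are illustrative assumptions, not values from the paper.

```python
import numpy as np

def loss_grad(w):
    # Toy non-convex loss L(w) = sum(w^4 - w^2) and its analytic gradient;
    # per-coordinate minima sit at w = +/- 1/sqrt(2).
    return np.sum(w ** 4 - w ** 2), 4 * w ** 3 - 2 * w

def sam_step(w, lr=0.1, rho=0.05):
    _, g = loss_grad(w)                          # first pass: gradient at w
    e = rho * g / (np.linalg.norm(g) + 1e-12)    # eq. (2) with p = 2
    _, g_adv = loss_grad(w + e)                  # second pass: gradient at w + e
    return w - lr * g_adv                        # eq. (3): descend on perturbed gradient

w = np.array([1.5, -0.5])
for _ in range(100):
    w = sam_step(w)
print(np.round(w, 3))  # each coordinate near +/- 1/sqrt(2) ~ +/- 0.707
```

The base optimizer here is plain SGD; swapping in momentum or Adam only changes the final update line, exactly as in the paper's Algorithm 1.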
Motivated by the connection between the flatness of minima and generalization ability, the authors propose Sharpness-aware Minimization (SAM), which explicitly minimizes both loss value and loss sharpness when training deep neural networks. They find that SAM improves generalization on a range of image classification tasks and provides robustness to label noise as well. They also introduce a new notion of sharpness named m-sharpness.
SP:4575567f743dfddde8d82d911115cf806f78042f
This paper proposes and empirically evaluates SAM, an optimization method that is designed to seek out regions of uniformly low training loss. The method is derived from a bound on the generalization performance of parameters $w$ in terms of the maximal training loss in a region around $w$. After various approximations, minimizing this upper bound gives rise to a simple method, which first performs a (normalized) gradient ascent step; computes the gradient at that perturbed location; and uses that gradient to update the weights. The empirical evaluation shows that SAM improves generalization performance across a wide range of settings.
Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams
1 INTRODUCTION . The prevalence of data streams in contemporary applications urges systems to learn in a continual fashion . Autonomous vehicles , sensory robot data , and video streaming yield never-ending streams of data , with abrupt changes in the observed environment behind every vehicle turn , robot entering a new room , or camera cut to a subsequent scene . Alas , learning from streaming data is far from trivial due to these changes , as neural networks tend to forget the knowledge they previously acquired . The data stream presented to the network is not identically and independently distributed ( iid ) , giving rise to a trade-off between neural stability to retain the current state of knowledge and neural plasticity to swiftly adopt the new knowledge ( Grossberg , 1982 ) . Finding the balance in this stability-plasticity dilemma addresses the catastrophic forgetting ( French , 1999 ) induced by the non-iid intrinsics of the data stream , and is considered the main hurdle for continually learning systems . Although a lot of progress has been established in the literature , strong assumptions often apply , impeding applicability for real-world systems . The static training and testing paradigms prevail , whereas a true continual learner should enable both simultaneously and independently . Therefore , we propose the two-agent learner-evaluator framework to redefine perspective on existing paradigms in the field . Within this framework , we introduce data incremental learning , enabling completely task-free learning and evaluation . Furthermore , we introduce Continual Prototype Evolution ( CoPE ) , a new online data incremental learner wherein prototypes perpetually represent the most salient features of the class population , shifting the catastrophic forgetting problem from the full network parameter space to the lower-dimensional latent space .
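The prototype-based classification that CoPE builds on can be sketched generically: each observed class keeps an evolving prototype in latent space, and prediction is nearest-prototype. The exponential-moving-average update, momentum value, and cosine similarity below are illustrative assumptions, not the paper's exact pseudo-prototypical proxy formulation.

```python
import numpy as np

class PrototypeClassifier:
    def __init__(self, momentum=0.9):
        self.protos = {}          # class label -> unit-norm prototype vector
        self.momentum = momentum

    def update(self, z, y):
        # Evolve prototype p_y toward the latent feature z of a new sample.
        z = z / np.linalg.norm(z)
        if y not in self.protos:
            self.protos[y] = z
        else:
            p = self.momentum * self.protos[y] + (1 - self.momentum) * z
            self.protos[y] = p / np.linalg.norm(p)

    def predict(self, z):
        # Nearest prototype by cosine similarity.
        z = z / np.linalg.norm(z)
        return max(self.protos, key=lambda y: self.protos[y] @ z)

clf = PrototypeClassifier()
clf.update(np.array([1.0, 0.0, 0.0, 0.0]), y=0)
clf.update(np.array([0.0, 1.0, 0.0, 0.0]), y=1)
print(clf.predict(np.array([0.9, 0.1, 0.0, 0.0])))  # 0
```

Because only the low-dimensional prototypes evolve with the stream, the classifier can be queried at any point in time, matching the continual evaluation setting described below.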
As a first , our prototypes evolve continually with the data stream , enabling learning and evaluation at any point in time . Similar to representativeness heuristics in human cognition ( Kahneman & Tversky , 1972 ) , the class prototypes are the cornerstone for nearest neighbor classification . Additionally , the system is robust to highly imbalanced data streams by the combination of replay with a balancing memory population scheme . We find batch information in the latent space to have a significant advantage in the challenging non-stationary and online processing regime , which we incorporate in the novel pseudo-prototypical proxy loss . 2 THE LEARNER-EVALUATOR FRAMEWORK . To date , the paradigms of task , class , and domain incremental learning ( van de Ven & Tolias , 2018 ) dominate the continual learning literature . However , strong and differing assumptions often lead to confusion and overlap between implementations of these definitions . Furthermore , the concept of a static training and testing phase is still ubiquitous , whereas continual learning systems should enable both phases continually and independently . Therefore , we propose a generalizing framework which disentangles the continually learning system into two agents : the learner and the evaluator . Figure 1 presents an overview of the framework . The learning agent learns predicting function fθ : X → Y parameterized by θ , mapping the input space X to the target output space Y . The learner receives data samples ( xi , yi ) from stream S and has simultaneous access to the horizon D , i.e . the observable subset of stream S which can be processed for multiple iterations . Data sample i is constituted by input feature xi ∈ X and corresponding ( self- ) supervision signal yi for which the output space for classification is defined as a discrete set of observed classes Yi ← Yi−1 ∪ { yi } . 
To manage memory usage and to enable multiple updates and stochasticity in the optimization process, updates for θ are typically performed based on a small-scale processing batch B ⊆ D. The data and size of the horizon D are determined by the specific setup or application, ranging from standard offline learning with D = S to online continual learning with D = B. Furthermore, the learner might need additional resources after observing data from B ⊆ D, such as stored samples or model copies, confined by the operational memory M. The evaluating agent acts independently from the learner by evaluating fθ with horizon Deval from the evaluation stream Seval, with small-scale processing batches Beval ⊆ Deval. This stream can contain concepts not yet observed by the learner in S, to measure zero-shot performance. The framework provides leeway for the concept distributions in Seval to be either static or dynamically evolving, determining how the performance of the learner is measured. On the one hand, static concept distributions can measure the degree to which knowledge of learned concepts is preserved, as commonly used in continual learning. On the other hand, evolving concept distributions measure performance for the current distribution in horizon Deval only, where concepts might drift from their original representation, also known as concept drift (Schlimmer & Granger, 1986). Evaluation can occur asynchronously on demand or periodically, with periodicity ρ determining the resolution of the evaluation samples. Task, class, and domain incremental learning are based on the composition in the learner of the observable stream subset in horizon Dt, which is incrementally replaced by a new subset of data for the new task, set of classes, or domain, with t the identifier of the present data subset.
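The two-agent setup above can be sketched as a minimal loop. This is an illustrative sketch only: the interfaces (`observe`, `predict`, `evaluate`) and the toy learner/evaluator classes are assumptions for demonstration, not an API defined by the paper. It shows the learner updating online (D = B) while the evaluator independently probes fθ with periodicity ρ.

```python
class CountingLearner:
    """Toy learner: predicts the most frequently observed label so far."""
    def __init__(self):
        self.counts = {}

    def observe(self, x, y):
        self.counts[y] = self.counts.get(y, 0) + 1

    def predict(self, x):
        return max(self.counts, key=self.counts.get)


class AccuracyEvaluator:
    """Toy evaluator: accuracy of f on a held-out evaluation stream."""
    def evaluate(self, f, eval_stream):
        hits = [f(x) == y for x, y in eval_stream]
        return sum(hits) / len(hits)


def run(learner, evaluator, stream, eval_stream, rho=100):
    """Two-agent loop: the learner updates from stream S sample by sample
    (online horizon D = B), while the evaluator independently measures
    performance every `rho` samples (periodicity rho)."""
    results = []
    for i, (x, y) in enumerate(stream):
        learner.observe(x, y)               # learner consumes the stream
        if (i + 1) % rho == 0:              # periodic, independent evaluation
            results.append(evaluator.evaluate(learner.predict, eval_stream))
    return results
```

The key point of the framework is visible in the loop: evaluation is decoupled from training, so it could equally be triggered asynchronously on demand rather than by a fixed ρ.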
Task incremental learning assumes both learner and evaluator receive data (xi, yi, ti) with ti+1 ≥ ti and the horizon spanning all data of a given task, with Dt = {(xi, yi, ti) ∈ S | ti = t} (De Lange et al., 2019; van de Ven & Tolias, 2019). Having explicit access to ti confines prediction to an isolated output space. Similarly, in class incremental learning the learner implicitly requires ti to identify the transitions of D when observing new batches of classes (Rebuffi et al., 2017; Castro et al., 2018; Shmelkov et al., 2017; Wu et al., 2018). However, the evaluator considers the entire output space without the need for identifier t. Domain incremental learning holds the same assumptions as class incremental learning, with concepts drifting from one domain to the other for a typically fixed output space, exemplified by the widely used permuted-MNIST setup (Goodfellow et al., 2013). Data incremental learning is a more general paradigm we introduce to facilitate learning from any data stream, with no assumption other than observing data incrementally. In contrast to existing paradigms, when the learner observes horizon D of data stream S, data incremental learning does not disclose an identifier t. Consequently, there is no explicit indication of which subset of the stream is being observed in the horizon D. Therefore, the learner either processes observed data directly in an online fashion with processing batch B = D, or infers an implicit identifier t from statistics in stream S. Similar to class and domain incremental learning, the evaluator operates without t on the full output space. This paradigm endows continually learning systems with increased practical use, as real-world streaming applications often lack a supervision signal t. Moreover, even if t were provided, it would introduce a bias through the fixed choice of the supervisor, rather than being determined dynamically based on the needs of the system. 3 PRIOR WORK.
Continually learning systems are able to learn with limited resources from data streams prone to severe distribution shifts. The main body of works presumes the presence of tasks, which divide the data streams into large discrete batches and are indicated to the learner with a task identifier (Kirkpatrick et al., 2017; Li & Hoiem, 2017; Zenke et al., 2017; Aljundi et al., 2018; De Lange et al., 2020). Replay methods retain representative data for observed data distributions that are currently unavailable in the learner's horizon D. The replay data is either obtained directly from operational memory M with stored samples (Rebuffi et al., 2017; Lopez-Paz & Ranzato, 2017) or generated using generative models (Shin et al., 2017; Kamra et al., 2017; Seff et al., 2017; Wu et al., 2018). GEM (Lopez-Paz & Ranzato, 2017) uses replay in a constrained optimization perspective to project gradients towards a local joint task optimum. iCaRL (Rebuffi et al., 2017) employs exemplars to distill knowledge (Hinton et al., 2015) to the learner from a previous model version, with new class exemplars stored in a queue to optimally represent the class mean in feature space. The prototypes are then used for nearest neighbor prediction by the evaluator, in the same vein as work concurrent to ours (Han et al., 2020). Nonetheless, all three works strongly rely on a task identifier t for the learner, which is mostly unavailable for real-world data streams. Moreover, in both prototypical approaches the prototypes remain static between the given task transitions and become outdated. Consequently, before using the evaluator, they have to exhaustively recalculate the prototypes based on all exemplars in memory. In contrast, our prototypes evolve in an online fashion with the data stream and remain representative for the continual learner and evaluator at all times.
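The idea of prototypes that evolve online and serve directly as a nearest-neighbor classifier can be sketched as follows. This is a minimal sketch, not CoPE's actual update rule (which is not specified in this excerpt): an exponential moving average with momentum `alpha` is assumed here purely for illustration, along with cosine similarity on unit-normalized embeddings.

```python
import numpy as np

class PrototypeClassifier:
    """Nearest-prototype classifier with online prototype evolution.

    Sketch under assumptions: prototypes are unit-norm vectors in the
    d-dimensional latent space, nudged toward each new embedding with an
    exponential moving average (momentum `alpha`). CoPE's exact update
    differs; this only illustrates the online-evolution idea.
    """

    def __init__(self, alpha=0.9):
        self.alpha = alpha
        self.prototypes = {}  # class label -> unit-norm prototype vector

    def update(self, z, y):
        """Evolve the prototype of class y toward latent embedding z."""
        z = z / np.linalg.norm(z)
        if y not in self.prototypes:
            self.prototypes[y] = z          # first sample seeds the prototype
        else:
            p = self.alpha * self.prototypes[y] + (1 - self.alpha) * z
            self.prototypes[y] = p / np.linalg.norm(p)

    def predict(self, z):
        """Nearest-neighbor prediction by cosine similarity to prototypes."""
        z = z / np.linalg.norm(z)
        return max(self.prototypes, key=lambda y: z @ self.prototypes[y])
```

Because prototypes are updated with every observed sample, the classifier is usable by the evaluator at any point in time, with no recalculation from stored exemplars.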
Recent works focus on online data incremental learning (Section 2), in which the learner operates completely task-free. Reservoir (Vitter, 1985) is a replay baseline with strong potential to outperform continual learning methods (Chaudhry et al., 2019). Samples are stored in memory M with probability M/n, with n the number of observed samples and M the buffer size. MIR (Aljundi et al., 2019a) extends Reservoir sampling with a loss-based retrieval strategy, at the cost of additional forward passes and a model copy to attain the losses for a subset of samples. The Reservoir buffer population approximately follows the data stream distribution, severely deteriorating the performance on underrepresented tasks in imbalanced data streams, as shown in Section 6.2. An alternative memory population scheme is used in GSS (Aljundi et al., 2019b) by extending the GEM constrained optimization perspective to an instance-based level. GSS adds samples to the buffer based on their gradients, whereas GEM requires the number of tasks and the task transitions to divide memory equally over all tasks a priori. In contrast, iCaRL's memory population is incrementally subdivided over all classes after learning a task, by iteratively adding observed samples from D to optimally approximate the class mean in feature space. As this is computationally expensive, works concurrent to ours explore other balancing schemes (Kim et al., 2020; Chrysakis & Moens, 2020), where we propose a simple but effective class-based Reservoir scheme with uniform retrieval. Another branch of parameter isolation methods (De Lange et al., 2019) allocates parameters to subsets of the data. Several task incremental works assign parameters based on the task identifier (Mallya & Lazebnik, 2018; Serra et al., 2018). A new line of work instead focuses on task-free model expansion. CURL (Rao et al.
, 2019) enables task-free and unsupervised adaptation using a multi-component variational auto-encoder, with generative replay from a model copy avoiding forgetting in the current model. CN-DPM (Lee et al., 2020) allocates data subsets to expert networks following a Dirichlet process mixture. In contrast to these capacity-expansion-based methods, CoPE evades unbounded allocation of resources, as the memory and network capacity are fixed, with the replay memory dynamically subdivided over categories occurring in the data stream. Note that new categories require an additional prototype, but these are only d-dimensional and therefore insignificant in size, and the set of categories is typically limited as well. Besides the focus on continual learning in this work, our learner-evaluator framework generalizes to concept drift as well (Schlimmer & Granger, 1986), for which we refer to the overviews in (Tsymbal, 2004; Gama et al., 2014). Further, in deep embedding learning most commonly pairs (Hadsell et al., 2006) and triplets (Harwood et al., 2017) of samples are considered in contrastive losses, whereas other works use batch information in lifted structure embeddings (Oh Song et al., 2016) or instance-wise softmax embeddings (Ye et al., 2019). These approaches fully depend on the batch size, whereas our pseudo-prototypical proxy loss aggregates both decoupled prototypes and additional batch pseudo-prototypes to defy class interference in the latent space. Learning prototypical representations also shows promising results in few-shot learning (Snell et al., 2017).
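The Reservoir baseline discussed above (storing each sample with probability M/n) is Vitter's classic algorithm and can be written in a few lines. This sketch shows the standard single-buffer scheme; the class-based variant proposed in the text would run the same update per observed class over a balanced partition of the buffer, details of which are not given in this excerpt.

```python
import random

def reservoir_update(memory, sample, n, capacity):
    """Vitter's Reservoir sampling: after n observed stream samples, each
    sample resides in the buffer with probability capacity/n, so the
    buffer is a uniform random subset of the stream seen so far."""
    if len(memory) < capacity:
        memory.append(sample)          # buffer not yet full: always store
    else:
        j = random.randrange(n)        # uniform index over all n samples so far
        if j < capacity:
            memory[j] = sample         # replace a uniformly chosen buffer slot
```

Because the buffer mirrors the stream distribution, an imbalanced stream yields an imbalanced buffer, which is precisely the failure mode the class-based scheme addresses.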
This article introduces a learner-evaluator framework that incorporates the different variations of problems related to incremental learning. It also proposes a method called Continual Prototype Evolution for dealing with the most general version of the problem, incremental learning on data streams, in which the learning task is not specified. The paper presents an extensive amount of experiments indicating that in this scenario, the proposed method improves significantly on existing approaches in terms of accuracy and memory efficiency. The article is well organized, easy to read and understand.
SP:4045aeb245ca2e1341b85397e81090d9f99217cf
Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams
This paper covers an interesting topic: continual learning from a stream of data. One limitation of existing classification algorithms is their closed-set assumption. In closed-set methods, a predefined set of classes is considered and a model is trained on the available data from these classes, under the assumption that test data will be drawn from a distribution similar to the training data. However, most real-world problems are open-set problems. Open-set models should be able to learn continuously in an online manner with minimal or zero supervision. In other words, they should be able to learn new classes or update existing classes based on newly received data on-the-fly, without forgetting previously learned knowledge.
Empirically Verifying Hypotheses Using Reinforcement Learning
This paper formulates hypothesis verification as an RL problem. Specifically, we aim to build an agent that, given a hypothesis about the dynamics of the world, can take actions to generate observations which help predict whether the hypothesis is true or false. Existing RL algorithms fail to solve this task, even for simple environments. We formally define this problem and develop environments to test different algorithms' performance. We analyze methods which are given additional pre-training rewards and find that the most effective of these is one that exploits the underlying structure of many hypotheses, factorizing them as {pre-condition, action sequence, post-condition} triplets. By leveraging this structure we show that RL agents are able to succeed. Furthermore, subsequent fine-tuning of the policies allows the agent to correctly verify hypotheses not amenable to this factorization. 1 INTRODUCTION. Empirical research on early learning (Gopnik, 2012; Kushnir & Gopnik, 2005) shows that infants build an understanding of the world by constantly formulating hypotheses about how some physical aspect of the world might work and then proving or disproving them through deliberate play. Through this process the child builds up a consistent causal understanding of the world. This contrasts with the manner in which current ML systems operate. Both traditional i.i.d. and interactive learning settings use a single user-specified objective function that codifies a high-level task, and the optimization routine finds the set of parameters (weights) which maximizes performance on the task. The learned representation (knowledge of how the world works) is embedded in the weights of the model, which makes it harder to inspect, hypothesize about, or even enforce domain constraints that might exist. On the other hand, hypothesis generation and testing is a process explored in classical approaches to AI (Brachman & Levesque, 2004).
In this paper we take a modest step towards the classical AI problem of building agents capable of testing hypotheses about their world using modern ML approaches. The problem we address is illustrated in Figure 1. Agents are placed in a world which has several interactive elements. They are provided with a hypothesis (an "action sentence" (Pearl, 2009)) about the underlying mechanics of the world via a text string (e.g. "A will be true if we do B"). The task is to determine if the hypothesis is true or not. This problem cannot be solved without interaction with a dynamic world (comparing the state before and after taking action B). A key novelty in our work is formulating the task in a manner that permits the application of modern RL methods, allowing raw state observations to be used rather than abstract Boolean expressions of events. To do this, we use a model composed of two different deep parametric functions which are learned through interaction: (i) a policy that generates observations relevant to verification of the hypothesis and (ii) a prediction function which uses the observations to predict whether it is true. We first show that agents trained end-to-end using deep RL cannot learn policies that generate observations to verify the hypothesis. To remedy this, we exploit the underlying structure of hypotheses: they can often be formulated as a triplet of a pre-condition (P), an action sequence (collectively B), and a post-condition (A) that is causally related to the pre-condition and actions. Using this structure, we can seed our action policy to learn behaviors which alter the truth of the pre-condition and post-condition. This allows agents to learn policies that can generate meaningful observations for training the prediction function. We further demonstrate that these policies can be adapted to verify more general hypotheses that do not necessarily fit the triplet structure.
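The two-component model described above (a policy that gathers observations, a prediction function that turns them into a verdict) can be sketched as a single episode loop. The interfaces for `policy`, `predictor`, and `env` are assumptions for illustration, not the paper's API.

```python
def episode(policy, predictor, env, hypothesis, max_steps=50):
    """One verification episode, sketched from the text's description:
    a policy acts in the world to generate observations relevant to the
    hypothesis, and a separate prediction function maps the collected
    trajectory to a true/false verdict."""
    obs = env.reset(hypothesis)
    trajectory = [obs]
    for _ in range(max_steps):
        action = policy(obs, hypothesis)   # (i) observation-generating policy
        obs, done = env.step(action)
        trajectory.append(obs)
        if done:
            break
    return predictor(trajectory, hypothesis)  # (ii) true/false prediction
```

In the paper both components are deep parametric functions trained through interaction; here they are left as callables so the control flow is visible.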
Our experiments, using hypotheses such as "when you are at craftingtable and you have stick and then you craft then torch is made", show that this approach outperforms naive RL and several flavors of intrinsic motivation designed to encourage the agent to interact with the objects of interest. 2 RELATED WORK. Knowledge representation and reasoning (KRR) (Brachman & Levesque, 2004) is a central theme of traditional AI. Commonsense reasoning approaches (Davis, 1990; Davis & Marcus, 2015; Liu & Singh, 2004), e.g. CYC (Lenat, 1995), codify everyday knowledge into a schema that permits inference and question answering. However, the underlying operations are logic-based and occur purely within the structured representation, with no mechanism for interaction with an external world. Expert systems (Giarratano & Riley, 1998) instead focus on narrow domains of knowledge, but are similarly self-contained. Logic-based planning methods (Fikes & Nilsson, 1971; Colaco & Sridharan, 2015) generate abstract plans that could be regarded as action sequences for an agent. By contrast, our approach is statistical in nature, relying on Reinforcement Learning (RL) to guide the agent. Our approach builds on the recent interest (Mao et al., 2019; Garcez et al., 2012) in neural-symbolic approaches that combine neural networks with symbolic representations. In particular, some recent works (Zhang & Stone, 2015; Lu et al., 2018) have attempted to combine RL with KRR for tasks such as navigation and dialogue. These take the world dynamics learned by RL and make them usable in declarative form within the knowledge base, which is then used to improve the underlying RL policy. In contrast, in our approach the role of RL is to verify a formal statement about the world. Our work also shares some similarity with Konidaris et al. (2018), where ML methods are used to learn mappings from world states to representations a planner can use.
Causality and RL: There are now extensive and sophisticated formalizations of (statistical) causality (Pearl, 2009). These provide a framework for an agent to draw conclusions about its world and verify hypotheses, as in this work. This is the approach taken in Dasgupta et al. (2019), where RL is used to train an agent that operates directly on a causal Bayesian network (CBN) in order to predict the results of interventions on the values of its nodes. In contrast, the approach in this work is to sidestep this formalization with the hope of training agents that test hypotheses without building explicit CBNs. Unlike Dasgupta et al. (2019), our agents intervene on the actual world (where interventions may take many actions), rather than on the abstract CBN. Nevertheless, we find that it is necessary to add inductive bias to the training of the agent; here we use pre-training on (P, B, A) triplets. These approaches are complementary; one could combine explicit generation and analysis of CBNs as an abstract representation of an environment with our training protocols. Our work is thus most similar to Denil et al. (2016), which uses RL directly on the world, with the agent rewarded for answering questions that require experimentation. However, in that work (and in Dasgupta et al. (2019)), the "question" in each world is the same; thus, while learning to interact led to higher answer accuracy, random experimental policies could still find correct answers. In this work, on the other hand, the space of questions possible for any given world is combinatorial, and random experimentation (and indeed vanilla RL) is insufficient to answer them.
Cognitive development: Empirical research on early learning (Gopnik, 2012; Kushnir & Gopnik, 2005) shows infants build an understanding of the world in ways that parallel the scientific process: constantly formulating hypotheses about how some physical aspect of the world might work and then proving or disproving them through deliberate play. Through this process the child builds up an abstract, consistent causal understanding of the world. Violations of this understanding elicit measurable surprise (Spelke et al., 1992). Automated knowledge base completion: This work is also related to knowledge base completion (Fader et al., 2011; Bordes et al., 2013; Suchanek et al., 2007), especially as formulated in Riedel et al. (2013). Instead of using facts in the knowledge base or a text corpus to predict edges, here the agent acts in a world and observes the results of its actions. This recalls Mitchell et al. (2018), where the system verifies facts it had hypothesized by searching for corroboration in the corpus. Automation of the scientific process: This has been tried in several domains. Robotic exploration of chemical reactivity was demonstrated using ML techniques (Granda et al., 2018). King et al. (2009) developed a robot scientist that explored genomics hypotheses about yeast and experimentally tested them using laboratory automation. In biochemistry, Vanlier et al. (2014) used Bayesian methods for optimal experiment design. More generally, the Automated Statistician project (Steinruecken et al., 2019) uses a Bayesian approach to reason about different hypotheses for explaining the data, with the aim of creating interpretable knowledge. Embodied question answering: The problem studied in this paper is closely related to the embodied visual question-answering problem in Das et al. (2018).
Indeed , our basic formulation is a particular case of the most general formulation of embodied QA , as the agent is rewarded for successfully answering questions about the world that require interaction . However , the form of the questions is different than those considered in that work , as they may require drawing a conclusion about the dynamics of the world , rather than a static property . Even the questions about static properties we are interested in have a different flavor , as they encode rules , rather than statements about the current configuration . Our approach is built around hypothesis-conclusion structure special to these questions . There is also a large body of work on visual QA Kafle & Kanan ( 2017 ) ; Wu et al . ( 2016a ) and text-based QA Rajpurkar et al . ( 2018 ) . From this , most relevant to our work is Wu et al . ( 2016b ) who use a structured knowledge base to augment standard QA techniques . 3 THE HYPOTHESIS VERIFICATION PROBLEM . An agent is spawned in a world sampled from a distribution over possible worlds . In the case of “ Crafting ” , shown in Figure 1 , there are items lying around that the agent can pick up and combine using a “ craft ” action . The exact dynamics change for every newly instantiated world ; so in one world , taking a craft action with a stick might produce a torch , and in another , it might produce a pickaxe . At the start of each episode , the agent is given a hypothesis about the world , such as the one shown at the top of Figure 1 . The agent gets a reward when it correctly answers if that hypothesis is true or false . Because the dynamics and rules change each episode , the agent must learn to interact with the world in order to decide if the hypothesis is true . In Figure 1 the agent picks up the stick and does a craft action to see that a torch is created . It then has enough information to decide the hypothesis is true , and the agent receives reward for verifying the hypothesis correctly . 
In this work , we will structure our hypotheses using templated language . One could imagine using more expansive formal symbolic systems ( e.g . first order logic ) , or alternatively , using natural language descriptions of the hypotheses . The former might allow interfacing with symbolic solvers or otherwise using combinatorial approaches ; whereas the latter would allow scaling annotation to untrained humans . We choose templated language because it is simple , and sufficient for the environments on which we test , which are already challenging for standard RL . Moreover , in our view it is a good starting point for further work that would use either more sophisticated formal representations or more natural language representations . Formal Definition We define a world as a set of states and actions with Markovian dynamics ( an MDP without reward ) . We define an environment E as a distribution over a set of worldsW and hypothesesH . A world W ∈ W is specified by rules LW describing the dynamics of the world . We define this reward-less MDP of one specific world W as MDPW = { SW , AW , TW } where state space SW includes the position and state of objects in the world ( e.g . the placement of the agents and the object ) , AW is the action space , and TW is the transition function . Note that TW depends on LW , the rules of this specific world . Actions have different consequences depending on LW . Now E is an episodic POMDP where each episode consists of sampling1 a W and h. ( G is a groundtruth function that takes in the hypothesis h and worldW and outputs { true , false } . In this work , hypotheses are generated via templated language and their truth function G depends on W , more specifically LW . The episode ends when the agent executes either the true or false action . 
1See Appendix B for details on sampling procedures Given a world W and hypothesis h , an agent gets reward : RHyp = { +1 a = G ( h , W ) −1 a = ¬G ( h , W ) 0 otherwise The observation in this POMDP is o = ( sW , h ) , the state from the world W plus the hypothesis . The state is s = ( sW , h , LW ) . This includes the rule LW which is not visible in the observation . The action space is just AW ∪ { true , false } for any W ( they are the same for a given environment ) ; and T = TW . Note that the transition function T depends on the ( hidden ) LW . The goal of hypothesis verification is now to discover the truth of h , which depends on LW .
This paper introduces a problem setting where an RL agent must interact with its environment to predict whether a given hypothesis is true or false. On modified versions of environments like gridworld and cartpole, they show that PPO with a sparse reward is unable to correctly test the hypothesis. The key technical contribution is making the reward more dense by assuming a predefined structure in a subset of hypotheses. This denser reward is used to pretrain the RL policy, which is then finetuned over the rest of the hypotheses using sparse reward.
Empirically Verifying Hypotheses Using Reinforcement Learning
This paper formulates hypothesis verification as an RL problem . Specifically , we aim to build an agent that , given a hypothesis about the dynamics of the world , can take actions to generate observations which can help predict whether the hypothesis is true or false . Existing RL algorithms fail to solve this task , even for simple environments . We formally define this problem , and develop environments to test different algorithms ' performance . We analyze methods which are given additional pre-training rewards and find that the most effective of these is one that exploits the underlying structure of many hypotheses , factorizing them as { pre-condition , action sequence , post-condition } triplets . By leveraging this structure we show that RL agents are able to succeed . Furthermore , subsequent fine-tuning of the policies allows the agent to correctly verify hypotheses not amenable to this factorization . 1 INTRODUCTION . Empirical research on early learning Gopnik ( 2012 ) ; Kushnir & Gopnik ( 2005 ) shows that infants build an understanding of the world by constantly formulating hypotheses about how some physical aspect of the world might work and then proving or disproving them through deliberate play . Through this process the child builds up a consistent causal understanding of the world . This contrasts with the manner in which current ML systems operate . Both traditional i.i.d . and interactive learning settings use a single user-specified objective function that codifies a high-level task , and the optimization routine finds the set of parameters ( weights ) which maximizes performance on the task . The learned representation ( knowledge of how the world works ) is embedded in the weights of the model , which makes it hard to inspect the representation , form hypotheses about it , or enforce domain constraints that might exist . On the other hand , hypothesis generation and testing is a process explored in classical approaches to AI Brachman & Levesque ( 2004 ) .
In this paper we take a modest step towards the classical AI problem of building agents capable of testing hypotheses about their world using modern ML approaches . The problem we address is illustrated in Figure 1 . Agents are placed in a world which has several interactive elements . They are provided with a hypothesis ( an `` action sentence '' Pearl ( 2009 ) ) about the underlying mechanics of the world via a text string ( e.g . `` A will be true if we do B '' ) . The task is to determine if the hypothesis is true or not . This problem cannot be solved without interaction with a dynamic world ( comparing the state before and after taking action B ) . A key novelty in our work is formulating the task in a manner that permits the application of modern RL methods , allowing raw state observations to be used rather than abstract Boolean expressions of events . To do this , we use a model composed of two different deep parametric functions which are learned through interaction : ( i ) a policy that generates observations relevant to verification of the hypothesis and ( ii ) a prediction function which uses the observations to predict whether it is true . We first show that agents trained end-to-end using deep RL cannot learn policies that generate observations to verify the hypothesis . To remedy this , we exploit the underlying structure of hypotheses – they can often be formulated as a triplet of a pre-condition ( P ) , an action sequence ( collectively B ) , and a post-condition ( A ) that is causally related to the pre-condition and actions . Using this structure , we can seed our action policy to learn behaviors which alter the truth of the pre-condition and post-condition . This allows agents to learn policies that can generate meaningful observations for training the prediction function . We further demonstrate that these policies can be adapted to verify more general hypotheses that do not necessarily fit the triplet structure .
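The ( P , B , A ) factorization can be illustrated with a short sketch. The template format and the function name below are assumptions for illustration only, not the paper's actual implementation:

```python
import re

def factorize_hypothesis(hypothesis):
    """Split a templated hypothesis of the assumed form
    'when <pre-condition> and then <actions> then <post-condition>'
    into a (P, B, A) triplet; return None if it does not fit."""
    m = re.fullmatch(r"when (.+?) and then (.+?) then (.+)", hypothesis)
    return m.groups() if m else None

triplet = factorize_hypothesis(
    "when you are at craftingtable and you have stick "
    "and then you craft then torch is made")
# P = "you are at craftingtable and you have stick"
# B = "you craft"
# A = "torch is made"
```

A hypothesis that does not match the assumed template (the "more general hypotheses" above) returns None, which is where the fine-tuning stage would take over.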
[Figure 1 hypothesis : “ when you are at craftingtable and you have stick and then you craft then torch is made ”]
Our experiments show that this approach outperforms naive RL and several flavors of intrinsic motivation designed to encourage the agent to interact with the objects of interest . 2 RELATED WORK . Knowledge representation and reasoning ( KRR ) Brachman & Levesque ( 2004 ) is a central theme of traditional AI . Commonsense reasoning Davis ( 1990 ) ; Davis & Marcus ( 2015 ) ; Liu & Singh ( 2004 ) approaches , e.g . CYC Lenat ( 1995 ) , codify everyday knowledge into a schema that permits inference and question answering . However , the underlying operations are logic-based and occur purely within the structured representation , having no mechanism for interaction with an external world . Expert systems Giarratano & Riley ( 1998 ) instead focus on narrow domains of knowledge , but are similarly self-contained . Logic-based planning methods Fikes & Nilsson ( 1971 ) ; Colaco & Sridharan ( 2015 ) generate abstract plans that could be regarded as action sequences for an agent . By contrast , our approach is statistical in nature , relying on Reinforcement Learning ( RL ) to guide the agent . Our approach builds on the recent interest Mao et al . ( 2019 ) ; Garcez et al . ( 2012 ) in neural-symbolic approaches that combine neural networks with symbolic representations . In particular , some recent works Zhang & Stone ( 2015 ) ; Lu et al . ( 2018 ) have attempted to combine RL with KRR , for tasks such as navigation and dialogue . These take the world dynamics learned by RL and make them usable in declarative form within the knowledge base , which is then used to improve the underlying RL policy . In contrast , in our approach , the role of RL is to verify a formal statement about the world . Our work also shares some similarity with Konidaris et al . ( 2018 ) , where ML methods are used to learn mappings from world states to representations a planner can use .
Causality and RL : There are now extensive and sophisticated formalizations of ( statistical ) causality Pearl ( 2009 ) . These provide a framework for an agent to draw conclusions about its world and verify hypotheses , as in this work . This is the approach taken in Dasgupta et al . ( 2019 ) , where RL is used to train an agent that operates directly on a causal Bayesian network ( CBN ) in order to predict the results of interventions on the values of its nodes . In contrast , the approach in this work is to sidestep this formalization , with the hope of training agents that test hypotheses without building explicit CBNs . Unlike Dasgupta et al . ( 2019 ) , our agents intervene on the actual world ( where interventions may take many actions ) , rather than the abstract CBN . Nevertheless , we find that it is necessary to add inductive bias to the training of the agent ; here we use pretraining on ( P , B , A ) triplets . These approaches are complementary ; one could combine explicit generation and analysis of CBNs as an abstract representation of an environment with our training protocols . Our work is thus most similar to Denil et al . ( 2016 ) , which uses RL directly on the world , and the agent gets reward for answering questions that require experimentation . However , in that work ( and in Dasgupta et al . ( 2019 ) ) , the “ question ” in each world is the same ; thus , while learning to interact led to higher answer accuracy , random experimental policies could still find correct answers . In this work , on the other hand , the space of questions possible for any given world is combinatorial , and random experimentation ( and indeed vanilla RL ) is insufficient to answer questions .
Cognitive development : Empirical research on early learning Gopnik ( 2012 ) ; Kushnir & Gopnik ( 2005 ) shows that infants build an understanding of the world in ways that parallel the scientific process : constantly formulating hypotheses about how some physical aspect of the world might work and then proving or disproving them through deliberate play . Through this process the child builds up an abstract , consistent causal understanding of the world . Violations of this understanding elicit measurable surprise Spelke et al . ( 1992 ) . Automated Knowledge Base completion : This work is also related to knowledge base completion Fader et al . ( 2011 ) ; Bordes et al . ( 2013 ) ; Suchanek et al . ( 2007 ) , especially as formulated in Riedel et al . ( 2013 ) . Instead of using facts in the knowledge base or a text corpus to predict edges , here the agent acts in a world and observes the results of its actions . This recalls Mitchell et al . ( 2018 ) , where the system verifies facts it had hypothesized by searching for corroboration in the corpus . Automation of the scientific process : This has been tried in several domains . Robotic exploration of chemical reactivity was demonstrated Granda et al . ( 2018 ) using ML techniques . King et al . ( 2009 ) developed a robot scientist that explored genomics hypotheses about yeast and experimentally tested them using laboratory automation . In biochemistry , Vanlier et al . ( 2014 ) used Bayesian methods for optimal experiment design . More generally , the Automated Statistician project Steinruecken et al . ( 2019 ) uses a Bayesian approach to reason about different hypotheses for explaining the data , with the aim of creating interpretable knowledge . Embodied Question and Answering : The problem studied in this paper is closely related to the embodied visual question-answering problem in Das et al . ( 2018 ) .
Indeed , our basic formulation is a particular case of the most general formulation of embodied QA , as the agent is rewarded for successfully answering questions about the world that require interaction . However , the form of the questions is different from those considered in that work , as they may require drawing a conclusion about the dynamics of the world , rather than a static property . Even the questions about static properties we are interested in have a different flavor , as they encode rules , rather than statements about the current configuration . Our approach is built around the hypothesis-conclusion structure special to these questions . There is also a large body of work on visual QA Kafle & Kanan ( 2017 ) ; Wu et al . ( 2016a ) and text-based QA Rajpurkar et al . ( 2018 ) . Of these , most relevant to our work is Wu et al . ( 2016b ) , who use a structured knowledge base to augment standard QA techniques . 3 THE HYPOTHESIS VERIFICATION PROBLEM . An agent is spawned in a world sampled from a distribution over possible worlds . In the case of “ Crafting ” , shown in Figure 1 , there are items lying around that the agent can pick up and combine using a “ craft ” action . The exact dynamics change for every newly instantiated world ; so in one world , taking a craft action with a stick might produce a torch , and in another , it might produce a pickaxe . At the start of each episode , the agent is given a hypothesis about the world , such as the one shown at the top of Figure 1 . The agent gets a reward when it correctly answers whether that hypothesis is true or false . Because the dynamics and rules change each episode , the agent must learn to interact with the world in order to decide if the hypothesis is true . In Figure 1 the agent picks up the stick and performs a craft action to see that a torch is created . It then has enough information to decide that the hypothesis is true , and the agent receives reward for verifying the hypothesis correctly .
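The need for interaction can be made concrete with a toy sketch. The item names and the rule sampler below are illustrative assumptions, far simpler than the actual environment: the crafting rules are resampled for every new world, so the product of a craft action cannot be known without trying it.

```python
import random

def sample_world(seed):
    """Sample the hidden rules L_W: a fresh ingredient -> product mapping."""
    rng = random.Random(seed)
    ingredients = ["stick", "stone", "iron"]
    products = ["torch", "pickaxe", "sword"]
    rng.shuffle(products)
    return dict(zip(ingredients, products))

def craft(rules, item):
    """Apply the world's craft action to an item the agent holds."""
    return rules.get(item)

# The same action can have different outcomes in different worlds, so a
# hypothesis like "crafting a stick makes a torch" must be tested in-world.
rules_a, rules_b = sample_world(0), sample_world(1)
```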
In this work , we will structure our hypotheses using templated language . One could imagine using more expansive formal symbolic systems ( e.g . first order logic ) , or alternatively , using natural language descriptions of the hypotheses . The former might allow interfacing with symbolic solvers or otherwise using combinatorial approaches ; whereas the latter would allow scaling annotation to untrained humans . We choose templated language because it is simple , and sufficient for the environments on which we test , which are already challenging for standard RL . Moreover , in our view it is a good starting point for further work that would use either more sophisticated formal representations or more natural language representations . Formal Definition We define a world as a set of states and actions with Markovian dynamics ( an MDP without reward ) . We define an environment E as a distribution over a set of worlds W and hypotheses H . A world W ∈ W is specified by rules LW describing the dynamics of the world . We define this reward-less MDP of one specific world W as MDPW = { SW , AW , TW } , where the state space SW includes the position and state of objects in the world ( e.g . the placement of the agent and the objects ) , AW is the action space , and TW is the transition function . Note that TW depends on LW , the rules of this specific world . Actions have different consequences depending on LW . Now E is an episodic POMDP where each episode consists of sampling¹ a W and h . G is a ground-truth function that takes in the hypothesis h and world W and outputs { true , false } . In this work , hypotheses are generated via templated language and their truth function G depends on W , more specifically LW . The episode ends when the agent executes either the true or false action .
¹ See Appendix B for details on sampling procedures .
Given a world W and hypothesis h , an agent gets reward :

R_{\mathrm{Hyp}} = \begin{cases} +1 & a = G ( h , W ) \\ -1 & a = \neg G ( h , W ) \\ 0 & \text{otherwise} \end{cases}

The observation in this POMDP is o = ( sW , h ) , the state from the world W plus the hypothesis . The full state is s = ( sW , h , LW ) ; this includes the rule LW , which is not visible in the observation . The action space is AW ∪ { true , false } for any W ( the action spaces are the same for a given environment ) ; and T = TW . Note that the transition function T depends on the ( hidden ) LW . The goal of hypothesis verification is now to discover the truth of h , which depends on LW .
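The episode reward R_Hyp is a few lines of code. This is a minimal sketch with assumed names; G is the ground-truth function from the text:

```python
def hypothesis_reward(action, h, world, G):
    """R_Hyp: +1 for answering correctly, -1 for answering incorrectly,
    0 for any ordinary world action (the episode then continues)."""
    if action not in ("true", "false"):
        return 0.0
    correct = "true" if G(h, world) else "false"
    return 1.0 if action == correct else -1.0
```

Note the sparsity: the only non-zero reward arrives at the terminal answer, which is why interaction policies are hard to learn with vanilla RL.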
This paper considers the general problem of testing hypotheses about the world via reinforcement learning, much as a person might learn by taking actions and observing their outcomes; equivalently, it learns policies that can generate observations to validate a hypothesis. A hypothesis is a symbolic representation of a precondition, an action sequence, and an effect. The paper exploits the ability of reinforcement learning to manipulate the environment, which makes it possible to draw causal inferences about the world.
Learning with Feature-Dependent Label Noise: A Progressive Approach
1 INTRODUCTION . Addressing noise in training set labels is an important problem in supervised learning . Incorrect annotation of data is inevitable in large-scale data collection , due to intrinsic ambiguity of the data/class and mistakes of human/automatic annotators ( Yan et al. , 2014 ; Andreas et al. , 2017 ) . Developing methods that are resilient to label noise is therefore crucial in real-life applications . Classical approaches adopt a rather simplistic i.i.d . assumption on the label noise , i.e. , the label corruption is independent and identically distributed and thus feature-independent . Methods based on this assumption either explicitly estimate the noise pattern ( Reed et al. , 2014 ; Patrini et al. , 2017 ; Dan et al. , 2019 ; Xu et al. , 2019 ) or introduce extra regularizer/loss terms ( Natarajan et al. , 2013 ; Van Rooyen et al. , 2015 ; Xiao et al. , 2015 ; Zhang & Sabuncu , 2018 ; Ma et al. , 2018 ; Arazo et al. , 2019 ; Shen & Sanghavi , 2019 ) . Some results prove that commonly used losses are naturally robust against such i.i.d . label noise ( Manwani & Sastry , 2013 ; Ghosh et al. , 2015 ; Gao et al. , 2016 ; Ghosh et al. , 2017 ; Charoenphakdee et al. , 2019 ; Hu et al. , 2020 ) . Although these methods come with theoretical guarantees , they usually do not perform as well as expected in practice , due to the unrealistic i.i.d . assumption on the noise . This is likely because label noise is heterogeneous and feature-dependent . A cat with an intrinsically ambiguous appearance is more likely to be mislabeled as a dog . An image with poor lighting or severe occlusion can be mislabeled , as important visual clues are imperceptible . Methods that can combat label noise of a much more general form are very much needed to address real-world challenges . To adapt to heterogeneous label noise , state-of-the-art methods ( SOTAs ) often resort to a data-recalibrating strategy .
They progressively identify trustworthy data or correct data labels , and then train using these data ( Tanaka et al. , 2018 ; Wang et al. , 2018 ; Lu et al. , 2018 ; Li et al. , 2019 ) . The models gradually improve as more clean data are collected or more labels are corrected , eventually converging to models of high accuracy . These data-recalibrating methods best leverage the learning power of deep neural nets and achieve superior performance in practice . However , their underlying mechanism remains a mystery . No methods in this category can provide theoretical insights as to why the model can converge to an ideal one . Thus , these methods require careful hyperparameter tuning and are hard to generalize . In this paper , we propose a novel and principled method that specifically targets heterogeneous , feature-dependent label noise . Unlike previous methods , we target a much more general family of noise , called Polynomial Margin Diminishing ( PMD ) label noise . In this noise family , we allow an arbitrary noise level except for data far away from the true decision boundary . This is consistent with the real-world scenario ; data near the decision boundary are harder to distinguish and more likely to be mislabeled . Meanwhile , a datum far away from the decision boundary is a typical example of its true class and should have a reasonably bounded noise level . Assuming this new PMD noise family , we propose a theoretically-guaranteed data-recalibrating algorithm that gradually corrects labels based on the noisy classifier ’ s confidence . We start from data points with high confidence , and correct the labels of these data using the predictions of the noisy classifier . Next , the model is improved using the cleaned labels . We continue alternating the label correction and model improvement until convergence . See Figure 1 for an illustration .
∗ Equal contributions .
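One round of the alternation can be sketched as follows. The confidence criterion and the threshold here are simplified assumptions for illustration; the paper's actual correction criterion is the theory-informed one of its main theorem:

```python
import numpy as np

def correct_labels(scores, noisy_labels, threshold):
    """Flip a noisy label to the classifier's prediction wherever the
    classifier f(x) (scores in [0, 1]) is sufficiently confident."""
    preds = (scores >= 0.5).astype(int)
    confidence = 2.0 * np.abs(scores - 0.5)  # 0 at the boundary, 1 at the extremes
    corrected = np.where(confidence > threshold, preds, noisy_labels)
    return corrected

# The full loop would alternate: train on `corrected`, recompute `scores`,
# lower `threshold` to admit less-confident points, and repeat.
```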
Our main theorem shows that with a theory-informed criterion for label correction at each iteration , the improvement of the label purity is guaranteed . Thus the model is guaranteed to improve at a sufficient rate through iterations and eventually becomes consistent with the Bayes optimal classifier . Besides the theoretical strength , we also demonstrate the power of our method in practice . Our method outperforms others on CIFAR-10/100 with various synthetic noise patterns . We also evaluate our method against SOTAs on three real-world datasets with unknown noise patterns . To the best of our knowledge , our method is the first data-recalibrating method that is theoretically guaranteed to converge to an ideal model . The PMD noise family encompasses a broad spectrum of heterogeneous and feature-dependent noise , and better approximates the real-world scenario . It also provides a novel theoretical setting for the study of label noise . Related works . We review works that do not assume an i.i.d . label noise . Menon et al . ( 2018 ) generalized the work of ( Ghosh et al. , 2015 ) and provided an elegant theoretical framework , showing that loss functions fulfilling certain conditions naturally resist instance-dependent noise . The method can achieve even better theoretical properties ( i.e. , Bayes-consistency ) with a stronger assumption on the clean posterior probability η . In practice , this method has not been extended to deep neural networks . Cheng et al . ( 2020 ) proposed an active learning method for instance-dependent label noise . The algorithm iteratively queries clean labels from an oracle on carefully selected data . However , this approach is not applicable to settings where kosher annotations are unavailable . Another contemporary work ( Chen et al. , 2021 ) showed that the noise in real-world datasets is unlikely to be i.i.d.
, and proposed to fix the noisy labels by averaging the network predictions on each instance over the whole training process . While effective , their method lacks theoretical guarantees . Chen et al . ( 2019 ) showed that by regulating the topology of a classifier ’ s decision boundary , one can improve the model ’ s robustness against label noise . Data-recalibrating methods use noisy networks ’ predictions to iteratively select/correct data and improve the models . Tanaka et al . ( 2018 ) introduced a joint training framework which simultaneously enforces the network to be consistent with its own predictions and corrects the noisy labels during training . Wang et al . ( 2018 ) identified noisy labels as outliers based on their label consistency with surrounding data . Lu et al . ( 2018 ) used a curriculum learning strategy where the teacher net is trained on a small kosher dataset to determine if a datum is clean ; the learnt curriculum , which gives a weight to each datum , is then fed into the student net for training and inference . ( Yu et al. , 2019 ; Bo et al. , 2018 ) trained two synchronized networks ; the confidence and consistency of the two networks are utilized to identify clean data . Wu et al . ( 2020 ) selected clean data by investigating the topological structures of the training data in the learned feature space . For completeness , we also refer to other methods of similar design ( Li et al. , 2017 ; Vahdat , 2017 ; Andreas et al. , 2017 ; Ma et al. , 2018 ; Thulasidasan et al. , 2019 ; Arazo et al. , 2019 ; Shu et al. , 2019 ; Yi & Wu , 2019 ) . As for theoretical guarantees , Ren et al . ( 2018 ) proposed an algorithm that iteratively re-weights each data point by solving an optimization problem . They proved the convergence of the training , but provided no guarantee that the model converges to an ideal one . Amid et al . ( 2019b ) generalized the work of ( Amid et al. , 2019a ) and proposed a tempered matching loss .
They showed that when the final softmax layer is replaced by the bi-tempered loss , the resulting classifier will be Bayes consistent . Zheng et al . ( 2020 ) proved a one-shot guarantee for their data-recalibrating method ; but the convergence of the model is not guaranteed . Our method is the first data-recalibrating method which is guaranteed to converge to a well-behaved classifier . 2 METHOD . We start by introducing the family of Poly-Margin Diminishing ( PMD ) label noise . In Section 2.2 , we present our main algorithm . Finally , we prove the correctness of our algorithm in Section 3 . Notations and preliminaries . Although the noise setting and algorithm naturally generalize to multiclass , for simplicity we focus on binary classification . Let the feature space be X . We assume the data ( x , y ) is sampled from an underlying distribution D on X × { 0 , 1 } . Define the posterior probability η ( x ) = P [ y = 1 | x ] . Let τ_{0,1} ( x ) = P [ ỹ = 1 | y = 0 , x ] and τ_{1,0} ( x ) = P [ ỹ = 0 | y = 1 , x ] be the noise functions , where ỹ denotes the corrupted label . For example , if a datum x has true label y = 0 , it has τ_{0,1} ( x ) chance to be corrupted to 1 . Similarly , it has τ_{1,0} ( x ) chance to be corrupted from 1 to 0 . Let η̃ ( x ) = P [ ỹ = 1 | x ] be the noisy posterior probability of ỹ = 1 given feature x . Let η∗ ( x ) = I { η ( x ) ≥ 1/2 } be the ( clean ) Bayes optimal classifier , where I_A equals 1 if A is true , and 0 otherwise . Finally , let f ( x ) : X → [ 0 , 1 ] be the classifier scoring function ( the softmax output of a neural network in this paper ) . 2.1 POLY-MARGIN DIMINISHING NOISE . We first introduce the family of noise functions τ this paper will address . We introduce the concept of polynomial margin diminishing noise ( PMD noise ) , which only upper bounds the noise τ in a certain level set of η ( x ) , thus allowing τ to be arbitrarily high outside the restricted domain .
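The noise functions τ_{0,1} and τ_{1,0} define a simple corruption process, sketched here with assumed toy inputs:

```python
import numpy as np

def corrupt_labels(y, tau01, tau10, rng):
    """Flip clean labels y in {0, 1}: a 0 flips to 1 with probability
    tau01(x), and a 1 flips to 0 with probability tau10(x), given
    per-example arrays of those probabilities."""
    u = rng.random(y.shape)
    flip = np.where(y == 0, u < tau01, u < tau10)
    return np.where(flip, 1 - y, y)

rng = np.random.default_rng(0)
y = np.zeros(1000, dtype=int)
noisy = corrupt_labels(y, np.full(1000, 0.3), np.full(1000, 0.0), rng)
# roughly 30% of the zeros are flipped to ones
```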
This formulation not only covers the feature-independent scenario but also generalizes the scenarios proposed by ( Du & Cai , 2015 ; Menon et al. , 2018 ; Cheng et al. , 2020 ) . Definition 1 ( PMD noise ) . A pair of noise functions τ_{0,1} ( x ) and τ_{1,0} ( x ) are polynomial-margin diminishing ( PMD ) if there exist constants t_0 ∈ ( 0 , 1/2 ) and c_1 , c_2 > 0 such that :

τ_{1,0} ( x ) ≤ c_1 [ 1 − η ( x ) ]^{1+c_2} , ∀ η ( x ) ≥ 1/2 + t_0 , and
τ_{0,1} ( x ) ≤ c_1 η ( x )^{1+c_2} , ∀ η ( x ) ≤ 1/2 − t_0 . ( 1 )

We abuse notation by referring to t_0 as the “ margin ” of τ . Note that the PMD condition only requires the upper bound on τ to be polynomial and monotonically decreasing in the region where the Bayes classifier is fairly confident . For the region { x : | η ( x ) − 1/2 | < t_0 } , we allow both τ_{0,1} ( x ) and τ_{1,0} ( x ) to be arbitrary . Figure 2 ( d ) illustrates the upper bound ( orange curve ) and a sample noise function ( blue curve ) . We also show the corrupted data according to this noise function ( black points are the clean data whereas red points are the data with corrupted labels ) . The PMD noise family is much more general than existing noise assumptions . For example , the boundary consistent noise ( BCN ) ( Du & Cai , 2015 ; Menon et al. , 2018 ) assumes a noise function that monotonically decreases as the data move away from the decision boundary . See Figure 2 ( c ) for an illustration . This noise is much more restrictive compared to our PMD noise , which ( 1 ) only requires a monotonic upper bound , and ( 2 ) allows arbitrary noise strength in a wide buffer near the decision boundary . Figure 2 ( b ) shows a traditional feature-independent noise pattern ( Reed et al. , 2014 ; Patrini et al. , 2017 ) , which assumes τ_{0,1} ( x ) ( resp . τ_{1,0} ( x ) ) to be a constant independent of x .
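Definition 1 can be checked numerically. The sketch below (the constants c_1, c_2, t_0 are arbitrary illustrative choices, not values from the paper) returns the upper bound on τ_{1,0}(x) implied by equation (1); inside the margin the bound is vacuous:

```python
import numpy as np

def pmd_noise_cap(eta, c1=0.5, c2=1.0, t0=0.1):
    """Upper bound on tau_{1,0}(x) from Definition 1: polynomially
    diminishing where eta(x) >= 1/2 + t0, unconstrained (cap = 1)
    inside the margin."""
    eta = np.atleast_1d(np.asarray(eta, dtype=float))
    cap = np.ones_like(eta)                  # arbitrary noise allowed
    confident = eta >= 0.5 + t0
    cap[confident] = c1 * (1.0 - eta[confident]) ** (1.0 + c2)
    return cap
```

For instance, a very confident point (eta = 0.9) gets a small cap, while a point inside the margin (eta = 0.55) is unconstrained.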
The paper presents a learning method for the scenario of feature-dependent label noise. A framework in which label noise diminishes away from the decision boundary is established, and a relabeling strategy based on it, which relabels highly confident points, is proposed. The method is a straightforward adaptive approach which the authors explore in detail, both theoretically and empirically.
Learning with Feature-Dependent Label Noise: A Progressive Approach
1 INTRODUCTION . Addressing noise in training set labels is an important problem in supervised learning . Incorrect annotation of data is inevitable in large-scale data collection , due to intrinsic ambiguity of data/class and mistakes of human/automatic annotators ( Yan et al. , 2014 ; Andreas et al. , 2017 ) . Developing methods that are resilient to label noise is therefore crucial in real-life applications . Classical approaches take a rather simplistic i.i.d . assumption on the label noise , i.e. , the label corruption is independent and identically distributed and thus is feature-independent . Methods based on this assumption either explicitly estimate the noise pattern ( Reed et al. , 2014 ; Patrini et al. , 2017 ; Dan et al. , 2019 ; Xu et al. , 2019 ) or introduce extra regularizer/loss terms ( Natarajan et al. , 2013 ; Van Rooyen et al. , 2015 ; Xiao et al. , 2015 ; Zhang & Sabuncu , 2018 ; Ma et al. , 2018 ; Arazo et al. , 2019 ; Shen & Sanghavi , 2019 ) . Some results prove that the commonly used losses are naturally robust against such i.i.d . label noise ( Manwani & Sastry , 2013 ; Ghosh et al. , 2015 ; Gao et al. , 2016 ; Ghosh et al. , 2017 ; Charoenphakdee et al. , 2019 ; Hu et al. , 2020 ) . Although these methods come with theoretical guarantees , they usually do not perform as well as expected in practice due to the unrealistic i.i.d . assumption on noise . This is likely because label noise is heterogeneous and feature-dependent . A cat with an intrinsically ambiguous appearance is more likely to be mislabeled as a dog . An image with poor lighting or severe occlusion can be mislabeled , as important visual clues are imperceptible . Methods that can combat label noise of a much more general form are very much needed to address real-world challenges . To adapt to the heterogeneous label noise , state-of-the-arts ( SOTAs ) often resort to a data-recalibrating strategy . 
They progressively identify trustworthy data or correct data labels , and then train using these data ( Tanaka et al. , 2018 ; Wang et al. , 2018 ; Lu et al. , 2018 ; Li et al. , 2019 ) . The models gradually improve as more clean data are collected or more labels are corrected , eventually converging to models of high accuracy . These data-recalibrating methods best leverage the learning power of deep neural nets and achieve superior performance in practice . However , their underlying mechanism remains a mystery . No methods in this category can provide theoretical insights as to why the model can converge to an ideal one . Thus , these methods require careful hyperparameter tuning and are hard to generalize . In this paper , we propose a novel and principled method that specifically targets heterogeneous , feature-dependent label noise . Unlike previous methods , we target a much more general family of noise , called Polynomial Margin Diminishing ( PMD ) label noise . In this noise family , we allow arbitrary noise levels except for data far away from the true decision boundary . This is consistent with the real-world scenario ; data near the decision boundary are harder to distinguish and more likely to be mislabeled . Meanwhile , a datum far away from the decision boundary is a typical example of its true class and should have a reasonably bounded noise level . Assuming this new PMD noise family , we propose a theoretically-guaranteed data-recalibrating algorithm that gradually corrects labels based on the noisy classifier ’ s confidence . We start from data points with high confidence , and correct the labels of these data using the predictions of the noisy classifier . Next , the model is improved using the cleaned labels . We continue alternating the label correction and model improvement until convergence . See Figure 1 for an illustration . 
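The alternating correct-and-retrain loop just described can be sketched in a few lines. This is a minimal illustration, not the authors' algorithm: the toy midpoint classifier and the decreasing confidence-threshold schedule stand in for the paper's trained network and its theory-informed correction criterion.

```python
import numpy as np

def fit(X, y):
    # toy 1-D "classifier": decision boundary at the midpoint of class means
    return 0.5 * (X[y == 0, 0].mean() + X[y == 1, 0].mean())

def predict_proba(boundary, X):
    # sigmoid confidence around the boundary (slope 10 is arbitrary)
    return 1.0 / (1.0 + np.exp(-10.0 * (X[:, 0] - boundary)))

def progressive_correction(X, y_noisy, thresholds=(0.95, 0.9, 0.85, 0.8)):
    """Alternate training and high-confidence label correction. The
    decreasing threshold schedule is an illustrative stand-in for the
    paper's theory-informed criterion."""
    y = y_noisy.copy()
    boundary = fit(X, y)
    for theta in thresholds:
        p = predict_proba(boundary, X)
        y[(p >= theta) & (y == 0)] = 1      # confidently positive: correct label
        y[(p <= 1 - theta) & (y == 1)] = 0  # confidently negative: correct label
        boundary = fit(X, y)                # retrain on the cleaned labels
    return boundary, y

# demo: true label = 1[x > 0], 20% of labels flipped uniformly at random
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 1))
y_true = (X[:, 0] > 0).astype(int)
y_noisy = y_true.copy()
y_noisy[rng.choice(500, size=100, replace=False)] ^= 1
_, y_clean = progressive_correction(X, y_noisy)
acc_before = (y_noisy == y_true).mean()
acc_after = (y_clean == y_true).mean()
```

Note that correct labels far from the boundary are never touched: the model agrees with them, so neither correction condition fires; only confident disagreements are flipped, mirroring the progressive trust placed in the model.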
Our main theorem shows that with a theory-informed criterion for label correction at each iteration , the improvement of the label purity is guaranteed . Thus the model is guaranteed to improve at a sufficient rate through the iterations and eventually becomes consistent with the Bayes optimal classifier . Besides the theoretical strength , we also demonstrate the power of our method in practice . Our method outperforms others on CIFAR-10/100 with various synthetic noise patterns . We also evaluate our method against SOTAs on three real-world datasets with unknown noise patterns . To the best of our knowledge , our method is the first data-recalibrating method that is theoretically guaranteed to converge to an ideal model . The PMD noise family encompasses a broad spectrum of heterogeneous and feature-dependent noise , and better approximates the real-world scenario . It also provides a novel theoretical setting for the study of label noise . Related works . We review works that do not assume i.i.d . label noise . Menon et al . ( 2018 ) generalized the work of ( Ghosh et al. , 2015 ) and provided an elegant theoretical framework , showing that loss functions fulfilling certain conditions naturally resist instance-dependent noise . The method can achieve even better theoretical properties ( i.e. , Bayes-consistency ) with stronger assumptions on the clean posterior probability η . In practice , this method has not been extended to deep neural networks . Cheng et al . ( 2020 ) proposed an active learning method for instance-dependent label noise . The algorithm iteratively queries clean labels from an oracle on carefully selected data . However , this approach is not applicable to settings where kosher annotations are unavailable . Another contemporary work ( Chen et al. , 2021 ) showed that the noise in real-world datasets is unlikely to be i.i.d. 
, and proposed to fix the noisy labels by averaging the network predictions on each instance over the whole training process . While being effective , their method lacks theoretical guarantees . Chen et al . ( 2019 ) showed that by regulating the topology of a classifier ’ s decision boundary , one can improve the model ’ s robustness against label noise . Data-recalibrating methods use noisy networks ’ predictions to iteratively select/correct data and improve the models . Tanaka et al . ( 2018 ) introduced a joint training framework which simultaneously enforces the network to be consistent with its own predictions and corrects the noisy labels during training . Wang et al . ( 2018 ) identified noisy labels as outliers based on their label consistency with surrounding data . Lu et al . ( 2018 ) used a curriculum learning strategy where the teacher net is trained on a small kosher dataset to determine whether a datum is clean ; the learnt curriculum , which assigns a weight to each datum , is then fed into the student net for training and inference . ( Yu et al. , 2019 ; Bo et al. , 2018 ) trained two synchronized networks ; the confidence and consistency of the two networks are utilized to identify clean data . Wu et al . ( 2020 ) selected the clean data by investigating the topological structures of the training data in the learned feature space . For completeness , we also refer to other methods of similar design ( Li et al. , 2017 ; Vahdat , 2017 ; Andreas et al. , 2017 ; Ma et al. , 2018 ; Thulasidasan et al. , 2019 ; Arazo et al. , 2019 ; Shu et al. , 2019 ; Yi & Wu , 2019 ) . As for theoretical guarantees , Ren et al . ( 2018 ) proposed an algorithm that iteratively re-weights each data point by solving an optimization problem . They proved the convergence of the training , but provided no guarantee that the model converges to an ideal one . Amid et al . ( 2019b ) generalized the work of ( Amid et al. , 2019a ) and proposed a tempered matching loss . 
They showed that when the final softmax layer is replaced by the bi-tempered loss , the resulting classifier will be Bayes consistent . Zheng et al . ( 2020 ) proved a one-shot guarantee for their data-recalibrating method , but the convergence of the model is not guaranteed . Our method is the first data-recalibrating method which is guaranteed to converge to a well-behaved classifier . 2 METHOD . We start by introducing the family of Poly-Margin Diminishing ( PMD ) label noise . In Section 2.2 , we present our main algorithm . Finally , we prove the correctness of our algorithm in Section 3 . Notations and preliminaries . Although the noise setting and algorithm naturally generalize to multiclass , for simplicity we focus on binary classification . Let the feature space be X . We assume the data ( x , y ) are sampled from an underlying distribution D on X × { 0 , 1 } . Define the posterior probability η ( x ) = P [ y = 1 | x ] . Let τ0,1 ( x ) = P [ ỹ = 1 | y = 0 , x ] and τ1,0 ( x ) = P [ ỹ = 0 | y = 1 , x ] be the noise functions , where ỹ denotes the corrupted label . For example , if a datum x has true label y = 0 , it has a τ0,1 ( x ) chance of being corrupted to 1 . Similarly , it has a τ1,0 ( x ) chance of being corrupted from 1 to 0 . Let η̃ ( x ) = P [ ỹ = 1 | x ] be the noisy posterior probability of ỹ = 1 given feature x . Let η∗ ( x ) = I { η ( x ) ≥ 1/2 } be the ( clean ) Bayes optimal classifier , where I { A } equals 1 if A is true , and 0 otherwise . Finally , let f ( x ) : X → [ 0 , 1 ] be the classifier scoring function ( the softmax output of a neural network in this paper ) . 2.1 POLY-MARGIN DIMINISHING NOISE . We first introduce the family of noise functions τ this paper will address . We introduce the concept of polynomial margin diminishing noise ( PMD noise ) , which only upper bounds the noise τ in a certain level set of η ( x ) , thus allowing τ to be arbitrarily high outside the restricted domain . 
This formulation not only covers the feature-independent scenario but also generalizes the scenarios proposed by ( Du & Cai , 2015 ; Menon et al. , 2018 ; Cheng et al. , 2020 ) . Definition 1 ( PMD noise ) . A pair of noise functions τ0,1 ( x ) and τ1,0 ( x ) are polynomial-margin diminishing ( PMD ) if there exist constants t0 ∈ ( 0 , 1/2 ) and c1 , c2 > 0 such that : τ1,0 ( x ) ≤ c1 [ 1 − η ( x ) ] ^ ( 1+c2 ) for all η ( x ) ≥ 1/2 + t0 , and τ0,1 ( x ) ≤ c1 η ( x ) ^ ( 1+c2 ) for all η ( x ) ≤ 1/2 − t0 . ( 1 ) We abuse notation by referring to t0 as the “ margin ” of τ . Note that the PMD condition only requires the upper bound on τ to be polynomial and monotonically decreasing in the region where the Bayes classifier is fairly confident . For the region { x : | η ( x ) − 1/2 | < t0 } , we allow both τ0,1 ( x ) and τ1,0 ( x ) to be arbitrary . Figure 2 ( d ) illustrates the upper bound ( orange curve ) and a sample noise function ( blue curve ) . We also show the corrupted data according to this noise function ( black points are the clean data whereas red points are the data with corrupted labels ) . The PMD noise family is much more general than existing noise assumptions . For example , the boundary consistent noise ( BCN ) ( Du & Cai , 2015 ; Menon et al. , 2018 ) assumes a noise function that monotonically decreases as the data move away from the decision boundary . See Figure 2 ( c ) for an illustration . This noise is much more restrictive compared to our PMD noise , which ( 1 ) only requires a monotonic upper bound , and ( 2 ) allows arbitrary noise strength in a wide buffer near the decision boundary . Figure 2 ( b ) shows a traditional feature-independent noise pattern ( Reed et al. , 2014 ; Patrini et al. , 2017 ) , which assumes τ0,1 ( x ) ( resp . τ1,0 ( x ) ) to be a constant independent of x .
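To make Definition 1 concrete, the following sketch samples corruption probabilities that satisfy the PMD bounds and then corrupts labels accordingly. The particular constants (t0, c1, c2) and the uniform noise drawn inside the margin are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pmd_noise_rates(eta, t0=0.1, c1=0.5, c2=1.0, rng=None):
    """Sample noise rates consistent with Definition 1: outside the margin
    the rate is set to the diminishing polynomial envelope, while inside
    |eta - 1/2| < t0 it is arbitrary (here: uniform below 1/2)."""
    rng = np.random.default_rng() if rng is None else rng
    eta = np.asarray(eta, dtype=float)
    arbitrary = rng.uniform(0.0, 0.49, eta.shape)   # unconstrained near boundary
    tau01 = np.where(eta <= 0.5 - t0, c1 * eta ** (1.0 + c2), arbitrary)
    tau10 = np.where(eta >= 0.5 + t0, c1 * (1.0 - eta) ** (1.0 + c2), arbitrary)
    return tau01, tau10

def corrupt_labels(y, tau01, tau10, rng=None):
    # flip 0 -> 1 with probability tau01(x) and 1 -> 0 with probability tau10(x)
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=y.shape)
    flip = np.where(y == 0, u < tau01, u < tau10)
    return np.where(flip, 1 - y, y)

# a confident point (eta = 0.05 or 0.95) gets at most c1 * 0.05^(1+c2) noise,
# while the point at eta = 0.5 may receive arbitrary noise below 1/2
eta = np.array([0.05, 0.5, 0.95])
tau01, tau10 = pmd_noise_rates(eta, rng=np.random.default_rng(0))
```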
Label noise occurs frequently in many real-world applications, and it can follow different distributions. A model built under the assumption of one particular noise distribution may fail to capture the discriminative information. Without assuming that the noise follows a specific distribution, the proposed method handles general noise; it mainly targets a new family of feature-dependent label noise, which is much more general than the commonly used i.i.d. label noise and encompasses a broad spectrum of noise patterns. The experimental results show that the proposed method is promising, and a theoretical analysis of the method is provided.
SP:470d98d23a746a65b18404aaabf1a15d34fc24fa
Non-Negative Bregman Divergence Minimization for Deep Direct Density Ratio Estimation
1 INTRODUCTION . The density ratio estimation ( DRE ) problem has attracted a great deal of attention as an essential task in data science for its various industrial applications , such as domain adaptation ( Shimodaira , 2000 ; Plank et al. , 2014 ; Reddi et al. , 2015 ) , learning with noisy labels ( Liu & Tao , 2014 ; Fang et al. , 2020 ) , anomaly detection ( Smola et al. , 2009 ; Hido et al. , 2011 ; Abe & Sugiyama , 2019 ) , two-sample testing ( Keziou & Leoni-Aubin , 2005 ; Kanamori et al. , 2010 ; Sugiyama et al. , 2011a ) , causal inference ( Kato et al. , 2020 ) , change point detection in time series ( Kawahara & Sugiyama , 2009 ) , and binary classification only from positive and unlabeled data ( PU learning ; Kato et al. , 2019 ) . For example , anomaly detection is not easy to perform with standard machine learning methods such as binary classification since anomalous data are often scarce , but it can be solved by estimating the density ratio when training data without anomalies as well as unlabeled test data are available ( Hido et al. , 2008 ) . Among the various approaches for DRE , we focus on the Bregman ( BR ) divergence minimization framework ( Bregman , 1967 ; Sugiyama et al. , 2011b ) , which is a generalization of various DRE methods , e.g. , moment matching ( Huang et al. , 2007 ; Gretton et al. , 2009 ) , probabilistic classification ( Qin , 1998 ; Cheng & Chu , 2004 ) , density matching ( Nguyen et al. , 2010 ; Yamada et al. , 2010 ) , and density-ratio fitting ( Kanamori et al. , 2009 ) . Recently , Kato et al . ( 2019 ) also proposed using the risk of PU learning for DRE , which can also be generalized from the BR divergence minimization viewpoint , as we show below . However , existing DRE methods mainly adopt a linear-in-parameter model for nonparametric DRE ( Kanamori et al. 
, 2012 ) and have rarely discussed the use of more flexible models , such as deep neural networks , while recent developments in machine learning suggest that deep neural networks can significantly improve performance on various tasks , such as computer vision ( Krizhevsky et al. , 2012 ) and natural language processing ( Bengio et al. , 2001 ) . This motivates us to use deep neural networks for DRE . However , existing DRE studies have not fully discussed using such state-of-the-art deep neural networks . For instance , although Nam & Sugiyama ( 2015 ) and Abe & Sugiyama ( 2019 ) proposed using neural networks for DRE , their neural networks are simple and shallow . When using deep neural networks in combination with empirical minimization of the BR divergence , we often observe a serious over-fitting problem , as demonstrated through experiments in Figure 2 of Section 5 . We hypothesize that this is mainly because there is no lower bound on the empirical BR divergence approximated from finite samples , i.e. , we can achieve an infinitely negative value in minimization . This hypothesis is based on Kiryo et al . ( 2017 ) , which reports a similar problem in PU learning . While Kiryo et al . ( 2017 ) call this phenomenon over-fitting , we refer to it as train-loss hacking because the nuance is a bit different from the standard meaning of overfitting . Here , we briefly introduce the train-loss hacking discussed in the PU learning literature ( Kiryo et al. , 2017 ) . In a standard binary classification problem , we train a classifier ψ by minimizing the following empirical risk using { ( yi , Xi ) } _{i=1}^{n} : ( 1/n ) ∑_{i=1}^{n} 1 [ yi = +1 ] ℓ ( ψ ( Xi ) ) + ( 1/n ) ∑_{i=1}^{n} 1 [ yi = −1 ] ℓ ( −ψ ( Xi ) ) , ( 1 ) where yi ∈ { ±1 } is a binary label , Xi is a feature , and ℓ is a loss function . On the other hand , in PU learning as formulated by du Plessis et al . 
( 2015 ) , because we only have positive data { ( y′i = +1 , X′i ) } _{i=1}^{n′} and unlabeled data { X″j } _{j=1}^{n″} , we minimize the following alternative empirical risk : ( π/n′ ) ∑_{i=1}^{n′} ℓ ( ψ ( X′i ) ) − ( π/n′ ) ∑_{i=1}^{n′} ℓ ( −ψ ( X′i ) ) + ( 1/n″ ) ∑_{j=1}^{n″} ℓ ( −ψ ( X″j ) ) , ( 2 ) where π is a hyper-parameter representing p ( y = +1 ) , and the second term is the cause of train-loss hacking . Note that the empirical risk ( 2 ) is an unbiased estimator of the population binary classification risk ( 1 ) ( du Plessis et al. , 2015 ) . While the empirical risk ( 1 ) of standard binary classification is lower bounded under an appropriate choice of ℓ , the empirical risk ( 2 ) of PU learning proposed by du Plessis et al . ( 2015 ) is not lower bounded owing to the existence of the second term . Therefore , if a model is sufficiently flexible , we can significantly minimize the empirical risk merely by minimizing the second term − ( π/n′ ) ∑_{i=1}^{n′} ℓ ( −ψ ( X′i ) ) without increasing the other terms . Kiryo et al . ( 2017 ) proposed a non-negative risk correction for avoiding this problem when using neural networks . We discuss this problem again in Section 2 and Figure 1 . In the existing DRE literature , this train-loss hacking has rarely been discussed , although we often face this problem when using neural networks , as mentioned in Section 5 . One reason for this is that the existing methods assume a linear-in-parameter model for the density ratio ( Kanamori et al. , 2012 ) , which is not as flexible as neural networks and does not cause the phenomenon . To mitigate the train-loss hacking , we propose a general procedure to modify the empirical BR divergence using prior knowledge of the upper bound of the density ratio . Our idea of the correction is inspired by Kiryo et al . ( 2017 ) . However , their idea of non-negative correction is only immediately applicable to binary classification ; thus we require a non-trivial rewriting of the BR divergence to generalize the approach to our problem . 
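For reference, the non-negative correction of Kiryo et al. (2017) mentioned above clips the part of the PU risk that can go negative, so the empirical objective stays bounded below. A minimal sketch with the logistic loss; the specific loss and the toy score vectors are illustrative assumptions:

```python
import numpy as np

def logistic_loss(z):
    # numerically stable log(1 + exp(-z))
    return np.logaddexp(0.0, -z)

def pu_risk(psi_pos, psi_unl, pi):
    """Unbiased PU risk as in (2): pi * E_P[l(psi)] plus a negative-class
    part E_U[l(-psi)] - pi * E_P[l(-psi)] that is unbounded below."""
    neg_part = logistic_loss(-psi_unl).mean() - pi * logistic_loss(-psi_pos).mean()
    return pi * logistic_loss(psi_pos).mean() + neg_part

def nn_pu_risk(psi_pos, psi_unl, pi):
    """Non-negative correction (Kiryo et al., 2017): clip the
    negative-class part at zero so the risk stays lower bounded."""
    neg_part = logistic_loss(-psi_unl).mean() - pi * logistic_loss(-psi_pos).mean()
    return pi * logistic_loss(psi_pos).mean() + max(0.0, neg_part)

# an over-confident scorer on the positive sample drives (2) to large
# negative values, while the corrected risk stays non-negative
psi_pos, psi_unl, pi = np.full(10, 50.0), np.zeros(10), 0.5
```

When the negative-class part is already positive, the two risks coincide, so the correction only intervenes in the regime where train-loss hacking would occur.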
We call the proposed empirical risk the non-negative BR ( nnBR ) divergence , and it is a generalization of the method of Kiryo et al . ( 2017 ) . In addition , for a special case of DRE , we can still use a lower bounded loss for DRE ( see the bounded uLSIF and BKL introduced in the following section ) . However , such a loss also suffers from the train-loss hacking ( bounded uLSIF of Figure 2 and BKL-NN of Figure 4 ) . In that case , the train-loss hacking is caused because the loss sticks to the lower bound . This type of train-loss hacking is also avoided by using the proposed nnBR divergence . Our main contributions are : ( 1 ) the proposal of a general procedure to modify a BR divergence to enable DRE with flexible models , ( 2 ) theoretical justification of the proposed estimator , and ( 3 ) experimental validation of the proposed method using benchmark data . 2 PROBLEM SETTING . Let X^nu ⊆ R^d and X^de ⊆ R^d be the spaces of the d-dimensional covariates { X^nu_i } _{i=1}^{n_nu} and { X^de_i } _{i=1}^{n_de} , respectively , which are independent and identically distributed ( i.i.d . ) as { X^nu_i } _{i=1}^{n_nu} ∼ p_nu ( X ) and { X^de_i } _{i=1}^{n_de} ∼ p_de ( X ) , where p_nu ( X ) and p_de ( X ) are probability densities over X^nu and X^de , respectively . Here , “ nu ” and “ de ” indicate the numerator and the denominator . Our goal is to estimate the density ratio r∗ ( X ) = p_nu ( X ) / p_de ( X ) . To identify the density ratio , we assume the following : Assumption 1 . The density p_nu ( X ) is strictly positive over the space X^nu , the density p_de ( X ) is strictly positive over the space X^de , and X^nu ⊆ X^de . In addition , the density ratio r∗ is bounded from above on X^de : R = sup_{X∈X^de} r∗ ( X ) < ∞ . Note that the assumption X^nu ⊆ X^de is typical in the context of DRE . For instance , in anomaly detection with unlabeled test data , X^de corresponds to a sample space including clean and anomalous data , whereas X^nu corresponds to a sample space with only clean data . 
Here , we introduce the notation of this paper . Let E_nu and E_de denote the expectations over p_nu ( X ) and p_de ( X ) , respectively . Let Ê_nu and Ê_de denote the sample averages over { X^nu_i } _{i=1}^{n_nu} and { X^de_i } _{i=1}^{n_de} , respectively . Let H ⊂ { r : R^d → ( b_r , B_r ) } be the hypothesis class of the density ratio , where 0 ≤ b_r < R < B_r . 2.1 DENSITY RATIO MATCHING UNDER THE BREGMAN DIVERGENCE . A naive way to implement DRE would be to estimate the numerator and the denominator densities separately and take the ratio . However , according to Vapnik ’ s principle , we should avoid solving a more difficult intermediate problem than the target problem ( Vapnik , 1998 ) . Therefore , various methods for directly estimating the density ratio have been proposed ( Gretton et al. , 2009 ; Sugiyama et al. , 2008 ; Kanamori et al. , 2009 ; Nguyen et al. , 2010 ; Yamada et al. , 2010 ; Kato et al. , 2019 ) . Sugiyama et al . ( 2011b ) showed that these methods can be generalized as density ratio matching under the BR divergence . The BR divergence is an extension of the Euclidean distance to a class of divergences that share similar properties ( Bregman , 1967 ) . Formally , let f : ( b_r , B_r ) → R be a twice continuously differentiable convex function with a bounded derivative . Then , the point-wise BR divergence associated with f from t∗ to t is defined as B̈R_f ( t∗ ‖ t ) := f ( t∗ ) − f ( t ) − ∂f ( t ) ( t∗ − t ) , where ∂f is the derivative of f . Now , the discrepancy from the true density ratio function r∗ to a density ratio model r is measured by integrating the point-wise BR divergence as follows ( Sugiyama et al. , 2011b ) : B̈R_f ( r∗ ‖ r ) := ∫ p_de ( X ) [ f ( r∗ ( X ) ) − f ( r ( X ) ) − ∂f ( r ( X ) ) { r∗ ( X ) − r ( X ) } ] dX . ( 3 ) We estimate the density ratio by finding a function r that minimizes the BR divergence defined in ( 3 ) . 
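As a quick check of the point-wise definition, B̈R_f reduces to familiar divergences for standard choices of f: f(t) = (t − 1)²/2 (density-ratio fitting, uLSIF-style) yields the squared distance (t∗ − t)²/2, and f(t) = t log t − t yields a generalized-KL term. A small sketch, with the pairing of f to methods stated only as the common association:

```python
import math

def bregman_pointwise(f, df, t_star, t):
    # B(t* || t) = f(t*) - f(t) - f'(t) (t* - t)
    return f(t_star) - f(t) - df(t) * (t_star - t)

# f(t) = (t - 1)^2 / 2 : density-ratio fitting (uLSIF-style)
sq, dsq = (lambda t: 0.5 * (t - 1.0) ** 2), (lambda t: t - 1.0)
# f(t) = t log t - t   : generalized KL (density-matching-style)
klf, dklf = (lambda t: t * math.log(t) - t), (lambda t: math.log(t))

# for the quadratic f the divergence is exactly (t* - t)^2 / 2
d_sq = bregman_pointwise(sq, dsq, 3.0, 1.0)        # = 0.5 * (3 - 1)^2 = 2.0
d_kl_zero = bregman_pointwise(klf, dklf, 2.0, 2.0)  # = 0.0 at t* = t
```

Convexity of f guarantees the divergence is non-negative and vanishes exactly at t = t∗, which is what makes minimizing (3) a sound matching criterion.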
Here , we subtract the constant BR = E_de [ f ( r∗ ( X ) ) ] from ( 3 ) to obtain BR_f ( r∗ ‖ r ) := ∫ p_de ( X ) ( ∂f ( r ( X ) ) r ( X ) − f ( r ( X ) ) ) dX − ∫ p_nu ( X ) ∂f ( r ( X ) ) dX . ( 4 ) Here , Sugiyama et al . ( 2012 ) used r∗ ( X ) p_de ( X ) = p_nu ( X ) for removing r∗ ( X ) , which is a common technique in the DRE literature . Since BR is constant with respect to r , we have argmin_r B̈R_f ( r∗ ‖ r ) = argmin_r BR_f ( r∗ ‖ r ) . Then , let us define the sample analogue of ( 4 ) as B̂R_f ( r ) := Ê_de [ ∂f ( r ( Xi ) ) r ( Xi ) − f ( r ( Xi ) ) ] − Ê_nu [ ∂f ( r ( Xj ) ) ] . ( 5 ) For a hypothesis class H , we estimate the density ratio by solving min_{r∈H} B̂R_f ( r ) .
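The sample analogue (5) is straightforward to evaluate for a given f. The sketch below implements it and checks the special case f(t) = (t − 1)²/2, where (5) reduces to the uLSIF-style objective Ê_de[r²]/2 − Ê_nu[r] + 1/2; the constant-ratio models and dummy samples are purely illustrative.

```python
import numpy as np

def empirical_br_risk(f, df, r, X_de, X_nu):
    """Sample analogue (5): E^_de[ f'(r) r - f(r) ] - E^_nu[ f'(r) ],
    for a density-ratio model r evaluated on the two samples."""
    r_de, r_nu = r(X_de), r(X_nu)
    return np.mean(df(r_de) * r_de - f(r_de)) - np.mean(df(r_nu))

f = lambda t: 0.5 * (t - 1.0) ** 2   # quadratic f => uLSIF-style objective
df = lambda t: t - 1.0

# constant-ratio models on dummy samples: for r == c, (5) equals
# c^2/2 - c + 1/2, i.e. 0.0 at c = 1 and 0.5 at c = 2
X_de = X_nu = np.zeros((5, 1))
risk_at_1 = empirical_br_risk(f, df, lambda X: np.ones(len(X)), X_de, X_nu)
risk_at_2 = empirical_br_risk(f, df, lambda X: 2.0 * np.ones(len(X)), X_de, X_nu)
```

With a flexible neural model r, minimizing (5) directly is exactly where the unboundedness discussed in the introduction appears, motivating the non-negative correction proposed in the paper.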
The paper studies density ratio estimation (DRE), addressing the 'train-loss hacking' problem, which often arises and hampers estimation when models are very flexible. The authors propose a new risk estimator for DRE, a non-negative Bregman divergence estimator with a non-negative correction. A theoretical analysis of the estimation error is provided, and the method is examined empirically in multiple machine learning problem settings.
SP:aeb3da5e74ad99557ef60627ab355b6402a88e77
The paper addresses learning the ratio between two densities from their samples, with applications to outlier detection and covariate shift adaptation. An existing approach is to minimize the Bregman (BR) divergence's empirical approximation while modeling the density ratio function $r^*$ by a flexible hypothesis family, such as neural networks (NNs). A particular issue of such an approach (that the present work aims to resolve) is "train-loss hacking," meaning that the empirical loss can become arbitrarily large and negative. A new loss/objective based on BR divergence has been proposed, appearing on page 4, and is referred to as $\widehat{\text{nnBR}}_f(r)$. The major theoretical result, Theorem 1, states that minimizing the proposed objective effectively minimizes the BR divergence for sufficiently large sample sizes. Following this theorem and its corollary, the paper presents empirical evaluations, showing the new algorithm outperforms prior ones on standard datasets.
SP:aeb3da5e74ad99557ef60627ab355b6402a88e77
Rethinking Parameter Counting: Effective Dimensionality Revisited
1 INTRODUCTION . Parameter counting pervades the narrative in modern deep learning. "One of the defining properties of deep learning is that models are chosen to have many more parameters than available training data. In light of this capacity for overfitting, it is remarkable that simple algorithms like SGD reliably return solutions with low test error" (Dziugaite and Roy, 2017). "Despite their massive size, successful deep artificial neural networks can exhibit a remarkably small difference between training and test performance" (Zhang et al., 2017). "Increasing the number of parameters of neural networks can give much better prediction accuracy" (Shazeer et al., 2017). "Scale sensitive complexity measures do not offer an explanation of why neural networks generalize better with over-parametrization" (Neyshabur et al., 2018). "We train GPT-3, an autoregressive language model with 175 billion parameters, 10× more than any previous non-sparse language model" (Brown et al., 2020). The number of model parameters explicitly appears in many modern generalization measures, such as in Equations 20, 51, 52, 56, 57, 59, and 60 of the recent study by Jiang et al. (2020). Phenomena such as double descent are a consequence of parameter counting. Parameter counting even permeates our language, with expressions such as over-parametrization for more parameters than data points. But parameter counting can be a poor description of model complexity, model flexibility, and inductive biases. One can easily construct degenerate cases, such as predictions being generated by a sum of parameters, where the number of parameters is divorced from the statistical properties of the model. When reasoning about generalization, over-parametrization is beside the point: what matters is how the parameters combine with the functional form of the model.
Indeed , the practical success of convolutional neural networks ( CNNs ) for image recognition tasks is almost entirely about the inductive biases of convolutional filters , depth , and sparsity , for extracting local similarities and hierarchical representations , rather than flexibility ( LeCun et al. , 1989 ; Szegedy et al. , 2015 ) . Convolutional neural networks have far fewer parameters than fully connected networks , yet can provide much better generalization . Moreover , width can provide flexibility , but it is depth that has made neural networks distinctive in their generalization abilities . In this paper , we gain a number of insights into modern neural networks through the lens of effective dimensionality , in place of simple parameter counting . Effective dimensionality , defined by the eigenspectrum of the Hessian on the training loss ( equation 2 , Section 2 ) , was used by MacKay ( 1992a ) to measure how many directions in the parameter space had been determined in a Bayesian neural network . There is immense value in revisiting effective dimensionality in the context of modern deep learning . We demonstrate that effective dimensionality can be used to explain phenomena such as double descent and width-depth trade-offs in architecture specification ( Section 5 ) . We also show that effective dimensionality provides a straightforward , scalable , and promising metric for generalization in modern deep learning , comparing to PAC-Bayes flatness and path-norm measures , two of the most successful measures in the recent study by Jiang et al . ( 2020 ) ( Section 6 ) . We additionally show how effective dimension can explain why subspace compression methods for neural networks ( Izmailov et al. , 2019 ; Li et al. , 2018 ) can be so effective in practice , demonstrating function-space homogeneity as we move in directions given by eigenvectors corresponding to the smallest eigenvalues of the Hessian ( Section 4 ) . 
We connect this finding with Bayesian Occam factors and minimum description length frameworks, providing an interpretation of effective dimensionality as model compression (Section 4.3). Moreover, we show that despite a seeming lack of determination in parameter space, a neural network can be relatively well-determined in function space (Section 4). Consider Figure 1, where we see that once a model has achieved low training loss, the effective dimensionality, computed from training data alone, replicates double descent behaviour for neural networks. Models that are wider have both lower effective dimensionality and better generalization. Alternatively, in Figure 2 we see that width and depth determine effective dimensionality in different ways, though both are related to numbers of parameters. Remarkably, for models with low training loss (above the green partition), the effective dimensionality closely tracks generalization performance for each combination of width and depth. We also see that wide but shallow models overfit, while depth helps provide lower effective dimensionality. When two models have the same training loss they can be viewed as providing a compression of the training data at the same fidelity, in which case the model with the lower effective dimensionality, which thus provides the better compression (Section 4.3) by capturing more regularities, will tend to generalize better. In particular, effective dimension should be used to compare models with similarly low values of training loss. In this regime, we see that ED closely tracks generalization for both double descent and width-depth trade-offs. 2 POSTERIOR CONTRACTION , EFFECTIVE DIMENSION , AND THE HESSIAN . We consider a model, typically a neural network, f(x; θ), with inputs x and parameters θ ∈ Rk. We define the Hessian as the k × k matrix of second derivatives of the loss, Hθ = −∇∇θ L(θ, D), where D is the training data.
To begin, we describe posterior contraction, effective dimensionality, and connections to the Hessian. 2.1 THE HESSIAN AND THE POSTERIOR DISTRIBUTION We begin by providing a simple example explaining the relationship between the posterior distribution over the model's parameter, the amount of posterior contraction from the prior, and the Hessian of the negative log posterior. Figure 3 shows the prior and posterior distribution for a Bayesian linear regression model with a single parameter, with predictions generated by parameters drawn from these distributions. As expected, we see that the variance of the posterior distribution is significantly reduced from that of the prior; we call the difference of the variance between the posterior and the prior the posterior contraction of the model. More specifically, as shown in Figure 3, the arrival of data increases the curvature of the loss (negative log posterior) at the optimum. This increase in curvature of the loss accompanies certainty about the parameters: growth in the eigenvalues of the Hessian of the loss corresponds to increased certainty about the parameters. 2.2 POSTERIOR CONTRACTION AND EFFECTIVE DIMENSIONALITY . When combined with the functional form of a model, a distribution over parameters p(θ) induces a distribution over functions p(f(x; θ)). The parameters are of little direct interest: what matters for generalization is the distribution over functions (e.g. the right panel of Figure 3). As parameter distributions concentrate around specific values we expect to generate less diverse functions, the behavior seen in Figure 3.
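The one-parameter picture in Figure 3 can be reproduced numerically. Below is a minimal sketch (our own, with made-up values for the prior variance, noise level, and sample size) of posterior contraction in one-parameter Bayesian linear regression: the data add curvature $\sum_i x_i^2/\sigma^2$ to the negative log posterior, which shrinks the posterior variance below the prior variance $\alpha^2$.

```python
import random

random.seed(1)

# One-parameter Bayesian linear regression: y = x*beta + eps, eps ~ N(0, sigma2),
# prior beta ~ N(0, alpha2). The posterior over beta is Gaussian with
#   variance = 1 / (sum(x_i^2)/sigma2 + 1/alpha2),
# so more data => larger curvature of -log posterior => smaller posterior variance.
alpha2, sigma2 = 2.0, 0.25  # illustrative prior variance and noise level
xs = [random.uniform(-1.0, 1.0) for _ in range(50)]

def posterior_variance(xs):
    curvature = sum(x * x for x in xs) / sigma2 + 1.0 / alpha2  # Hessian of -log posterior
    return 1.0 / curvature

post_var = posterior_variance(xs)
contraction = alpha2 - post_var  # "posterior contraction" in the paper's sense
print(post_var, contraction)
# Doubling the data shrinks the posterior variance further.
print(posterior_variance(xs + xs) < post_var)  # True
```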
We show in Appendix E that we can describe posterior contraction in Bayesian linear regression, $y \sim \mathcal{N}(\Phi\beta, \sigma^2 I)$, with isotropic Gaussian prior, $\beta \sim \mathcal{N}(0, \alpha^2 I_k)$, as $$\Delta_{\mathrm{post}}(\Phi) = \alpha^2 \sum_{i=1}^{k} \frac{\lambda_i}{\lambda_i + \alpha^{-2}}, \qquad (1)$$ where $\lambda_i$ are the eigenvalues of $\Phi^\top\Phi/\sigma^2$, the Hessian of the log likelihood, which depends on both the model and the training data. We refer to the summation in equation 1 as the effective dimensionality of $\Phi^\top\Phi/\sigma^2$. We generalize equation 1, defining the effective dimensionality of a symmetric matrix $A \in \mathbb{R}^{k \times k}$ as $$N_{\mathrm{eff}}(A, z) = \sum_{i=1}^{k} \frac{\lambda_i}{\lambda_i + z}, \qquad (2)$$ in which $\lambda_i$ are the eigenvalues of $A$ and $z > 0$ is a regularization constant (MacKay, 1992a).¹ Typically as neural networks are trained we observe a gap in the eigenspectrum of the Hessian of the loss (Sagun et al., 2017); a small number of eigenvalues become large while the rest are near zero. In computing effective dimensionality, eigenvalues much larger than z contribute a value of approximately one to the summation, and eigenvalues much smaller than z contribute a value of approximately zero. Therefore, the effective dimensionality measures the number of parameters that have been determined by the data, which corresponds to the number of parameters the model is using to make predictions. In comparing models of the same parameterization that achieve low loss on the training data, we expect models with lower effective dimensionality to generalize better, which is empirically verified in Figures 1 and 2. The intuition built using Figure 3 carries through to this approximation: as the eigenvalues of the Hessian increase, the eigenvalues of the covariance matrix in our approximation to the posterior distribution shrink, further indicating contraction around the MAP estimate. ¹In discussing model generalization, we use effective dimensionality as short form for the effective dimensionality of the Hessian of the loss of a trained model.
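Equation 2 is straightforward to evaluate once the eigenvalues are available. A minimal sketch (the spectrum below is illustrative, not from the paper) showing how a gapped eigenspectrum yields a small effective dimensionality even for a large parameter count:

```python
def effective_dimensionality(eigvals, z=1.0):
    """N_eff(A, z) = sum_i lambda_i / (lambda_i + z) over the eigenvalues of A."""
    return sum(lam / (lam + z) for lam in eigvals)

# A spectrum with a gap, as typically observed for trained networks: two large
# eigenvalues and many near-zero ones. Each large eigenvalue contributes ~1,
# each tiny one contributes ~0.
spectrum = [250.0, 80.0] + [1e-3] * 98
n_eff = effective_dimensionality(spectrum, z=1.0)
print(n_eff)  # ~2.08: close to 2, despite 100 "parameters"
```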
Practical Computations For large neural networks, computing the eigenvalues and eigenvectors of the Hessian of the loss is nontrivial. We estimate effective dimensionality by computing the dominant eigenvalues using the Lanczos algorithm implemented in GPyTorch (Gardner et al., 2018), since many of the eigenvalues are typically close to zero and do not significantly contribute to the estimate. Hessian-vector products as implemented in PyTorch take three backward passes; in our experiments, we compute 100 Hessian-vector products to produce 100 eigenvalues, so that the cost is about 1.5× the cost of standard training.² As a heuristic, one can set z using the connection with the prior variance α² and ℓ2 regularization, or to measure the number of relatively large eigenvalues. We use a value of z = 1 in equation 2 for all experiments, and show in Figure 4 that effective dimensionality for model comparison is highly robust to different values of z over the range of networks with near zero training loss in Figure 2. For neural networks, the Hessian can have negative eigenvalues (e.g., Sagun et al., 2017; Ghorbani et al., 2019); however, these negative eigenvalues are in practice extremely small in magnitude compared to the positive ones, and do not practically impact the computations of effective dimensionality. The Hessian (and its effective dimensionality) is not invariant to re-parameterizations (e.g. ReLU rescaling and batch normalization) (MacKay, 2003, Chapter 27). For this reason we assume a fixed parameterization, as is the case in practice, and compare between models of the same parameterization. 3 RELATED WORK . MacKay (1992a) used effective dimensionality to measure posterior contraction in Bayesian neural networks. Effective dimensionality has also been used for measuring generalization error of kernel methods (Caponnetto and Vito, 2007).
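The matrix-free idea behind these practical computations can be sketched without any deep learning library. Below, power iteration on finite-difference Hessian-vector products recovers the top Hessian eigenvalue of a toy quadratic loss; the paper instead runs Lanczos on autograd HVPs, so this is only a simplified stand-in with illustrative names and a made-up loss.

```python
import math

# Toy loss L(theta) = 1/2 theta^T A theta with A = diag(10, 2, 0.1); the
# Hessian eigenvalues are exactly the diagonal entries.
diag = [10.0, 2.0, 0.1]

def grad(theta):
    return [a * t for a, t in zip(diag, theta)]

def hvp(theta, v, eps=1e-4):
    """Matrix-free Hessian-vector product via finite differences of the gradient."""
    g1 = grad([t + eps * vi for t, vi in zip(theta, v)])
    g0 = grad(theta)
    return [(a - b) / eps for a, b in zip(g1, g0)]

def top_eigenvalue(theta, dim, iters=60):
    """Power iteration on HVPs; Lanczos would recover several eigenvalues at once."""
    v = [1.0] * dim
    lam = 0.0
    for _ in range(iters):
        w = hvp(theta, v)
        lam = math.sqrt(sum(x * x for x in w))  # ||Hv|| for a unit vector v
        v = [x / lam for x in w]
    return lam

print(top_eigenvalue([0.0, 0.0, 0.0], 3))  # ~10.0, the largest Hessian eigenvalue
```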
Various connections between flatness and generalization have been explored via Occam factors (MacKay, 2003; Smith and Le, 2018) and minimum description length (Hinton and Van Camp, 1993; Achille and Soatto, 2018). Nakkiran et al. (2020) found generalization gains as neural networks become overparameterized, showing the double descent phenomenon (e.g., Belkin et al., 2019a; Nakkiran et al., 2020) that occurs as the width increases in residual and convolutional neural networks. Flatness has also been considered in the PAC-Bayes literature (e.g., Dziugaite and Roy, 2017; Jiang et al., 2020), with Jiang et al. (2020) showing that PAC-Bayesian measures of flatness, in the sense of insensitivity to random perturbations, perform well relative to many generalization bounds. Zhou et al. (2018) used PAC-Bayesian compression arguments to construct non-vacuous bounds. Our work shows that effective dimensionality can shed light on a number of phenomena in modern deep learning, including double descent, width-depth trade-offs, and Bayesian subspace inference, while providing a straightforward and compelling generalization metric, relative to several of the highest performing metrics in Jiang et al. (2020). We provide an extended discussion of historical perspectives in Appendix D. ²In Appendix I we provide an example of the insensitivity of effective dimensionality to the number of eigenvalues used.
In this article, the authors revisited the idea of *effective dimensionality* as a complexity measure for large-scale machine learning systems, and in particular, modern deep neural networks. Theoretical arguments were provided for linear and generalized linear models (Theorems 4.1 and 4.2). Connections were made between the proposed effective dimensionality and the double descent phenomenon, the width-depth trade-off, function-space homogeneity, and other generalization measures in the literature. Experiments on linear models as well as deep networks (ResNet18) were provided to support the effectiveness of the proposed metric.
SP:66d433dfb2512bdb004f50f94d38514636a89fc6
The paper applies the effective dimensionality (introduced by MacKay, Gull and others) to study the generalization properties of large probabilistic models. Effective dimensionality is the number of parameters determined by the data (derived from the curvature of the posterior at the MAP estimate), and shown to be more informative than simple parameter counting. After demonstrating the usefulness of the effective dimensionality, the authors study double descent observed when training deep nets of increasing width/depth. The authors argue that double descent is an artifact that can be understood by studying the effective dimensionality of the model. They take a detailed look at width-depth trade-offs using numerical experiments. Moreover, they compare the effective dimensionality with other generalization measures and find a superior performance.
Interpretable Relational Representations for Food Ingredient Recommendation Systems
1 INTRODUCTION . Data mining and machine learning methods play an increasingly prominent role in food preference modeling, food ingredient pairing discovery and new recipe generation. Solving these tasks is nontrivial, since the goodness of ingredient combinations depends on many factors like taste, smell, cuisine, texture, and culture. Ahn et al. (2011) detected that the number of shared flavor molecules between ingredients is one of the important factors for food pairing. They found that Western cuisines show a tendency to use ingredient pairs that share many flavor compounds, while East Asian cuisines tend to avoid compound-sharing ingredients. Using this idea, Garg et al. (2017) developed a rule-based food pairing system which ranks ingredients based on the number of shared flavor molecules. Recently, Park et al. (2019) suggested a neural network approach based on flavor molecules and the co-occurrence of ingredients in recipes. These approaches focus on one-to-one food pairing. There is also research related to many-to-one pairing. De Clercq et al. (2016) proposed the Recipe Completion Task, which tries to identify matching ingredients for a partial list of ingredients (the recipe) using a Matrix Factorization based recommender system. Although efforts have been made to detect good ingredient combinations, there is no current machine learning method in this field that allows one to interpret why suggested pairs are good. Our work is targeted at interpretable recommendation systems for food pairing and recipe completion. Given a set of pre-selected ingredients (cardinality 1 or more) by a user, the recommender suggests the top-N ingredients from a set of candidates. For example, suppose a user selects apple and chocolate as the pre-selected ingredients; our recommender suggests some good paired ingredients (e.g. cinnamon) and also identifies reasons (e.g. cinnamon is good for apple and chocolate in terms of their flavor affinity).
For this, we propose the Interpretable Relational Representations Model (IRRM) in two variants to address food pairing and recipe completion tasks. The model features a key-value memory network (Sukhbaatar et al. (2015), Miller et al. (2016)) to represent relationships of ingredients. One variant of the model is trained to learn latent relational representations over a trainable memory network (Implicit Model). The other model can learn explainable relational representations over a pretrained memory network integrating an external knowledge base (Explicit Model). The relational representations are interpretable and can be queried as to the reasons why the ingredients have been suggested. The Explicit model can integrate any number of constraints, which can be decided manually based on the characteristics of the desired recommender system. Our contributions are as follows: 1. We model ingredient pairing as a general recommendation task with implicit feedback. 2. We introduce the Interpretable Relational Representations Model and its two variants, Implicit and Explicit, both of which can learn pair-specific relational representations (vectors) for one-to-one (i.e. ingredient to ingredient) and many-to-one (ingredient-set to ingredient) food pairing tasks. The relational vectors are also interpretable. 3. We propose a training procedure to learn one-to-one and many-to-one relationships effectively using recipes. 4. We evaluate our proposed models on the Recipe Completion Task and the Artificial Food Pairing Task on the CulinaryDB and Flavornet datasets. Our proposed approaches demonstrate competitive results on all datasets, outperforming many other baselines. 5. We perform a qualitative analysis; the results show that our proposed Explicit model is capable of unraveling hidden ingredient structures within recipes. 2 RELATED WORK .
There are two related streams of work in recommender systems that are important for this paper: the session-based setting and knowledge-aware systems. In the session-based setting, the user profile can be constructed from past user behavior. A natural solution to this problem is the item-to-item recommendation approach. A variety of methods exist for this problem. For example, Quadrana et al. (2017) models the item sequence using RNNs, Kang & McAuley (2018) uses Self-Attention layers, and Wu et al. (2020) uses Transformer layers. While these methods mainly focus on how to encode item click-sequence interactions, we target good ingredient pairing using only ingredient attributes and the relationship between an ingredient set and an ingredient based on co-occurrence in recipes. For this we develop a new architecture integrating set encoders and relational memory with novel loss and score functions. There are also increasingly many methods for integrating knowledge into recommenders. Zhang et al. (2016) and Cheng et al. (2016) directly incorporate user and item features into neural network models as the user profile. Huang et al. (2018) and Wang & Cai (2020) integrate them using a pre-trained knowledge graph. These methods try to represent user context using an external knowledge base; therefore, these knowledge embeddings are usually integrated into the user embeddings. In this work, we incorporate knowledge specifically to detect relationships between an ingredient set and an ingredient, both for interpretation and to improve recommendation performance. 3 PROBLEM DEFINITION . We first introduce the notations used throughout this paper. We model recipe completion as a recommendation scenario with implicit feedback (Huang et al., 2018; Tay et al., 2018). In such scenarios, a user has interacted with an item and the system infers the item that the user will interact with next based on the interaction records of the user.
We apply this to the food domain by using recipes as interaction records. Let I denote the set of ingredients and {i1, . . . , iM} denote a pre-selected ingredient set, where i ∈ I is an ingredient and M is the number of ingredients. We call {i1, . . . , iM} the pre-selected ingredient set in this paper. Next, let Icandidate denote the set of candidate ingredients. Icandidate depends on each pre-selected ingredient set, that is, Icandidate = I − {i1, . . . , iM}. In addition, we assume that a knowledge base (KB) of ingredients is available and that the KB contains factors related to why some ingredients are good combinations. A KB is defined as a set of triplets over an entity set E and a relationship set L. A KB triplet 〈ei, l, ea〉 is composed of two entities ei, ea ∈ E and a relationship l ∈ L, where ei is an ingredient (i.e., ei ∈ I), l is an attribute, and ea is the attribute value. For instance, 〈apple, flavorMolecule, (-)-Epicatechin〉 denotes that apple contains the (-)-Epicatechin flavor molecule. Based on these preliminaries, we define the food ingredient recommendation task. Given a pre-selected ingredient set {i1, . . . , iM} and candidate ingredients Icandidate, we would like to infer the top-N ingredients from Icandidate. 4 RECOMMENDATIONS WITH MEMORY NETWORKS. In this section, we introduce the IRRM architectures. We start with the Implicit model, which consists of a trainable key-value memory network. We then augment the Implicit model with a key-value memory network that integrates pre-trained entity and relationship vectors with ingredient attributes from the KB; we call this extension the Explicit model. The overall architecture is described in Figure 1. The inputs of our architecture are a pre-selected ingredient set and a candidate ingredient icandidate ∈ Icandidate. The output is a score. At inference time, our recommender uses these scores to rank Icandidate.
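The task just defined (score each candidate in Icandidate = I − {i1, . . . , iM} against the pre-selected set, then return the top-N) can be sketched in a few lines. The co-occurrence score below is only a toy stand-in for the IRRM scoring function, and all names and recipes are illustrative:

```python
def recommend_top_n(preselected, all_ingredients, score_fn, n=3):
    """Rank candidate ingredients for a pre-selected set.

    I_candidate = I - {i_1, ..., i_M}; each candidate is scored against
    the pre-selected set and the top-N are returned.
    """
    candidates = [i for i in all_ingredients if i not in preselected]
    scores = [(i, score_fn(preselected, i)) for i in candidates]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return [i for i, _ in scores[:n]]

# toy score: co-occurrence counts over a handful of "recipes"
recipes = [{"apple", "cinnamon", "sugar"},
           {"apple", "chocolate", "cinnamon"},
           {"tomato", "basil", "olive oil"}]

def cooccurrence_score(preselected, candidate):
    return sum(1 for r in recipes if candidate in r and set(preselected) <= r)

top = recommend_top_n({"apple"},
                      {"apple", "cinnamon", "chocolate", "basil"},
                      cooccurrence_score, n=2)
# → ["cinnamon", "chocolate"]
```

The IRRM replaces `cooccurrence_score` with the learned memory-network score described in Section 4.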
4.1 INGREDIENT EMBEDDING LAYER AND INGREDIENT SET ENCODER. Ingredients are represented as one-hot vectors (corresponding to a unique index key belonging to each ingredient). At the embedding layer, this one-hot encoded vector is multiplied with the embedding matrix Q ∈ Rd×|I|, which stores the ingredient embeddings, to obtain a low-dimensional real-valued dense vector representation. d is the dimensionality of the ingredient embeddings and |I| is the total number of ingredients. icandidate is converted to q using this embedding layer. The pre-selected ingredients {i1, . . . , iM}, on the other hand, are encoded by the Ingredient Set Encoder (Figure 6). First, each ingredient ij is converted to a vector ij ∈ Rd using the Ingredient Embedding Layer (as for icandidate). The sum of these vectors is then converted to the ingredient set vector p using a feed-forward network with a single hidden layer, followed by Layer Normalization. 4.2 RELATION ENCODER. Tay et al. (2018) introduced LRAM (Latent Relational Attentive Memory) to generate latent relational vectors between user-item interactions. We extend this module by adding a residual connection followed by Layer Normalization, an idea inspired by Vaswani et al. (2017). Given the pair of a pre-selected ingredient set vector and a candidate ingredient vector, 〈p, q〉, the Relation Encoder first computes s = p + q to generate the joint embedding of p and q. The generated vector s ∈ Rd has the same dimension as p and q. Note that we also tried other transfer functions here, such as element-wise multiplication or a multi-layered perceptron MLP(p, q); however, we found that addition performs best. This joint embedding s is used as the input to the memory network. The attention vector a ∈ RN is a vector of importance weights over the keys, which are represented as the key matrix K = [k1, . . .
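A minimal numpy sketch of the Ingredient Set Encoder described above (sum of embeddings → one-hidden-layer feed-forward network → Layer Normalization). The ReLU activation and the weights are assumptions, not taken from the paper; everything is random and untrained, so this only illustrates the computation:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_ingredients = 8, 5
Q = rng.normal(size=(d, n_ingredients))   # embedding matrix Q ∈ R^{d×|I|}

def embed(idx):
    """One-hot lookup: multiplying Q by a one-hot vector selects a column."""
    return Q[:, idx]

def layer_norm(x, eps=1e-5):
    return (x - x.mean()) / np.sqrt(x.var() + eps)

# one-hidden-layer feed-forward network (illustrative sizes and weights)
W1, b1 = rng.normal(size=(16, d)), np.zeros(16)
W2, b2 = rng.normal(size=(d, 16)), np.zeros(d)

def set_encoder(indices):
    """p = LayerNorm(FFN(sum_j embed(i_j))); sum makes it permutation-invariant."""
    s = sum(embed(j) for j in indices)
    h = np.maximum(W1 @ s + b1, 0.0)          # hidden layer (ReLU assumed)
    return layer_norm(W2 @ h + b2)

p = set_encoder([0, 2, 3])                    # ingredient set vector p ∈ R^d
```

Because the encoder starts from a sum, the output is independent of the order of the pre-selected ingredients, which matches treating them as a set.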
, kN]ᵀ ∈ RN×d, where N is the number of key-value pairs in the memory network and kj ∈ Rd is a key vector. Each element of the attention vector a is defined as aj = sᵀkj, where aj ∈ R. To normalize the attention vector a into a probability distribution, we use the Softmax function: Softmax(aj) = exp(aj) / Σ_{n=1..N} exp(an). We generate the vector m = Σ_{n=1..N} Softmax(an) vn as the summation of the weighted value vectors, which are represented as the value matrix V = [v1, . . . , vN]ᵀ ∈ RN×d. Finally, to generate the relational vector r, m is added to the joint embedding s and Layer Normalization is applied: r = LayerNorm(s + m). 4.2.1 THE EXPLICIT MODEL. To improve interpretability and predictive performance, we incorporate ingredient attribute information from a given KB into the memory network. Inspired by recent works that integrate a memory network with external memories (Huang et al. (2018)), we propose the Explicit Relational Encoder. Instead of the trainable key matrix K and value matrix V, we pre-train vectors over a given KB and then freeze the key and value matrices when training the Explicit model. Given a pair of a pre-selected ingredient set {i1, . . . , iM} and a candidate ingredient icandidate, {i1, . . . , iM, icandidate} is converted into entity vectors using the KB embeddings, which provide the entity vectors e ∈ RdKB and the relationship vectors l ∈ RdKB. Note that when dKB ≠ d, we convert the joint embedding s ∈ Rd into s′ ∈ RdKB and the relational vector r ∈ RdKB into r′ ∈ Rd with linear projections. We use TransE (Bordes et al. (2013)) for the KB embeddings. The reason for this choice is that, given a triplet 〈ei, latt, eiatt〉, TransE learns entity and relationship vectors satisfying eiatt = ei + latt.
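The attention computation of the Relation Encoder can be sketched as follows. Weights are random placeholders, so this only illustrates r = LayerNorm(s + m), not a trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 8, 4                                   # embedding dim, memory slots
K = rng.normal(size=(N, d))                   # key matrix K = [k_1..k_N]^T
V = rng.normal(size=(N, d))                   # value matrix V = [v_1..v_N]^T

def layer_norm(x, eps=1e-5):
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def relation_encoder(p, q):
    """r = LayerNorm(s + m), with s = p + q and m attention over (K, V)."""
    s = p + q                                 # joint embedding
    a = K @ s                                 # a_j = s^T k_j
    w = np.exp(a - a.max())                   # numerically stable softmax
    w /= w.sum()
    m = w @ V                                 # weighted sum of value vectors
    return layer_norm(s + m)                  # residual + Layer Normalization

p, q = rng.normal(size=d), rng.normal(size=d)
r = relation_encoder(p, q)                    # relational vector r ∈ R^d
```

The softmax weights `w` are the per-key importances that the paper later reads off for interpretation.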
KB relationships usually correspond to attribute types of entities, so we use the notation latt for the attribute type and eiatt for its value. Hence, we set the key matrix as follows: K = [latt1, . . . , lattN]ᵀ (1), where N depends on the number of attribute types one wants to integrate, and K is held constant during training. The value matrix is initialized as follows: vattj = Σ_{i ∈ {i1,...,iM,icandidate}} eiattj = Σ_{i ∈ {i1,...,iM,icandidate}} (ei + lattj) (2), V = [vatt1, . . . , vattN]ᵀ (3). There can be many one-to-many relations in the KB; for instance, an apple has multiple flavor molecules. Therefore, the entity vector eatt should not be an ingredient-specific vector, and we use ei + latt instead of eatt.
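Under the TransE assumption eiatt = ei + latt, the frozen key and value matrices of Equations (1)-(3) can be built as follows. The "pretrained" entity and relationship vectors here are random placeholders standing in for actual TransE output:

```python
import numpy as np

rng = np.random.default_rng(2)
d_kb, n_attr = 6, 3

# placeholder "pretrained" TransE vectors: one entity vector e_i per
# ingredient, one relationship vector l_att per attribute type
ingredients = ["apple", "chocolate", "cinnamon"]   # {i_1..i_M, i_candidate}
e = {ing: rng.normal(size=d_kb) for ing in ingredients}
l_att = [rng.normal(size=d_kb) for _ in range(n_attr)]

# Eq. (1): keys are the relationship (attribute-type) vectors, frozen
K = np.stack(l_att)                                # shape (N, d_kb)

# Eqs. (2)-(3): each value v_att_j sums (e_i + l_att_j) over the
# pair's ingredients, so multi-valued attributes need no per-value vector
V = np.stack([sum(e[i] + l for i in ingredients) for l in l_att])
```

Because keys index attribute types, the attention weights over K can be read directly as "which attribute explains this pairing".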
This paper tackles the ingredient recommendation problem. It proposes the Interpretable Relational Representations Model (IRRM) to achieve both usefulness and interpretability. There are two variants of the model: the first models latent relations between two ingredients, and the second leverages an external knowledge base and results from TransE to learn relational representations.
Interpretable Relational Representations for Food Ingredient Recommendation Systems
1 INTRODUCTION. Data mining and machine learning methods play an increasingly prominent role in food preference modeling, food ingredient pairing discovery, and new recipe generation. Solving these tasks is nontrivial, since the goodness of ingredient combinations depends on many factors such as taste, smell, cuisine, texture, and culture. Ahn et al. (2011) found that the number of shared flavor molecules between ingredients is one of the important factors for food pairing: Western cuisines show a tendency to use ingredient pairs that share many flavor compounds, while East Asian cuisines tend to avoid compound-sharing ingredients. Using this idea, Garg et al. (2017) developed a rule-based food pairing system that ranks ingredients based on the number of shared flavor molecules. Recently, Park et al. (2019) suggested a neural network approach based on flavor molecules and the co-occurrence of ingredients in recipes. These approaches focus on one-to-one food pairing. There is also research on many-to-one pairing. De Clercq et al. (2016) proposed the Recipe Completion Task, which tries to identify matching ingredients for a partial list of ingredients (the recipe) using a matrix-factorization-based recommender system. Although efforts have been made to detect good ingredient combinations, no current machine learning method in this field can explain why suggested pairs are good. Our work targets interpretable recommendation systems for food pairing and recipe completion. Given a set of pre-selected ingredients (of cardinality 1 or more) chosen by a user, the recommender suggests the top-N ingredients from a set of candidates. For example, if a user selects apple and chocolate as the pre-selected ingredients, our recommender suggests some well-paired ingredients (e.g., cinnamon) and also identifies reasons (e.g., cinnamon is good for apple and chocolate in terms of their flavor affinity).
The paper studies a promising task, interpretable food ingredient recommendation; there has been growing interest in modeling recipes. The idea of leveraging a KG to improve the interpretability/faithfulness of recipe-related ML tasks is a contribution to the community. In particular, the authors propose a method to learn pair-specific relational representations for one-to-one (i.e., ingredient to ingredient) and many-to-one (ingredient-set to ingredient) food pairing tasks.
PDE-Driven Spatiotemporal Disentanglement
1 INTRODUCTION. The machine learning community's interest in physical phenomena has grown substantially over the last few years (Shi et al., 2015; Long et al., 2018; Greydanus et al., 2019). In particular, an increasing number of works study the challenging problem of modeling the evolution of dynamical systems, with applications in sensitive domains like climate and health science, making the understanding of physical phenomena a key challenge in machine learning. To this end, the community has successfully leveraged the formalism of dynamical systems and their associated differential formulation as powerful tools for designing efficient prediction models. In this work, we study this prediction problem with a principled and general approach, through the prism of Partial Differential Equations (PDEs), with a focus on learning spatiotemporally disentangled representations. Prediction via spatiotemporal disentanglement was first studied in video prediction works, in order to separate static and dynamic information (Denton & Birodkar, 2017) for prediction and interpretability purposes. Existing models are particularly complex, involving either adversarial losses or variational inference. Furthermore, their reliance on Recurrent Neural Networks (RNNs) hinders their ability to model spatiotemporal phenomena (Yıldız et al., 2019; Ayed et al., 2020; Franceschi et al., 2020). Our proposition addresses these shortcomings with a simplified and improved model by grounding spatiotemporal disentanglement in the PDE formalism. Spatiotemporal phenomena obey physical laws, such as the conservation of energy, that lead to describing the evolution of the system through PDEs.
Practical examples include the conservation of energy for physical systems (Hamilton, 1835), or the equation describing constant illumination in a scene (Horn & Schunck, 1981) for videos, which has had a longstanding impact in computer vision through optical flow methods (Dosovitskiy et al., 2015; Finn et al., 2016). We propose to model the evolution of partially observed spatiotemporal phenomena with unknown dynamics by leveraging a formal method for the analytical resolution of PDEs: the functional separation of variables (Miller, 1988). Our framework formulates spatiotemporal disentanglement for prediction as learning a separable solution, where spatial and dynamic information are represented in separate variables. Besides offering a novel interpretation of spatiotemporal disentanglement, it confers simplicity and performance compared to existing methods: disentanglement is achieved through the sole combination of a prediction objective and regularization penalties, and the temporal dynamics are defined by a learned Ordinary Differential Equation (ODE). We experimentally demonstrate the applicability, disentanglement capacity, and forecasting performance of the proposed model on various spatiotemporal phenomena involving standard physical processes and synthetic video datasets against prior state-of-the-art models. 2 RELATED WORK. Our contribution deals with two main directions of research: spatiotemporal disentanglement and the coupling of neural networks and PDEs. Spatiotemporal disentanglement. Disentangling factors of variation is an essential representation learning problem (Bengio et al., 2013). Its cardinal formulation for static data has been extensively studied, with state-of-the-art solutions (Locatello et al., 2019) being essentially based on Variational Autoencoders (VAEs; Kingma & Welling, 2014; Rezende et al., 2014).
As for sequential data, several disentanglement notions have been formulated, ranging from distinguishing objects in a video (Hsieh et al., 2018; van Steenkiste et al., 2018) to separating and modeling multi-scale dynamics (Hsu et al., 2017; Yingzhen & Mandt, 2018). In this work, we focus on the dissociation of the dynamics and visual aspects of spatiotemporal data. Even in this case, dissociation can take multiple forms. Examples in the video generation community include decoupling the foreground and background (Vondrick et al., 2016), constructing structured frame representations (Villegas et al., 2017b; Minderer et al., 2019; Liu et al., 2019), extracting physical dynamics (Le Guen & Thome, 2020), and latent modeling of dynamics in a state-space manner (Fraccaro et al., 2017; Franceschi et al., 2020). Closer to our work, Denton & Birodkar (2017), Villegas et al. (2017a), and Hsieh et al. (2018) introduced in their video prediction models an explicit latent disentanglement of static and dynamic information, obtained using adversarial losses (Goodfellow et al., 2014) or VAEs. Disentanglement has also been introduced in more restrictive models relying on data-specific assumptions (Kosiorek et al., 2018; Jaques et al., 2020), and in video generation (Tulyakov et al., 2018). In this work, we aim at grounding and improving spatiotemporal disentanglement with better-adapted inductive biases by introducing a paradigm that leverages the functional separation of variables, a resolution method for PDEs. Spatiotemporal prediction and PDE-based neural network models. An increasing number of works combining neural networks and differential equations for spatiotemporal forecasting have appeared over the last few years. Some of them show substantial improvements in the prediction of dynamical systems or videos compared to standard RNNs by defining the dynamics with learned ODEs (Rubanova et al., 2019; Yıldız et al.
, 2019; Ayed et al., 2020; Le Guen & Thome, 2020), following Chen et al. (2018), or by adapting them to stochastic data (Ryder et al., 2018; Li et al., 2020; Franceschi et al., 2020). Most PDE-based spatiotemporal models exploit some prior physical knowledge. It can induce the structure of the prediction function (Brunton et al., 2016; de Avila Belbute-Peres et al., 2018) or specific cost functions, thereby improving model performance. For instance, de Bézenac et al. (2018) shape their prediction function with an advection-diffusion mechanism, and Long et al. (2018; 2019) estimate PDEs and their solutions by learning convolutional filters proven to approximate differential operators. Greydanus et al. (2019), Chen et al. (2020), and Toth et al. (2020) introduce non-regression losses by taking advantage of Hamiltonian mechanics (Hamilton, 1835), while Tompson et al. (2017) and Raissi et al. (2020) combine physically inspired constraints and structural priors for fluid dynamics prediction. Our work deepens this literature by establishing a novel link between a resolution method for PDEs and spatiotemporal disentanglement, thereby introducing a data-agnostic model that leverages any static information in the observed phenomena. 3 BACKGROUND: SEPARATION OF VARIABLES. Solving high-dimensional PDEs is a difficult analytical and numerical problem (Bungartz & Griebel, 2004). Variable separation aims at simplifying it by decomposing the solution, e.g., as a simple combination of lower-dimensional functions, thus reducing the PDE to simpler differential equations. 3.1 SIMPLE CASE STUDY.
Let us introduce this technique through a standard application, with proofs in Appendix A.1: the one-dimensional heat diffusion problem (Fourier, 1822), consisting of a bar of length L whose temperature at time t and position x is denoted by u(x, t) and satisfies: ∂u/∂t = c² ∂²u/∂x², u(0, t) = u(L, t) = 0, u(x, 0) = f(x). (1) Suppose that a solution u is product-separable, i.e., it can be decomposed as u(x, t) = u₁(x) · u₂(t). Combined with Equation (1), this leads to c² u₁″(x)/u₁(x) = u₂′(t)/u₂(t). The left- and right-hand sides of this equation are respectively independent of t and x. Therefore, both sides are constant, and solving the two resulting ODEs gives solutions of the form, with µ ∈ R and n ∈ N: u(x, t) = µ sin(nπx/L) × exp(−(cnπ/L)² t). (2) The superposition principle and the uniqueness of solutions under smoothness constraints then allow building the set of solutions of Equation (1) from linear combinations of separable solutions (Le Dret & Lucquin, 2016). Beyond this simple example, separation of variables can be more elaborate. 3.2 FUNCTIONAL SEPARATION OF VARIABLES. The functional separation of variables (Miller, 1988) generalizes this method. Let u be a function obeying a given arbitrary PDE. The functional variable separation method amounts to finding a parameterization z, a functional U, an entangling function ξ, and representations φ and ψ such that: z = ξ(φ(x), ψ(t)), u(x, t) = U(z). (3) The trivial choices ξ = u and the identity function for U, φ, and ψ ensure the validity of this reformulation. Finding suitable φ, ψ, U, and ξ with regard to the initial PDE can facilitate its resolution by inducing separate, simpler PDEs on φ, ψ, and U. For instance, product-separability is retrieved with U = exp.
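The separable solution of Equation (2) can be checked numerically: finite-difference estimates of ∂u/∂t and ∂²u/∂x² should satisfy the heat equation, and the boundary conditions vanish. A small sketch, where the constants L, c, n, µ are arbitrary choices:

```python
import numpy as np

L, c, n, mu = 1.0, 0.5, 2, 1.3

def u(x, t):
    """Separable solution u(x,t) = mu*sin(n*pi*x/L) * exp(-(c*n*pi/L)^2 * t)."""
    return mu * np.sin(n * np.pi * x / L) * np.exp(-((c * n * np.pi / L) ** 2) * t)

x, t, h = 0.3, 0.2, 1e-5
u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)              # central diff for du/dt
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2  # central diff for d2u/dx2

# PDE residual du/dt - c^2 d2u/dx2 should be ~0 for the separable solution
residual = u_t - c**2 * u_xx
```

The same check fails for a non-separable function, which is one quick way to see that Equation (2) really solves Equation (1).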
General results on the existence of separable solutions have been proven (Miller, 1983), though their uniqueness depends on the initial conditions and the choice of functional separation (Polyanin, 2020). The functional separation of variables finds broad applications. It helps to solve refinements of the heat equation, such as generalizations with an advection term (see Appendix A.2) or with complex diffusion and source terms forming a general transport equation (Jia et al., 2008). Beyond the heat equation, functional separation of PDEs is also applicable in various fields of physics, like reaction-diffusion with non-linear sources or convection-diffusion phenomena (Polyanin, 2019; Polyanin & Zhurov, 2020), Hamiltonian physics (Benenti, 1997), and even general relativity (Kalnins et al., 1992). Reparameterizations such as Equation (3) implement a separation of spatial and temporal factors of variation, i.e., spatiotemporal disentanglement. We introduce in the following a learning framework based on this general method. 4 PROPOSED METHOD. We propose to model spatiotemporal phenomena using the functional variable separation formalism. We first describe our notations and then derive a principled model and constraints from this method. 4.1 PROBLEM FORMULATION THROUGH SEPARATION OF VARIABLES. We consider a distribution P of observed spatiotemporal trajectories and corresponding observation samples v = (vt0, vt0+∆t, . . . , vt1), with vt ∈ V ⊆ Rm and t1 = t0 + ν∆t. Each sequence v ∼ P corresponds to an observation of a dynamical phenomenon, assumed to be described by a hidden functional uv (also denoted by u for simplicity) of space coordinates x ∈ X ⊆ Rs and time t ∈ R that characterizes the trajectories. More precisely, uv describes an unobserved continuous dynamics, and v corresponds to instantaneous discrete spatial measurements associated with this dynamics.
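The reparameterization of Equation (3) can be illustrated numerically: choosing an additive entangling function ξ and U = exp recovers product separability, since u(x, t) = exp(φ(x) + ψ(t)) = e^{φ(x)} · e^{ψ(t)}. The functions φ and ψ below are arbitrary examples:

```python
import numpy as np

phi = lambda x: np.sin(x)          # spatial representation (example choice)
psi = lambda t: -0.5 * t           # temporal representation (example choice)
xi = lambda a, b: a + b            # additive entangling function
U = np.exp                         # functional U

def u(x, t):
    """u(x, t) = U(xi(phi(x), psi(t))), as in Equation (3)."""
    return U(xi(phi(x), psi(t)))

# with U = exp and additive xi, u factors into an x-term times a t-term
x, t = 0.7, 1.2
lhs = u(x, t)
rhs = np.exp(phi(x)) * np.exp(psi(t))
```

The trivial choice ξ = u with U, φ, ψ the identity always works, which is why the interest lies in finding ξ and U that make the induced equations on φ and ψ simpler.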
Therefore, we consider that vt results from a time-independent function ζ applied to the mapping uv(·, t). For example, v might consist of temperatures measured at some points of the sea surface, while uv would be the complete ocean circulation model. In other words, v provides partial information about uv and is a projection of the full dynamics. We seek to learn a model which, when conditioned on prior observations, can predict future observations. To this end, we posit that the state u of each observed trajectory v is driven by a hidden PDE, shared among all trajectories; we discuss this assumption in detail in Appendix C.1. Learning such a PDE and its solutions would then allow us to model the observed trajectories v. However, directly learning solutions to high-dimensional unknown PDEs is a complex task (Bungartz & Griebel, 2004; Sirignano & Spiliopoulos, 2018). We aim in this work at simplifying this resolution. We propose to do so by relying on the functional separation of variables of Equation (3), in order to leverage a potential separability of the hidden PDE. Therefore, analogously to Equation (3), we propose to formulate the problem as learning observation-constrained φ, ψ, and U, as well as ξ and ζ, such that: z = ξ(φ(x), ψ(t)), u(x, t) = U(z), vt = ζ(u(·, t)), (4) with φ and ψ allowing us to disentangle the prediction problem. In the formalism of the functional separation of variables, this amounts to decomposing the full solution u, thereby learning a spatial PDE on φ, a temporal ODE on ψ, and a PDE on U, as well as their respective solutions.
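A toy numpy sketch of the decomposition in Equation (4): a static spatial code standing in for φ, a temporal state standing in for ψ evolved by a learned ODE (here a linear toy dynamics integrated with Euler steps), and a single decoder standing in for ζ ∘ U ∘ ξ. Everything below (linear dynamics, concatenation as ξ, a linear decoder) is an illustrative simplification, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(3)
d_s, d_t = 4, 3

phi_v = rng.normal(size=d_s)               # static code (phi), fixed per sequence
psi0 = rng.normal(size=d_t)                # initial temporal state (psi)

A = 0.1 * rng.normal(size=(d_t, d_t))      # toy linear ODE dynamics f(psi) = A psi

def evolve(psi, dt=0.1, steps=10):
    """Euler integration of d psi/dt = f(psi)."""
    for _ in range(steps):
        psi = psi + dt * (A @ psi)
    return psi

W = rng.normal(size=(2, d_s + d_t))        # decoder standing in for zeta∘U∘xi

def predict(phi_v, psi):
    z = np.concatenate([phi_v, psi])       # xi: concatenation (illustrative)
    return W @ z                           # v_t = zeta(u(., t)), collapsed to linear

v0 = predict(phi_v, psi0)
v1 = predict(phi_v, evolve(psi0))          # forecast: evolve psi only, phi fixed
```

The key structural point survives the simplification: forecasting changes only the temporal state ψ, while the spatial content φ is reused unchanged, which is exactly the disentanglement the method targets.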
The paper presents a spatiotemporal disentanglement method for handling sequence data. Solving high-dimensional PDEs to derive the exact dynamics is difficult; hence, this work proposes learning time-invariant and time-dependent representations separately. To achieve this goal, the authors devise a model that incorporates a temporal ODE process. The provided experiments indicate that the method achieves good performance, although the gain is not consistent.
PDE-Driven Spatiotemporal Disentanglement
1 INTRODUCTION . The machine learning community's interest in physical phenomena has grown substantially over the last few years (Shi et al., 2015; Long et al., 2018; Greydanus et al., 2019). In particular, an increasing number of works study the challenging problem of modeling the evolution of dynamical systems, with applications in sensitive domains like climate or health science, making the understanding of physical phenomena a key challenge in machine learning. To this end, the community has successfully leveraged the formalism of dynamical systems and their associated differential formulation as powerful tools for designing efficient prediction models. In this work, we study this prediction problem with a principled and general approach, through the prism of Partial Differential Equations (PDEs), with a focus on learning spatiotemporally disentangled representations. Prediction via spatiotemporal disentanglement was first studied in video prediction works, in order to separate static and dynamic information (Denton & Birodkar, 2017) for prediction and interpretability purposes. Existing models are particularly complex, involving either adversarial losses or variational inference. Furthermore, their reliance on Recurrent Neural Networks (RNNs) hinders their ability to model spatiotemporal phenomena (Yıldız et al., 2019; Ayed et al., 2020; Franceschi et al., 2020). Our proposition addresses these shortcomings with a simplified and improved model, by grounding spatiotemporal disentanglement in the PDE formalism. Spatiotemporal phenomena obey physical laws, such as the conservation of energy, that lead to describing the evolution of the system through PDEs.
Practical examples include the conservation of energy for physical systems (Hamilton, 1835), or the constant-illumination equation for videos (Horn & Schunck, 1981), which has had a longstanding impact in computer vision through optical flow methods (Dosovitskiy et al., 2015; Finn et al., 2016). We propose to model the evolution of partially observed spatiotemporal phenomena with unknown dynamics by leveraging a formal method for the analytical resolution of PDEs: the functional separation of variables (Miller, 1988). Our framework formulates spatiotemporal disentanglement for prediction as learning a separable solution, where spatial and dynamic information are represented in separate variables. Besides offering a novel interpretation of spatiotemporal disentanglement, it confers simplicity and performance compared to existing methods: disentanglement is achieved through the sole combination of a prediction objective and regularization penalties, and the temporal dynamics are defined by a learned Ordinary Differential Equation (ODE). We experimentally demonstrate the applicability, disentanglement capacity, and forecasting performance of the proposed model on various spatiotemporal phenomena involving standard physical processes and synthetic video datasets, against prior state-of-the-art models. 2 RELATED WORK . Our contribution deals with two main directions of research: spatiotemporal disentanglement and the coupling of neural networks and PDEs. Spatiotemporal disentanglement . Disentangling factors of variation is an essential representation learning problem (Bengio et al., 2013). Its cardinal formulation for static data has been extensively studied, with state-of-the-art solutions (Locatello et al., 2019) being essentially based on Variational Autoencoders (VAEs; Kingma & Welling, 2014; Rezende et al., 2014).
As for sequential data, several disentanglement notions have been formulated, ranging from distinguishing objects in a video (Hsieh et al., 2018; van Steenkiste et al., 2018) to separating and modeling multi-scale dynamics (Hsu et al., 2017; Yingzhen & Mandt, 2018). In this work, we focus on dissociating the dynamics and visual aspects of spatiotemporal data. Even in this case, dissociation can take multiple forms. Examples in the video generation community include decoupling the foreground and background (Vondrick et al., 2016), constructing structured frame representations (Villegas et al., 2017b; Minderer et al., 2019; Liu et al., 2019), extracting physical dynamics (Le Guen & Thome, 2020), or latent modeling of dynamics in a state-space manner (Fraccaro et al., 2017; Franceschi et al., 2020). Closer to our work, Denton & Birodkar (2017), Villegas et al. (2017a), and Hsieh et al. (2018) introduced into their video prediction models an explicit latent disentanglement of static and dynamic information, obtained using adversarial losses (Goodfellow et al., 2014) or VAEs. Disentanglement has also been introduced in more restrictive models relying on data-specific assumptions (Kosiorek et al., 2018; Jaques et al., 2020), and in video generation (Tulyakov et al., 2018). We aim in this work at grounding and improving spatiotemporal disentanglement with better-adapted inductive biases, by introducing a paradigm that leverages the functional separation of variables, a resolution method for PDEs. Spatiotemporal prediction and PDE-based neural network models . An increasing number of works combining neural networks and differential equations for spatiotemporal forecasting have appeared over the last few years. Some of them show substantial improvements over standard RNNs for the prediction of dynamical systems or videos by defining the dynamics using learned ODEs (Rubanova et al., 2019; Yıldız et al., 2019; Ayed et al., 2020; Le Guen & Thome, 2020), following Chen et al. (2018), or by adapting them to stochastic data (Ryder et al., 2018; Li et al., 2020; Franceschi et al., 2020). Most PDE-based spatiotemporal models exploit some prior physical knowledge. It can induce the structure of the prediction function (Brunton et al., 2016; de Avila Belbute-Peres et al., 2018) or specific cost functions, thereby improving model performance. For instance, de Bézenac et al. (2018) shape their prediction function with an advection-diffusion mechanism, and Long et al. (2018; 2019) estimate PDEs and their solutions by learning convolutional filters proven to approximate differential operators. Greydanus et al. (2019), Chen et al. (2020), and Toth et al. (2020) introduce non-regression losses by taking advantage of Hamiltonian mechanics (Hamilton, 1835), while Tompson et al. (2017) and Raissi et al. (2020) combine physically inspired constraints and structural priors for fluid dynamics prediction. Our work deepens this literature by establishing a novel link between a resolution method for PDEs and spatiotemporal disentanglement, thereby introducing a data-agnostic model that leverages any static information in observed phenomena. 3 BACKGROUND: SEPARATION OF VARIABLES . Solving high-dimensional PDEs is a difficult analytical and numerical problem (Bungartz & Griebel, 2004). Variable separation aims at simplifying it by decomposing the solution, e.g., as a simple combination of lower-dimensional functions, thus reducing the PDE to simpler differential equations. 3.1 SIMPLE CASE STUDY .
Let us introduce this technique through a standard application, with proofs in Appendix A.1: the one-dimensional heat diffusion problem (Fourier, 1822), consisting of a bar of length $L$ whose temperature at time $t$ and position $x$ is denoted by $u(x, t)$ and satisfies:
$$\frac{\partial u}{\partial t} = c^2 \frac{\partial^2 u}{\partial x^2}, \qquad u(0, t) = u(L, t) = 0, \qquad u(x, 0) = f(x). \quad (1)$$
Suppose that a solution $u$ is product-separable, i.e., it can be decomposed as $u(x, t) = u_1(x) \cdot u_2(t)$. Combined with Equation (1), this leads to $c^2 u_1''(x)/u_1(x) = u_2'(t)/u_2(t)$. The left- and right-hand sides of this equation are respectively independent of $t$ and $x$. Therefore, both sides are constant, and solving the two resulting ODEs gives solutions of the form, with $\mu \in \mathbb{R}$ and $n \in \mathbb{N}$:
$$u(x, t) = \mu \sin(n\pi x/L) \times \exp\!\big(-(cn\pi/L)^2 t\big). \quad (2)$$
The superposition principle and the uniqueness of solutions under smoothness constraints then allow building the set of solutions of Equation (1) from linear combinations of separable solutions (Le Dret & Lucquin, 2016). Beyond this simple example, separation of variables can be more elaborate. 3.2 FUNCTIONAL SEPARATION OF VARIABLES . The functional separation of variables (Miller, 1988) generalizes this method. Let $u$ be a function obeying a given arbitrary PDE. The functional variable separation method amounts to finding a parameterization $z$, a functional $U$, an entangling function $\xi$, and representations $\phi$ and $\psi$ such that:
$$z = \xi\big(\phi(x), \psi(t)\big), \qquad u(x, t) = U(z). \quad (3)$$
The trivial choices $\xi = u$ and identity functions for $U$, $\phi$, and $\psi$ ensure the validity of this reformulation. Finding suitable $\phi$, $\psi$, $U$, and $\xi$ with regard to the initial PDE can facilitate its resolution by inducing separate, simpler PDEs on $\phi$, $\psi$, and $U$. For instance, product-separability is retrieved with $U = \exp$.
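The separable solution of Equation (2) can be checked symbolically: substituting it into the heat equation of Equation (1) leaves a residual that vanishes identically, and the boundary conditions hold for integer $n$. A minimal verification with SymPy (the symbol names mirror the text; this is an illustrative check, not part of the paper's method):

```python
import sympy as sp

# Symbols of Eq. (1)-(2): diffusivity c, bar length L, amplitude mu, mode n.
x, t, c, L, mu = sp.symbols("x t c L mu", positive=True)
n = sp.symbols("n", positive=True, integer=True)

# Separable candidate from Eq. (2): u(x, t) = mu * sin(n*pi*x/L) * exp(-(c*n*pi/L)^2 * t)
u = mu * sp.sin(n * sp.pi * x / L) * sp.exp(-((c * n * sp.pi / L) ** 2) * t)

# Heat equation residual du/dt - c^2 * d^2u/dx^2 should simplify to zero.
residual = sp.simplify(sp.diff(u, t) - c**2 * sp.diff(u, x, 2))
print(residual)  # 0

# Boundary conditions u(0, t) = u(L, t) = 0 also hold (sin(n*pi) = 0 for integer n).
print(sp.simplify(u.subs(x, 0)), sp.simplify(u.subs(x, L)))
```

The same substitution done by hand reproduces the constant-ratio argument in the text: each side of $c^2 u_1''/u_1 = u_2'/u_2$ equals $-(cn\pi/L)^2$.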
General results on the existence of separable solutions have been proven (Miller, 1983), though their uniqueness depends on the initial conditions and on the choice of functional separation (Polyanin, 2020). Functional separation of variables finds broad applications. It helps to solve refinements of the heat equation, such as generalizations with an advection term (see Appendix A.2) or with complex diffusion and source terms forming a general transport equation (Jia et al., 2008). Beyond the heat equation, functional separation of PDEs is also applicable in various fields of physics, such as reaction-diffusion with non-linear sources or convection-diffusion phenomena (Polyanin, 2019; Polyanin & Zhurov, 2020), Hamiltonian physics (Benenti, 1997), or even general relativity (Kalnins et al., 1992). Reparameterizations such as Equation (3) implement a separation of spatial and temporal factors of variation, i.e., spatiotemporal disentanglement. We introduce in the following a learning framework based on this general method. 4 PROPOSED METHOD . We propose to model spatiotemporal phenomena using the functional variable separation formalism. We first describe our notations and then derive a principled model and constraints from this method. 4.1 PROBLEM FORMULATION THROUGH SEPARATION OF VARIABLES . We consider a distribution $P$ of observed spatiotemporal trajectories and corresponding observation samples $v = (v_{t_0}, v_{t_0+\Delta t}, \ldots, v_{t_1})$, with $v_t \in \mathcal{V} \subseteq \mathbb{R}^m$ and $t_1 = t_0 + \nu \Delta t$. Each sequence $v \sim P$ corresponds to an observation of a dynamical phenomenon, assumed to be described by a hidden functional $u_v$ (also denoted by $u$ for the sake of simplicity) of space coordinates $x \in \mathcal{X} \subseteq \mathbb{R}^s$ and time $t \in \mathbb{R}$ that characterizes the trajectories. More precisely, $u_v$ describes an unobserved continuous dynamics, and $v$ corresponds to instantaneous discrete spatial measurements associated with this dynamics.
Therefore, we consider that $v_t$ results from a time-independent function $\zeta$ applied to the mapping $u_v(\cdot, t)$. For example, $v$ might consist of temperatures measured at some points of the sea surface, while $u_v$ would be the complete ocean circulation model. In other words, $v$ provides partial information about $u_v$ and is a projection of the full dynamics. We seek to learn a model which, when conditioned on prior observations, can predict future observations. To this end, we posit that the state $u$ of each observed trajectory $v$ is driven by a hidden PDE, shared among all trajectories; we discuss this assumption in detail in Appendix C.1. Learning such a PDE and its solutions would then allow us to model observed trajectories $v$. However, directly learning solutions to high-dimensional unknown PDEs is a complex task (Bungartz & Griebel, 2004; Sirignano & Spiliopoulos, 2018). We aim in this work at simplifying this resolution. We propose to do so by relying on the functional separation of variables of Equation (3), in order to leverage a potential separability of the hidden PDE. Therefore, analogously to Equation (3), we propose to formulate the problem as learning observation-constrained $\phi$, $\psi$, and $U$, as well as $\xi$ and $\zeta$, such that:
$$z = \xi\big(\phi(x), \psi(t)\big), \qquad u(x, t) = U(z), \qquad v_t = \zeta\big(u(\cdot, t)\big), \quad (4)$$
with $\phi$ and $\psi$ allowing us to disentangle the prediction problem. In the formalism of the functional separation of variables, this amounts to decomposing the full solution $u$, thereby learning a spatial PDE on $\phi$, a temporal ODE on $\psi$, and a PDE on $U$, as well as their respective solutions.
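The wiring of Equation (4) can be made concrete with a toy forward pass. In the paper, $\phi$, $\psi$, $\xi$, $U$, and $\zeta$ would be learned networks; the random linear maps and the grid-sampling observation operator below are illustrative stand-ins invented for this sketch, chosen only to show how the pieces compose:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the learned mappings of Eq. (4): plain random
# linear maps with tanh nonlinearities, just to show the composition.
W_phi = rng.normal(size=(8, 2))   # phi: space coordinate x in R^2 -> spatial code
W_psi = rng.normal(size=(8, 1))   # psi: time t -> temporal code
W_U = rng.normal(size=(1, 16))    # U: entangled code z -> state value u(x, t)

def phi(x):                        # spatial representation phi(x)
    return np.tanh(W_phi @ x)

def psi(t):                        # temporal representation psi(t)
    return np.tanh(W_psi @ np.array([t]))

def xi(s, d):                      # entangling function xi: here, concatenation
    return np.concatenate([s, d])

def U(z):                          # maps the entangled code back to u(x, t)
    return (W_U @ z).item()

def u(x, t):                       # full separable solution, as in Eq. (4)
    return U(xi(phi(x), psi(t)))

def zeta(u_field):                 # observation operator: sample u(., t) on a 4x4 grid
    grid = [np.array([i / 4.0, j / 4.0]) for i in range(4) for j in range(4)]
    return np.array([u_field(p) for p in grid])

v_t = zeta(lambda p: u(p, t=0.5))  # one observed frame v_t
print(v_t.shape)  # (16,)
```

Note how $\zeta$ only sees the spatial field $u(\cdot, t)$ at a fixed time, matching the text's view of $v_t$ as a projection of the full dynamics.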
The authors present a generative model for videos where the latent trajectories have two components - a term without a slowness loss that represents "content" and a term with a slowness loss that represents "style". They present results on a dataset simulating the wave equation and on videos of moving MNIST digits and 3D chairs. The results are generally good, especially for long roll-outs, and they demonstrate something like disentangling by showing that the identities of the digits can be swapped in the moving MNIST data.
Dual Graph Complementary Network
As a powerful representation learning method for graph data, graph neural networks (GNNs) have shown great popularity in tackling graph analytic problems. Although many attempts have been made in the literature to find strategies for extracting better embeddings of the target nodes, few of them consider this issue from a comprehensive perspective. Most current GNNs employ a single method that can commendably extract a certain kind of feature, while other equally important features are often ignored. In this paper, we develop a novel dual graph complementary network (DGCN) to learn representations complementarily. We use two different branches whose inputs are the same, composed of structure and feature information; at the same time, there is a complementary relationship between the two branches. Beyond that, our extensive experiments show that DGCN outperforms state-of-the-art methods on five public benchmark datasets. 1 INTRODUCTION . Although many attempts have been made in the literature to find a better strategy to learn target node representations, the feature extraction capabilities of most methods are still far from optimal, especially when only a small amount of data is labeled. In fact, compared with the expensive and laborious acquisition of labeled data, unlabeled data is much easier to obtain. Therefore, how to learn more useful representations with limited label information is a key direction of representation learning research. Methods addressing this issue, commonly referred to as semi-supervised learning, essentially assume that similar points have similar outputs. Thus, they can properly exploit the consistency of the data to make full use of the rich information in unsupervised data. In the real world, it is common to have data with specific topological structures, usually called graph data.
The graph structure is usually expressed as connections between nodes. By aggregating the features of neighborhoods and performing appropriate linear transformations, graph neural networks (GNNs) can convert graph data into a low-dimensional, compact, and continuous feature space. Nevertheless, most of them only consider a single aggregation strategy, which is counter-intuitive: for example, in social networks, the relationships between people are very complex, yet most traditional GNNs only consider a single type of connection between nodes and ignore other implicit information. In this paper, our work focuses on learning node representations with GNNs in a semi-supervised way. Although there are already many graph-based semi-supervised learning methods (Kipf & Welling, 2016; Yang et al., 2016; Khan & Blumenstock, 2019), most of them can only capture a single relationship between nodes. As a result, some information in unsupervised data is usually ignored. To overcome this problem, we develop a novel dual graph complementary network (DGCN) to extract information from both the feature and topology spaces. An intuition behind our method is to learn based on disagreement: network performance is largely related to the quality of the graph, which usually emphasizes the relevance of one attribute of the instances. Since we do not know which attributes are most important, we consider both in the model design. Compared with traditional GNN-based methods, we perform two different aggregation strategies that emphasize different attributes in each branch, one from the perspective of node features, and the other from the topological structure. Then, to further utilize implicit information, we employ two networks with different structures to extract embeddings from the input features. By doing so, node information can be propagated in different ways.
Then, a supervised loss $\ell_{sup}$ and a diversity constraint $\ell_{div}$ are used to guide the training. We use two different branches to extract common information in the topology and feature spaces. By utilizing disagreements between the two branches, the model can gain information that might be ignored by a single branch. To prove the effectiveness of our method, we conducted experiments on five public benchmark datasets. The contributions of our work are summarized as follows: • We propose a novel dual graph complementary network (DGCN) to fuse complementary information, which utilizes different graphs to aggregate nodes that are similar in certain attributes in a complementary way. • By comparing with algorithms that use non-single graphs, we show that our complementary architecture can extract richer information. • Through extensive evaluation on multiple datasets, we demonstrate DGCN's effectiveness over state-of-the-art baselines. 2 RELATED WORK . 2.1 SEMI-SUPERVISED LEARNING . Semi-supervised learning usually targets the case of insufficient data labels. $X \in \mathbb{R}^{n \times d}$ is the feature matrix of the input nodes. $Y = [y_{ij}] \in \mathbb{R}^{n \times k}$ is the label matrix, where $k$ is the number of classes; $y_{ij}$ indicates that the $i$-th node belongs to the $j$-th class. Data points are then split into labeled and unlabeled points. Accordingly, $x_L$ and $x_U$ denote the features of labeled and unlabeled instances, respectively. Moreover, the ground-truth labels are available only for the labeled nodes. The main objective of semi-supervised learning is to extract supervised information from the labeled dataset while adequately utilizing the data distribution information contained in $X$. There are four categories of semi-supervised learning algorithms: 1. Self-training semi-supervised learning (Lee, 2013): it utilizes high-confidence pseudo-labels to expand the label set. Ideally, it can continuously improve network performance, but it is usually limited by the quality of the pseudo-labels. 2.
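The combined training signal $\ell_{sup} + \lambda\,\ell_{div}$ can be sketched numerically. The excerpt does not define either loss, so the cross-entropy form of $\ell_{sup}$, the cosine-similarity form of $\ell_{div}$, and the trade-off weight `lam` below are all assumptions made for illustration:

```python
import numpy as np

def l_sup(logits, labels):
    """Cross-entropy on labeled nodes (a standard choice; the paper's
    exact form of l_sup is not given in this excerpt)."""
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def l_div(h_a, h_b):
    """Hypothetical diversity constraint: penalize cosine similarity between
    the two branches' node embeddings so they capture different attributes."""
    a = h_a / (np.linalg.norm(h_a, axis=1, keepdims=True) + 1e-12)
    b = h_b / (np.linalg.norm(h_b, axis=1, keepdims=True) + 1e-12)
    return np.abs((a * b).sum(axis=1)).mean()

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 3))                 # 5 labeled nodes, 3 classes
labels = np.array([0, 1, 2, 1, 0])
h_gcn, h_gat = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))

lam = 0.1                                         # assumed trade-off weight
loss = l_sup(logits, labels) + lam * l_div(h_gcn, h_gat)
print(loss)
```

Minimizing `l_div` pushes the two branch embeddings apart, which matches the stated intuition of learning based on disagreement.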
Graph-based semi-supervised learning: it propagates information between instances according to the edges of a graph. It is an inductive learning method whose performance mainly depends on the aggregation algorithm. 3. Low-density separation methods (Joachims, 1999): they assume that the decision hyperplane is consistent with the data distribution, so it should pass through the sparse regions of the data. 4. Pretraining-based semi-supervised learning: e.g., autoencoders (Vincent et al., 2008; Rifai et al., 2011), which train the model based on a reconstruction error and then fine-tune it using labeled data. However, semi-supervised learning tasks prefer to obtain information related to the data distribution rather than all the information in the samples. In this paper, we mainly focus on graph-based semi-supervised learning. 2.2 GRAPH-BASED SEMI-SUPERVISED LEARNING . In addition to features, graph-based semi-supervised learning methods (Kipf & Welling, 2016) represent the topological edge connections between different instances. For many datasets, the graph is given alongside the features. If the features of the dataset do not contain the relationships between different samples, a graph can also be constructed by measuring the similarity between the features of the instances (Zhu et al., 2003). In effect, the graph measures whether instances are closely connected. Then, according to this graph, information can be exchanged between instances, so that the information of unlabeled data can be effectively utilized. Network performance is largely related to the quality of the graph. When the attributes emphasized in the graph do not match the expectations of the task objective, misjudgments are often caused. Usually, it is difficult to find what really matters.
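One common way to build a graph from features alone, as mentioned above, is a k-nearest-neighbor graph over pairwise feature similarity. The sketch below uses cosine similarity and symmetrizes the result; the choice of metric, `k`, and symmetrization are illustrative, not the paper's specification:

```python
import numpy as np

def knn_graph(X, k=3):
    """Build a symmetric kNN adjacency from cosine similarity between node
    features, one common recipe when the dataset provides no graph."""
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    sim = Xn @ Xn.T
    np.fill_diagonal(sim, -np.inf)         # forbid self-edges
    A = np.zeros_like(sim)
    idx = np.argsort(-sim, axis=1)[:, :k]  # k most similar neighbors per node
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)              # symmetrize for an undirected graph

X = np.random.default_rng(0).normal(size=(6, 4))
A = knn_graph(X, k=2)
print(A.shape, bool((A == A.T).all()))  # (6, 6) True
```

The resulting adjacency emphasizes feature-space proximity, in contrast to a given topological graph, which is exactly the kind of attribute mismatch the text warns about.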
Traditional graph-based semi-supervised learning methods usually use a single graph for node aggregation, which causes a single attribute to be emphasized; when this attribute does not match the task goal, it misleads the training instead. 3 DGCN ARCHITECTURE . In this section, we present the overall framework of DGCN, see Fig. 1. The main idea of DGCN is that information exchange under the control of graphs emphasizing different attributes can extract richer features. To this end, we use two branches to extract information from two inputs at the same time. The node features of these two inputs are the same; the only difference is the graphs that control the information exchange. In addition, in order to further expand the difference between the branches, we use a diversity loss $\ell_{div}$. The two branches produce the representations $H^{gcn}_{1,l}$, $H^{gcn}_{2,l}$, $H^{gat}_{1,l}$, and $H^{gat}_{2,l}$, respectively. Then, we fuse the GCN views and the GAT views through attention operations to obtain $H^{gcn}_c$ and $H^{gat}_c$, respectively. The obtained $H^{gcn}_c$ and $H^{gat}_c$ are sent to the final attention layer together with the previous $H^{gcn}_{1,l}$, $H^{gcn}_{2,l}$, $H^{gat}_{1,l}$, and $H^{gat}_{2,l}$. 3.1 NOTATION & PROBLEM STATEMENT . Let $G = (V, A, X)$ be an undirected graph. $V$ is the set of nodes of the graph, composed of unlabeled ($V_u$) and labeled ($V_l$) nodes, whose numbers are $n_u$ and $n_l$ respectively; $n = n_l + n_u$ is the total number of nodes. $A = [a_{ij}] \in \mathbb{R}^{n \times n}$ is the adjacency matrix; $a_{ij} = 1$ indicates that node $i$ and node $j$ are closely related in some attribute, otherwise $a_{ij} = 0$. 3.2 BRANCHES . In order to capture different characteristics with the two branches (also called viewers), we use a different network structure for each branch: GCN (Kipf & Welling, 2016) and GAT (Veličković et al., 2017). Given a graph $G = (V, A, X)$, both GCN and GAT intend to extract richer features at a vertex by aggregating the features of vertices from its neighborhood (Li et al., 2019).
The node representation of the $l$-th layer, $H_l$, can thus be defined by:
$$H_l = \mathrm{Update}\big(\mathrm{Aggregate}(H_{l-1}, \Theta^{agg}_l), \Theta^{update}_l\big), \quad (1)$$
where $\Theta^{agg}_l$ and $\Theta^{update}_l$ are the learnable weights of the aggregation and update functions of the $l$-th layer, respectively, and the initial $H_0 = X$. The aggregation and update functions are the essential components of GNNs, and the features extracted by different aggregation functions will obviously differ. Thus, we take advantage of two different networks, GCN and GAT, to obtain node representations. The node features output by the $l$-th GCN layer can be expressed as:
$$H_l = \sigma\big(\tilde{D}^{-\frac{1}{2}}(A + I)\tilde{D}^{-\frac{1}{2}} H_{l-1} W_l\big), \quad (2)$$
where $I \in \mathbb{R}^{n \times n}$ is the identity matrix, $A + I$ adds self-loops to the graph, $\tilde{D}$ is the diagonal degree matrix of $A + I$, and $\sigma(\cdot)$ is the activation function. It can be seen from Equation (2) that GCN aggregates neighbor features weighted by the symmetric normalized Laplacian. Next, we introduce GAT, which uses an attention mechanism to compute neighbor weights. Through a learnable coefficient vector $a$, GAT assigns learnable weights to each neighbor of a node. For node $i$, the weight $\alpha_{ij}$ between it and its neighbor $j$ can be expressed as:
$$\alpha_{ij} = \frac{\exp\big(\mathrm{LeakyReLU}(a^\top [W h_i \,\Vert\, W h_j])\big)}{\sum_{k \in N_i} \exp\big(\mathrm{LeakyReLU}(a^\top [W h_i \,\Vert\, W h_k])\big)}, \quad (3)$$
where $\cdot^\top$ is the transposition operation and $\Vert$ denotes concatenation. Then, the forward propagation of node $i$ in the $l$-th layer can be represented as:
$$h_{l,i} = \big\Vert_{m=1}^{M}\, \sigma\Big(\sum_{j \in N_i} \alpha^m_{l,ij} W^m_l h_{l-1,j}\Big), \quad (4)$$
where $h_{l,i}$ is the embedding of node $i$ in the $l$-th layer, $M$ is the number of independent attention mechanisms, $\sigma$ is the activation function of GAT, and $\alpha^m_{ij}$ are the normalized attention coefficients computed by the $m$-th attention mechanism; see Equation (3). As can be seen from Equation (4), the weights GAT assigns to a node's neighbors are learnable; thus, we can assign adaptive weights to different neighbors.
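Equations (2) and (3) can be exercised on a toy graph. The NumPy sketch below implements the symmetric-normalized GCN propagation and single-head GAT attention coefficients; the tanh activation, the inclusion of self-loops in the attention neighborhood, and the LeakyReLU slope of 0.2 are common conventions assumed here, not details from the excerpt:

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A, H, W):
    """Eq. (2): symmetric-normalized propagation with self-loops."""
    A_hat = A + np.eye(len(A))
    D_inv_sqrt = np.diag(A_hat.sum(axis=1) ** -0.5)
    return np.tanh(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

def gat_attention(A, H, W, a):
    """Eq. (3): single-head attention coefficients alpha_ij over neighbors
    (self-loops included, as is common; a is the learnable vector)."""
    Wh = H @ W                                       # (n, d')
    n = len(Wh)
    e = np.full((n, n), -np.inf)                     # -inf masks non-edges
    mask = (A + np.eye(n)) > 0
    for i, j in zip(*np.nonzero(mask)):
        z = a @ np.concatenate([Wh[i], Wh[j]])
        e[i, j] = z if z > 0 else 0.2 * z            # LeakyReLU
    e = e - e.max(axis=1, keepdims=True)             # stable row-wise softmax
    alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)
    return alpha

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)  # toy 3-node path graph
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
a = rng.normal(size=4)

H1 = gcn_layer(A, H, W)
alpha = gat_attention(A, H, W, a)
print(H1.shape, bool(np.allclose(alpha.sum(axis=1), 1.0)))  # (3, 2) True
```

Note the contrast the text draws: `gcn_layer` weights neighbors by fixed degree normalization, while `gat_attention` produces data-dependent weights `alpha` that are zero outside the masked neighborhood.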
Both methods take the existence of a connection between nodes as the premise of aggregation, and both the GCN and GAT models we use have their own advantages and disadvantages. The former considers the relationship between nodes (a probability conduction matrix), but cannot learn neighbor weights dynamically. The latter can assign dynamic weights to neighbors, but it ignores the influence of a node's degree attribute on aggregation. Therefore, using these two branches, we can extract more complementary features from the input.
This paper introduces a method for semi-supervised node classification on graphs. For each graph, the method first constructs another view based on the cosine similarity between nodes' features, and from the two views (topology and feature similarity), GCN and GAT are applied to extract representations. All node representations are further combined via two layers of attention. A diversity loss that encourages dissimilarity between the learned representations of GCN and GAT is added to the cross-entropy loss for joint optimization. The whole framework makes sense in terms of learning meaningful node representations for classification. However, the method lacks novelty; it is an incremental development on existing graph neural networks. The choice of GCN and GAT as the building blocks is not well justified; it would also be possible to try other kinds of GNNs. The statement on GAT, "ignores the inherent structure of the graph space", on page 4 is confusing, since it learns weights based on the graph structure. The experimental results show the better performance of the proposed method, but are not well analyzed. It may be better to compare with other multi-graph methods such as
SP:d1f2bfa043d6ce88a18bbef8a0e694e92ab43d2e
Dual Graph Complementary Network
As a powerful representation learning method on graph data , graph neural networks ( GNNs ) have shown great popularity in tackling graph analytic problems . Although many attempts have been made in literatures to find strategies about extracting better embedding of the target nodes , few of them consider this issue from a comprehensive perspective . Most of current GNNs usually employ some single method which can commendably extract a certain kind of feature but some equally important features are often ignored . In this paper , we develop a novel dual graph complementary network ( DGCN ) to learn representation complementarily . We use two different branches , and inputs of the two branches are the same , which are composed of structure and feature information . At the same time , there is also a complementary relationship between the two branches . Beyond that , our extensive experiments show that DGCN outperforms state-of-the-art methods on five public benchmark datasets . 1 INTRODUCTION . Although many attempts have been made in literatures to find a better strategy to learn the target node representation , the feature extraction capabilities of most methods are still far from optimal , especially when only a small amount of data is labeled . However , in fact , compared with the expensive and laborious acquisition of labeled data , unlabeled data is much easier to obtain . Therefore , how to learn more useful representations with limited label information is the key direct of representation learning study . Methods of this issue , commonly referred to as semi-supervised learning , which essentially believe that the similar points have similar outputs . Thus , it can properly utilize the consistency of data to make full use of the rich information of unsupervised data . In the real world , it is common that we have data with specific topological structures which usually called graph data . 
The graph structure is usually expressed as the connection between nodes . By aggregating the features of neighborhood and performing appropriate linear transformation , graph neural networks ( GNNs ) can convert graph data into a low-dimensional , compact , and continuous feature space . Nevertheless , most of them only care about a single aggregation strategy , which is counter intuitive : for example , as far as social networks are concerned , the relationship between people is very complex , while , most of the traditional GNNs only consider the single connection between nodes and ignore other implicit information . In this paper , our work focuses on learning node representations by GNNs in a semi-supervised way . Despite there are already many graph-based semi-supervised learning methods ( Kipf & Welling , 2016 ; Yang et al. , 2016 ; Khan & Blumenstock , 2019 ) , most of them can only find a single relationship between nodes . As a result , some information in unsupervised data is usually ignored . To overcome this problem , we develop a novel dual graph complementary network ( DGCN ) to extract information from both feature and topology spaces . An intuition of our method is to learn based on disagreement : network performance is largely related to the quality of the graph , which usually emphasizes the relevance of an attribute of instances . So , since we don ’ t know what attributes are most important , we consider both of them in the model design . Compared with the traditional GNN-based methods , we perform two different aggregate strategies which emphasize different attributes in each branch , one from the perspective of node feature , and the other from the topological structure . Then , to further utilize implicit information , we employ two networks with different structures to extract embedding from input feature . By doing so , nodes ’ information can be propagated in different ways . 
Then , the supervised loss ℓsup and diversity constraint ℓdiv are used to guide the training . We use two different branches to extract common information in topology and feature spaces . By utilizing disagreements between the two branches , model can gain information that may be ignored by single branch . To prove the effectiveness of our method , we conducted experiments on five public benchmark datasets . The contributions of our work are summarized as follows : • We propose a novel dual graph complementary network ( DGCN ) to fuse complementary information , which utilizes different graphs to aggregate nodes that are similar in certain attributes in a complementary way . • By comparing with algorithms that use non-single graphs , it proves that our complementary architecture can extract richer information • Through extensive evaluation on multiple datasets , we demonstrate DGCN effectiveness over state-of-the-art baselines . 2 RELATED WORK . 2.1 SEMI-SUPERVISED LEARNING . Semi-supervised learning is usually aimed at the case of insufficient data labels . X ∈ Rn×d is the feature of input nodes . Y = [ yij ] ∈ Rn×k is the label matrix , where k is the class number . yij means that the i-th node belongs to the j-th class . Then split data points into labeled and unlabeled points . Accordingly , xL and xU express a feature of labeled and unlabeled instance , respectively . Moreover , the ground-truth label of the label nodes is available only . The main objective of semi-supervised learning is to extract supervised information from labeled dataset whilst adequately utilizing data distribution information contained in X . There are four categories of semi-supervised learning algorithms : 1 . Self-training semi-supervised learning ( Lee , 2013 ) : It utilizes high-confidence pseudo labels to expand label set . Ideally , it can continuously improve network performance , but is usually limited by the quality of pseudo labels . 2 . 
Graph-based semi-supervised learning : it propagates information between instances along the edges of a graph . It is typically a transductive learning method , whose performance mainly depends on the aggregation algorithm . 3 . Low-density separation methods ( Joachims , 1999 ) : they assume that the decision hyperplane is consistent with the data distribution , so it should pass through sparse regions of the data . 4 . Pretraining-based semi-supervised learning : for example , autoencoders ( Vincent et al. , 2008 ; Rifai et al. , 2011 ) train the model on a reconstruction error and then fine-tune it using labeled data . However , semi-supervised learning tasks prefer to obtain information related to the data distribution rather than all information about the samples . In this paper , we mainly focus on graph-based semi-supervised learning . 2.2 GRAPH-BASED SEMI-SUPERVISED LEARNING . In addition to features , graph-based semi-supervised learning methods ( Kipf & Welling , 2016 ) exploit the topological edge connections between different instances . For many datasets , the graph is given as part of the data ; if the features of the dataset do not contain the relationships between different samples , a graph can also be constructed by measuring the similarity between instance features ( Zhu et al. , 2003 ) . In essence , the graph is a measure of whether instances are closely connected . According to this graph , information can then be exchanged between instances , so that the information in unlabeled data can be utilized effectively . Network performance is largely determined by the quality of the graph ; when the attributes emphasized by the graph do not match the objective of the task , misjudgments often result . Usually , it is difficult to find what really matters .
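The similarity-based graph construction mentioned above can be sketched as a symmetric kNN graph over cosine similarities. This is a minimal sketch: the choice of cosine similarity and of k is an assumption, not specified in the text.

```python
import numpy as np

def knn_graph(X, k=5):
    """Build a symmetric kNN adjacency matrix from node features.

    X: (n, d) feature matrix. Each node is linked to its k most
    cosine-similar neighbors (excluding itself), then the adjacency
    is symmetrized so the graph is undirected."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = Xn @ Xn.T                       # pairwise cosine similarity
    np.fill_diagonal(S, -np.inf)        # exclude self-similarity
    A = np.zeros_like(S)
    idx = np.argsort(-S, axis=1)[:, :k] # top-k neighbors per node
    rows = np.repeat(np.arange(len(X)), k)
    A[rows, idx.ravel()] = 1.0
    return np.maximum(A, A.T)           # symmetrize
```

After symmetrization a node may have more than k neighbors, since links are kept whenever either endpoint selected the other.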
Traditional graph-based semi-supervised learning methods usually use a single graph for node aggregation , which causes a single attribute to be emphasized ; when this attribute does not match the task goal , it misleads the training instead . 3 DGCN ARCHITECTURE . In this section , we present the overall framework of DGCN ; see Fig . 1 . The main idea of DGCN is that information exchange controlled by graphs emphasizing different attributes can extract richer features . To this end , we use two branches to extract information from two inputs at the same time . The node features of these two inputs are the same ; the only difference is the graphs that control the information exchange . In addition , to further expand the difference between the branches , we use a diversity loss ℓdiv . ( Figure 1 : the two branches produce layer-wise embeddings Hgcn1,l , Hgcn2,l , Hgat1,l and Hgat2,l ; the GCN views and GAT views are fused through an attention operation to obtain Hgcnc and Hgatc , which are sent to the final attention layer together with the previous Hgcn1,l , Hgcn2,l , Hgat1,l and Hgat2,l . ) 3.1 NOTATION & PROBLEM STATEMENT . Let G = ( V , A , X ) be an undirected graph . V is the set of nodes of the graph , composed of unlabeled ( Vu ) and labeled ( Vl ) nodes numbering nu and nl respectively ; n = nl + nu is the total number of nodes . A = [ aij ] ∈ Rn×n is the adjacency matrix : aij = 1 indicates that node i and node j are closely related in some attribute ; otherwise , aij = 0 . 3.2 BRANCHES . To capture different characteristics with the two branches ( also called views ) , we use a different network structure for each branch : GCN ( Kipf & Welling , 2016 ) and GAT ( Veličković et al. , 2017 ) . Given a graph G = ( V , A , X ) , both GCN and GAT extract richer features at a vertex by aggregating the features of vertices in its neighborhood ( Li et al. , 2019 ) .
So the node representation of the l-th layer , H_l , can be defined by :

$$H_l = \mathrm{Update}\big(\mathrm{Aggregate}(H_{l-1}, \Theta_l^{agg}),\ \Theta_l^{update}\big) \quad (1)$$

where $\Theta_l^{agg}$ and $\Theta_l^{update}$ are the learnable weights of the aggregation and update functions of the l-th layer , respectively , and the initial $H_0 = X$ . The aggregation and update functions are the essential components of GNNs , and the features extracted by different aggregation functions will clearly differ . Thus , we take advantage of two different networks , GCN and GAT , to obtain node representations . The node features output by the l-th GCN layer can be expressed as :

$$H_l = \sigma\big(\tilde{D}^{-\frac{1}{2}} (A + I) \tilde{D}^{-\frac{1}{2}} H_{l-1} W_l\big) \quad (2)$$

where $I \in \mathbb{R}^{n \times n}$ is the identity matrix , $A + I$ adds self-loops to the graph , $\tilde{D}$ is the diagonal degree matrix of $A + I$ , and $\sigma(\cdot)$ is the activation function . It can be seen from equation 2 that GCN aggregates neighbor features with weights given by the symmetric normalized Laplacian . Next , we introduce GAT , which uses an attention mechanism to compute neighbor weights . Through a learnable coefficient vector $a$ , GAT can assign learnable weights to each neighbor of a node . For node i , the weight $\alpha_{ij}$ between it and its neighbor node j is :

$$\alpha_{ij} = \frac{\exp\big(\mathrm{LeakyReLU}(a^\top [W h_i \,\Vert\, W h_j])\big)}{\sum_{k \in \mathcal{N}_i} \exp\big(\mathrm{LeakyReLU}(a^\top [W h_i \,\Vert\, W h_k])\big)} \quad (3)$$

where $\cdot^\top$ is the transposition operation and $\Vert$ represents concatenation . The forward propagation of node i in the l-th layer can then be represented as :

$$h_{l,i} = \big\Vert_{m=1}^{M} \, \sigma\Big(\sum_{j \in \mathcal{N}_i} \alpha_{l,ij}^{m} W_l^{m} h_{l-1,j}\Big) \quad (4)$$

where $h_{l,i}$ is the embedding of node i in the l-th layer , M is the number of independent attention heads , $\sigma$ is the activation function of GAT , and $\alpha_{l,ij}^{m}$ is the normalized attention coefficient computed by the m-th attention head ( see equation 3 ) . As can be seen from equation 4 , the weights GAT assigns to a node 's neighbors are learnable , so we can assign adaptive weights to different neighbors .
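Equations 2 and 3 can be sketched in NumPy as follows. This is a minimal single-head illustration, not the authors' implementation; shapes and the ReLU/LeakyReLU choices follow the standard GCN/GAT formulations.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer (equation 2): ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    n = A.shape[0]
    A_hat = A + np.eye(n)                          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)         # sigma = ReLU

def gat_attention(A, H, W, a):
    """Single-head GAT coefficients (equation 3): LeakyReLU-scored pairs,
    softmax-normalized over each node's neighborhood (self-loop included)."""
    Wh = H @ W                                     # (n, d')
    n, dp = Wh.shape
    # e_ij = a^T [Wh_i || Wh_j], computed by splitting a into two halves
    e = (Wh @ a[:dp])[:, None] + (Wh @ a[dp:])[None, :]
    e = np.where(e > 0, e, 0.2 * e)                # LeakyReLU (slope 0.2)
    e = np.where((A + np.eye(n)) > 0, e, -np.inf)  # restrict to neighbors
    e = np.exp(e - e.max(axis=1, keepdims=True))   # stable softmax
    return e / e.sum(axis=1, keepdims=True)        # rows sum to 1
```

With the attention matrix in hand, one GAT head is simply `alpha @ Wh`; equation 4 concatenates M such heads.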
Both of these methods take the existence of a connection between nodes as the premise of aggregation , and the GCN and GAT models we use each have their own advantages and disadvantages . The former accounts for the relationships between nodes ( a probability-conduction matrix ) but cannot learn neighbor weights dynamically ; the latter can assign dynamic weights to neighbors but ignores the influence of node degree on aggregation . Therefore , using these two branches , we can extract more complementary features from the input .
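The per-node fusion of the two branches' embeddings can be sketched with a simple attention over views. This is a common pattern for fusing multiple views; the exact form of DGCN's attention layer is not given in this excerpt, so the tanh scoring function and the parameters Wq and q here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_branches(embeddings, Wq, q):
    """Per-node attention over branch embeddings (hypothetical form).

    embeddings: list of B arrays, each (n, d); Wq: (d, d'); q: (d',).
    Each node gets a softmax weight per branch, and the fused embedding
    is the weighted sum of the branch embeddings."""
    Hs = np.stack(embeddings)                   # (B, n, d)
    scores = np.tanh(Hs @ Wq) @ q               # (B, n) branch scores per node
    alpha = softmax(scores, axis=0)             # attention over branches
    return (alpha[..., None] * Hs).sum(axis=0)  # (n, d) fused embedding
```

If the branches agree exactly, the fusion returns their shared embedding unchanged, regardless of the attention weights.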
The paper presents a GNN model that jointly encodes both topology and feature graphs to enhance the quality of node representations. In particular, the DGCN model uses two GCNs to learn and propagate two different types of node representations on the topology graph, and two GATs to learn and propagate two different types of node representations on the feature graph. Finally, the model applies attention mechanisms over these four types of node representations to produce the final node embeddings.
Emergent Properties of Foveated Perceptual Systems
The goal of this work is to characterize the representational impact that foveation operations have for machine vision systems , inspired by the foveated human visual system , which has higher acuity at the center of gaze and texture-like encoding in the periphery . To do so , we introduce models consisting of a first-stage fixed image transform followed by a second-stage learnable convolutional neural network , and we varied the first stage component . The primary model has a foveated-textural input stage , which we compare to a model with foveated-blurred input and a model with spatially-uniform blurred input ( both matched for perceptual compression ) , and a final reference model with minimal input-based compression . We find that : 1 ) the foveated-texture model shows similar scene classification accuracy as the reference model despite its compressed input , with greater i.i.d. generalization than the other models ; 2 ) the foveated-texture model has greater sensitivity to high-spatial frequency information and greater robustness to occlusion , w.r.t. the comparison models ; 3 ) both the foveated systems show a stronger center image-bias relative to the spatially-uniform systems even with a weight sharing constraint . Critically , these results are preserved over different classical CNN architectures throughout their learning dynamics .
Altogether , this suggests that foveation with peripheral texture-based computations yields an efficient , distinct , and robust representational format of scene information , and provides symbiotic computational insight into the representational consequences that texture-based peripheral encoding may have for processing in the human visual system , while also potentially inspiring the next generation of computer vision models via spatially-adaptive computation . 1 Introduction In the human visual system , incoming light is sampled with different resolution across the retina , a stark contrast to machines that perceive images at uniform resolution . One account for the nature of this foveated ( spatially-varying ) array in humans is related purely to sensory efficiency ( biophysical constraints ) ( Land & Nilsson , 2012 ; Eckstein , 2011 ) , e.g. , there is only a finite number of retinal ganglion cells ( RGC ) that can relay information from the retina to the Lateral Geniculate Nucleus ( LGN ) , constrained by the thickness of the optic nerve . Thus it is “ more efficient ” to have a moveable high-acuity fovea , rather than a non-moveable uniform resolution retina , when given a limited number of photoreceptors , as suggested in Akbas & Eckstein ( 2017 ) . Machines , however , do not have such wiring/resource constraints – and with their already proven success in computer vision ( LeCun et al. , 2015 ) – this raises the question of whether a foveated inductive bias is necessary for vision at all . However , it is also possible that foveation plays a functional role at the representational level , which may confer perceptual advantages – as most computational approaches have mainly focused on saccade planning ( Geisler et al. , 2006 ; Mnih et al. , 2014 ; Elsayed et al. , 2019 ; Daucé et al.
, 2020 ) . This idea has remained elusive in computer vision , but popular in vision science , and has been explored both psychophysically ( Loschky et al. , 2019 ) and computationally ( Poggio et al. , 2014 ; Cheung et al. , 2017 ; Han et al. , 2020 ) .
Submitted to 35th Conference on Neural Information Processing Systems ( NeurIPS 2021 ) . Do not distribute .
Other works that have suggested representational advantages of foveation include the work of Pramod et al . ( 2018 ) , where blurring the image in the periphery gave an increase in the object recognition performance of computer vision systems by reducing their false positive rate . In Wu et al . ( 2018 ) 's GistNet , directly introducing a dual-stream foveal-peripheral pathway in a neural network boosted object detection performance via scene gist and contextual cueing . Relatedly , the most well-known example of work that has directly shown the advantage of peripheral vision for scene processing in humans is Wang & Cottrell ( 2017 ) 's dual-stream CNN , which modelled the results of Larson & Loschky ( 2009 ) with a log-polar transform and adaptive Gaussian blurring ( RGC convergence ) . Taken together , these studies present support for the idea that foveation has useful representational consequences for perceptual systems . Further , these computational examples have symbiotic implications for understanding biological vision , indicating what the functional advantages of foveation in humans may be , via functional advantages in machine vision systems . Importantly , none of these studies introduce the notion of texture representation in the periphery – a key property of peripheral computation as posed in Rosenholtz ( 2016 ) . What functional consequences does this well-known texture-based coding in the visual periphery have , if any , on the nature of later-stage visual representation ? Here we directly examine this question .
Specifically , we introduce perceptual systems : two-stage models that have an image transform stage followed by a deep convolutional neural network . The primary model class of interest possesses a first-stage image transform that mimics texture-based foveation via visual crowding ( Levi , 2011 ; Pelli , 2008 ; Doerig et al. , 2019b , a ) in the periphery , as shown in Figure 1 ( Deza et al. , 2019 ) , rather than Gaussian blurring ( Wang & Cottrell , 2017 ; Pramod et al. , 2018 ; Malkin et al. , 2020 ) or compression ( Patney et al. , 2016 ; Kaplanyan et al. , 2019 ) . These rendered images capture image statistics akin to those preserved in human peripheral vision , resembling texture computation at the stage of area V2 , as argued in Freeman & Simoncelli ( 2011 ) ; Rosenholtz ( 2016 ) ; Wallis et al . ( 2019 ) . Our strategy is thus to compare these foveation-texture models – in terms of generalization , robustness and bias – to three other kinds of models . The first comparison model class – foveation-blur models – uses the same spatially-varying foveation operations but uses blur rather than texture-based input . The second class – uniform-blur models – uses a blur operation uniformly over the input , with the level of blur set to match the perceptual compression rates of the foveation-texture nets .
Finally , the last comparison model class is the reference , which has minimal distortion and serves as a perceptual upper bound from which to assess the impact of these different first-stage transforms . Note that our approach is different from the one taken by Wang & Cottrell ( 2017 ) , who built foveated models that fit results to human behavioural data such as those of Larson & Loschky ( 2009 ) . Rather , our goal is to explore the emergent properties in CNNs with texture-based foveation for scene representation compared to their controls , agnostic to any behavioural data or expected outcome . Naturally , the results of our experimental paradigm are symbiotic , as they can shed light on the importance of texture-based peripheral computation in humans , and could also suggest a new inductive bias for advanced machine perception of scenes . 2 Perceptual Systems We define perceptual systems as two-stage models with an image transform ( stage 1 , f ( ◦ ) : R^D → R^D ) that is relayed to a deep convolutional neural network ( stage 2 , g ( ◦ ) : R^D → R^d ) . Note that the first transform stage is a fixed operation over the input image , while the second stage has learnable parameters . In general , the perceptual system S ( ◦ ) , with retinal image input I ∈ R^D , is defined as :

$$S(I) = g(f(I)) \quad (1)$$

Such two-stage models have been growing in popularity ; the reason these models are designed not to be fully end-to-end differentiable is mainly to force one type of computation into the first stage of a system , such that the second stage g ( ◦ ) must figure out how to capitalize on this forced transformation , allowing us to assess the representational consequences of f ( ◦ ) ( see Figure 2 ) . For example , Parthasarathy & Simoncelli ( 2020 ) successfully imposed V1-like computation in stage 1 to explore the learned role of texture representation in later stages with a self-supervised objective , and Dapello et al .
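Equation 1 can be sketched as follows; the specific transforms are toy stand-ins (a box blur for f, a linear read-out for g), purely to illustrate the fixed-first-stage / learnable-second-stage split, not the paper's actual networks.

```python
import numpy as np

def f_blur(image, k=3):
    """Stage 1 (fixed): a stand-in spatially-uniform box blur, as a
    placeholder for the paper's first-stage image transform."""
    pad = k // 2
    p = np.pad(image, pad, mode="edge")
    out = np.zeros_like(image, dtype=float)
    H, W = image.shape
    for i in range(k):
        for j in range(k):
            out += p[i:i + H, j:j + W]
    return out / (k * k)

def g_linear(features, W):
    """Stage 2 (learnable): a stand-in linear read-out over flattened pixels;
    only these weights W would be trained."""
    return features.ravel() @ W

def perceptual_system(image, W):
    """S(I) = g(f(I)): the fixed transform feeds the learnable stage."""
    return g_linear(f_blur(image), W)
```

Swapping `f_blur` for a foveated transform while keeping `g_linear` fixed is exactly the kind of first-stage manipulation the paper studies.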
( 2020 ) found that fixing V1-like computation at stage 1 also aided adversarial robustness . At a higher level , our objective is similar : we would like to force a texture-based peripheral coding mechanism ( loosely inspired by V2 ; Ziemba et al. , 2016 ) at the first stage to check whether the perceptual system ( now foveated ) will learn to pick up on this newly made representation through g ( ◦ ) and make ‘ good ’ use of it , potentially shedding light on the functionality hypothesis for machines and humans . 2.1 Stage 1 : Image Transform To model the computations of a texture-based foveated visual system , we employed the model of Deza et al . ( 2019 ) ( henceforth Foveated-Texture Transform ) . This model is inspired by the metamer synthesis model of Freeman & Simoncelli ( 2011 ) , where new images are rendered to have locally matching texture statistics ( Portilla & Simoncelli , 2000 ; Balas et al. , 2009 ) in pooling regions that grow with eccentricity in the visual periphery , with structural constraints . Analogously , the Deza et al . ( 2019 ) Foveation Transform uses a foveated feed-forward style transfer ( Huang & Belongie , 2017 ) network to latently perturb the image in the direction of its locally matched texture ( see Figure 1 ) . Altogether , f : R^D → R^D is a convolutional auto-encoder that is non-foveated when the latent space is unperturbed , f0 ( I ) = D ( E ( I ) ) , and foveated ( ◦Σ ) when the latent space is perturbed via localized style transfer , f∗ ( I ) = D ( EΣ ( I ) ) , for a given encoder-decoder ( E , D ) pair . Note that with proper calibration , the resulting distorted image can be a visual metamer ( for a human ) : a carefully perturbed image perceptually indistinguishable from its reference image ( Freeman & Simoncelli , 2011 ; Rosenholtz et al. , 2012 ; Feather et al. , 2019 ; Vacher et al. , 2020 ) .
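A toy sketch of the two transforms: f0 passes the image through the encoder-decoder unperturbed, while f∗ perturbs the latent code with a strength that grows with eccentricity. The encoder, decoder, and "texture direction" here are illustrative placeholders for the trained style-transfer network, which this excerpt does not specify in detail.

```python
import numpy as np

def E(img):                       # stand-in "encoder": identity latent
    return img.copy()

def D(z):                         # stand-in "decoder"
    return z

def f0(img):
    """Non-foveated reference transform: f0(I) = D(E(I))."""
    return D(E(img))

def f_star(img, strength=1.0, rng=None):
    """Foveated transform sketch: perturb the latent toward a (stand-in)
    texture direction, with the perturbation growing with eccentricity,
    so the fovea is untouched and the periphery is distorted."""
    if rng is None:
        rng = np.random.RandomState(0)
    z = E(img)
    H, W = z.shape
    yy, xx = np.mgrid[:H, :W]
    ecc = np.hypot(yy - H / 2, xx - W / 2) / (min(H, W) / 2)  # 0 at fovea
    texture_dir = rng.randn(H, W)  # placeholder for the style direction
    return D(z + strength * ecc * texture_dir)
```

Raising `strength` beyond the metameric boundary mirrors the paper's deliberately exaggerated distortions.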
However , and importantly , in the present work we exaggerated the strength of these texture-driven distortions ( beyond the metameric boundary ) , as our aim here is to understand the implications of this kind of texturized peripheral input on later-stage representations ( e.g. , following a similar approach to Dapello et al . ( 2020 ) ) . By using an extreme manipulation , we reasoned , the consequences of these distortions would be accentuated , making them more detectable in our subsequent experiments . 2.2 Stage 2 : Convolutional Neural Network backbone The transformed images ( stage 1 ) are passed into a standard convolutional neural network architecture . Here we tested two different base architectures : AlexNet ( Krizhevsky et al. , 2012 ) and ResNet18 ( He et al. , 2016 ) . The goal of running these experiments on two different hierarchically local architectures is to let us examine the consequences of all image transforms ( with our main focus on texture-based foveation ) that are robust across these different network architectures . Further , this CNN backbone ( g : R^D → R^d ) should not be viewed in the traditional way , as an end-to-end input/output system where the input is the retinal image ( I ) and the output is a one-hot vector encoding a d-class label in R^d . Rather , the CNN ( g ) acts as a loose proxy for higher stages of visual processing ( as it receives input from f ) , analogous to the two-stage model of Lindsey et al . ( 2019 ) . 2.3 Critical Manipulations : Foveated vs Non-Foveated Perceptual Systems Now we can define the first two of the four perceptual systems that will perform 20-way scene categorization : Foveation-Texture receives an image input , applies the foveation-texture transform f∗ ( ◦ ) , and relays it through the CNN g ( ◦ ) .
Similarly , Reference performs a non-foveated transform f0 ( ◦ ) , where images are sent through the same convolutional auto-encoder D ( E ( I ) ) of f∗ ( ◦ ) , but with the parameter that determines the degree of texture style transfer set to 0 – producing an upper-bounded , compressed and non-foveated reference image – which is then relayed through the CNN g ( ◦ ) . Both of these systems are depicted in Figure 2 ( A ) . As the foveation-texture model has less information from the input , relative to the reference networks , we next designed two further comparison models which have a comparable amount of information after the input stage , but with different amounts of blurring in the stage 1 operations . To create matched-resource systems , our broad approach was to use a Rate-Distortion ( RD ) optimization procedure ( Ballé et al. , 2016 ) to match information between the stage 1 operations , given the SSIM ( Wang et al. , 2004 ) image quality assessment ( IQA ) metric . Specifically , to create the matched-resource Uniform-Blur , we identified the standard deviation of the Gaussian blurring kernel ( the ‘ distortion ’ D ) such that we could render a perceptually resource-matched Gaussian-blurred image – w.r.t. Reference – that matches the perceptual transmission ‘ rate ’ R of Foveation-Texture via the SSIM perceptual metric ( Wang et al. , 2004 ) . This procedure yields a model class with uniform blur across the image , but with the same stage 1 information content as Foveation-Texture . And to create the matched-resource Foveation-Blur , we carried out this same RD optimization pipeline per eccentricity ring ( assuming homogeneity across pooling regions at the same eccentricity ) , thus finding a set of blurring coefficients that vary as a function of eccentricity .
This procedure yielded a different matched-resource model class , this time with spatially-varying blur . Figure 3 ( B ) summarizes our solution to this problem ; details of the RD optimization are presented in Appendix A . Ultimately , it is important to note that the selection of the perceptual metric ( SSIM in our case ) plays a role in this optimization procedure and sets the context in which we can call a network “ resource-matched ” . We selected SSIM given its monotonic relationship of distortions to human perceptual judgements , its symmetric upper-bounded nature , its sensitivity to contrast , local structure and spatial frequency , and its popularity in the Image Quality Assessment ( IQA ) community . However , to anticipate any possible discrepancy in the interpretability of our future results , we additionally computed the Mean Square Error ( MSE ) , MS-SSIM , and 11 other IQA metrics as recently explored in Ding et al . ( 2020 ) to compare all other image transforms to the Reference on the testing set . Our logic is the following : if the MSE is greater ( ↑ ) for Foveation-Texture compared to Foveation-Blur and Uniform-Blur , then the current distortion levels place Foveation-Texture at a resource ‘ disadvantage ’ relative to the other transforms , and any interesting results would not only hold but also be strengthened . The same logic applies to the other IQA metrics , contingent on their direction of greater distortion . Indeed , these patterns of results were evident across IQA metrics – except those tolerant to texture such as DISTS ( Ding et al.
, 2020 ) – as shown in Table 1 and Appendix C . 3 Experiments Altogether , the four previously introduced perceptual systems help us answer three key questions that we should keep in mind throughout the rest of the paper : 1 ) Foveation-Texture vs Reference will tell us how a texture-based foveation mechanism compares to its perceptual upper bound – shedding light on arguments about computational efficiency . 2 ) Foveation-Texture vs Foveation-Blur will tell us whether any potentially interesting pattern of results is due to the type/stage of foveation ; this will help us measure the contributions of adaptive texture coding vs adaptive Gaussian blurring . 3 ) Foveation-Texture vs Uniform-Blur will tell us how these perceptual systems ( one foveated , the other not ) behave when allocated a fixed number of perceptual resources under certain assumptions – potentially shedding light on why biological organisms like humans have foveated , texture-based computation in the visual field instead of uniform spatial processing like modern machines . Dataset : All previously introduced models were trained to perform 20-way scene categorization . Scene categories were selected from the Places2 dataset ( Zhou et al. , 2017 ) , and were re-partitioned into 4500 images per category for training , 250 per category for validation , and 250 per category for testing . The categories included were : aquarium , badlands , bedroom , bridge , campus , corridor , forest path , highway , hospital , industrial area , japanese garden , kitchen , mansion , mountain , ocean , office , restaurant , skyscraper , train interior , waterfall .
Samples of these scenes , coupled with their image transforms , can be seen in Figure 4 . Networks : Training : The stage 2 convolutional neural networks of each perceptual system were trained , resulting in 40 image-transform-based networks per architecture ( AlexNet/ResNet18 ) : 10 Foveation-Texture , 10 Reference , 10 Uniform-Blur , and 10 Foveation-Blur – totalling 80 trained networks – to compute the relevant error bars shown in all figures ( standard deviations , not standard errors ) and to reduce effects of randomness driven by the particular network initialization . All systems were paired such that their stage 2 architectures g ( ◦ ) started with the same random weight initialization prior to training . Testing : The networks of each perceptual system were tested on the same type of image distribution they were trained on . Learning Dynamics : Available in Appendix H . 3.1 Texture-based foveation provides greater i.i.d. generalization than blur-based foveation How well does the foveation-texture system classify scene images ( i.i.d. generalization ) compared to the other matched-resource models that use blurring , and to the reference ? The results can be seen in Figure 5 . Each bar 's height reflects overall accuracy for each of the 10 neural network backbone runs ( g ( ◦ ) ) per system , with a square marker at the top indicating the i.i.d. accuracy . We found that Foveation-Texture had similar i.i.d. performance to the Reference – the undistorted perceptual upper bound – and greater performance than both Uniform-Blur and Foveation-Blur . Thus the compression induced by foveated texture generally maintains scene category information . We next performed a contrived experiment in which we tested how well each perceptual system could classify the stage 1 outputs of the other models . For example , we showed a set of foveated blurred images to a network trained on foveated texture images .
This experiment is in essence a test of out-of-distribution ( o.o.d. ) generalization . The results of these tests are also shown in Figure 5 . For each model , the classification accuracy on the inputs from the other stage 1 transforms is indicated by the height of the different colored diamonds , where the color corresponds to the stage 1 operation . This experiment yielded a rather complex set of patterns that even differed depending on the architecture ( AlexNet vs ResNet18 as g ( ◦ ) ) . Generally , the Foveation-Texture model had a similar profile of generalization to the Reference model . However , the networks trained with the different types of blur ( Uniform-Blur & Foveation-Blur ) in some cases showed very high o.o.d. generalization – though once again this is contingent on g ( ◦ ) . Unraveling the underlying causes of this last set of results sets the stage for our experiments in the rest of this section . So far it seems that Foveation-Texture has learned to properly capitalize on the texture information in the periphery and still outperform all other matched-resource systems , even while heavily penalized under several IQA metrics ( Table 1 ) – highlighting the critical differences between texture and blur for scene processing . As for the interaction of Uniform-Blur with g ( ◦ ) , it is likely that the residual connections are counterproductive to o.o.d. generalization ( or that it has overfit ) .
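The low- and high-pass stimuli used in the spatial-frequency experiment that follows can be generated as below. This is a minimal separable-Gaussian sketch; the mid-gray residual of 0.5 added to the high-pass stimuli is an assumed value, and edge handling differs from whatever the authors used.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius=None):
    """Normalized 1-D Gaussian kernel (radius defaults to 3*sigma)."""
    if radius is None:
        radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def low_pass(img, sigma):
    """Low-pass stimulus: separable Gaussian blur (sigma in pixels)."""
    k = gaussian_kernel1d(sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def high_pass(img, sigma, residual=0.5):
    """High-pass stimulus: reference minus its low-pass version, plus a
    constant residual (0.5 mid-gray offset is an assumption)."""
    return img - low_pass(img, sigma) + residual
```

Sweeping `sigma` over the lists quoted in the text would reproduce the full set of filtering levels.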
Interestingly , humans have a combination of texture-based and adaptive-Gaussian peripheral computation ( Ehinger & Rosenholtz , 2016 ) , so future work should look into the effects of continual learning , joint training , or a combined image transform ( Texture + Blur ) to merge the gains of both i.i.d. and o.o.d. generalization . 3.2 Texture-based foveated systems preserve greater high-spatial-frequency sensitivity We next examined whether the learned feature representations of these models are more reliant on low- or high-pass spatial frequency information . To do so , we filtered the testing image set at multiple levels to create both high-pass and low-pass frequency stimuli and assessed scene-classification performance over these images for all models , as shown in Figure 6 . Low-pass frequency stimuli were rendered by convolving a Gaussian filter of standard deviation σ = [ 0 , 1 , 3 , 5 , 7 , 10 , 15 , 40 ] pixels with the foveation transform ( f0 , f̂0 , f∗ , f̂∗ ) outputs . Similarly , the high-pass stimuli were computed by subtracting from the reference image its low-pass filtered version with σ = [ ∞ , 3 , 1.5 , 1 , 0.7 , 0.55 , 0.45 , 0.4 ] pixels and adding a residual . These are the same values used in the experiments of Geirhos et al . ( 2019 ) . We found that Foveation-Texture and Reference trained networks were more sensitive to high-pass frequency information , while Foveation-Blur and Uniform-Blur were selective for low-pass frequency stimuli . Although one may naively assume that this is an expected result – as both Foveation-Blur and Uniform-Blur networks are exposed to a blurring procedure – it is important to note that : 1 ) the foveal resolution has been preserved between Foveation-Texture and Foveation-Blur ( see Fig . 4 ) , thus high-spatial-frequency sensitivity could still have predominated in Foveation-Blur , but it did not ( though see Fig .
6 A2/B2 , where these high-pass Gabors are still learned , implying that higher layers in g ( ◦ ) overshadow their computation ) ; and 2 ) Foveation-Texture could also have learned to develop low-spatial-frequency sensitivity given the crowding/texture-like peripheral distortion , but this was not the case ( likely due to the weight-sharing constraint embedded in the CNN architecture ; Elsayed et al. , 2020 ) . Finally , the robustness to low-pass filtering of Foveation-Blur suggests that foveation via adaptive Gaussian blurring may implicitly contribute to scale invariance , as also shown in Poggio et al . ( 2014 ) ; Cheung et al . ( 2017 ) ; Han et al . ( 2020 ) . 3.3 Texture-based foveation develops greater robustness to occlusion We next examined how all perceptual systems classify scene information under conditions of visual field loss : either from left to right ( left2right ) , from top to bottom ( top2bottom ) , in the center part of the image ( scotoma ) , or in the periphery ( glaucoma ) . This manipulation lets us examine the degree to which learned representations rely on different parts of the image to classify scene categories . Critically , here we apply the occlusion after the stage 1 operation . The results are shown in Figure 7 . Overall , we found that across all types of occlusion the Foveation-Texture models have greater robustness than both the Foveation-Blur and Uniform-Blur models . Further , the Foveation-Texture models have nearly equivalent performance to the Reference . In contrast , both models with blurring , whether uniform or spatially varying , were far worse at classifying scenes under conditions of visual field loss .
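The four visual-field-loss conditions can be sketched as masks applied after stage 1. The occluded fraction and the fill value are assumed parameters; the paper sweeps the amount of occlusion, which here corresponds to varying `frac`.

```python
import numpy as np

def occlude(img, mode, frac=0.5, fill=0.0):
    """Apply one of the four occlusion conditions to a (H, W) image:
    left2right / top2bottom strip, central scotoma, or peripheral glaucoma."""
    out = img.copy()
    H, W = img.shape[:2]
    if mode == "left2right":
        out[:, : int(W * frac)] = fill
    elif mode == "top2bottom":
        out[: int(H * frac), :] = fill
    elif mode in ("scotoma", "glaucoma"):
        yy, xx = np.mgrid[:H, :W]
        r = np.hypot(yy - H / 2, xx - W / 2)   # distance from image center
        radius = frac * min(H, W) / 2
        mask = r < radius if mode == "scotoma" else r >= radius
        out[mask] = fill
    else:
        raise ValueError(mode)
    return out
```

Scotoma removes the central disk while glaucoma removes everything outside it, matching the center-vs-periphery contrast described above.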
These results highlight that the texture-based information content captured by the foveation-texture nets preserves scene category content in a dramatically different way than simple lower-resolution sampling – perhaps using the texture bias ( Geirhos et al. , 2019 ) in their favor , as humans too use texture as their classification strategy for scenes ( Renninger & Malik , 2004 ) . In addition , the Foveation-Texture model is not overfitting : although recent work has suggested an accuracy vs robustness trade-off , where networks trained to excel under the i.i.d. generalization condition do worse under other perceptual tasks – mainly adversarial ones ( Zhang et al. , 2019 ) – we did not observe such a trade-off , and greater accuracy did not imply lower robustness to occlusion . 3.4 Foveated systems learn a stronger center image bias than non-foveated systems It is possible that foveated systems weight visual information more strongly in the foveal region than in the peripheral region , as hinted by our occlusion results ( the different rates of decay of the accuracy curves in the scotoma and glaucoma conditions ) . To resolve this question , we conducted an experiment in which we created windowed cue-conflict stimuli : we re-rendered our set of testing images with one image category in the fovea and another in the periphery ( each systematically aligned with a different class ; e.g. , aquarium with badlands ) . We also had an additional condition where the conflicting cue was square-like , uniformly and randomly paired with a conflicting scene class , and more finely sampled .
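The windowed cue-conflict stimulus can be sketched as pasting a central disk of one scene onto the periphery of another. The circular window and the area-ratio parameterization are assumptions about the exact stimulus construction, which this excerpt does not fully specify.

```python
import numpy as np

def cue_conflict(foveal_img, peripheral_img, ratio=0.5):
    """Windowed cue-conflict stimulus: one scene category inside a central
    circular window, another in the periphery.

    ratio is the foveal share of the total image area (an assumed
    parameterization of the fovea-periphery visual area ratio)."""
    H, W = foveal_img.shape[:2]
    yy, xx = np.mgrid[:H, :W]
    r = np.hypot(yy - H / 2, xx - W / 2)
    # radius such that the central disk covers `ratio` of the image area
    radius = np.sqrt(ratio * H * W / np.pi)
    mask = r < radius
    out = peripheral_img.copy()
    out[mask] = foveal_img[mask]
    return out
```

Sweeping `ratio` and scoring accuracy for the foveal vs peripheral category reproduces the cross-over analysis described in the next paragraph.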
We then systematically varied the fovea-periphery visual-area ratio and re-examined classification accuracy for both the foveal and peripheral scenes (Figure 8). We found that the Foveation-Texture and Foveation-Blur transforms pushed the networks g(◦) to weight information in the center of the image more strongly than the Reference and Uniform-Blur systems for scene categorization. A qualitative way of seeing this foveal bias is to check the foveal/peripheral ratio at which the two accuracy lines cross: the more leftward the cross-over point (⊗), the higher the foveal bias (highlighted through the vertical bars). This result was unexpected, as we initially predicted that g(◦) would weight the peripheral information more strongly, since it has been implicitly regularized through a distortion. However, this was not the case, and our findings are similar to those of Wang & Cottrell (2017), who showed this foveal bias in a foveated system with adaptive blur and a dual-stream neural network. Thus, these results indicate that the spatially varying computation from center to periphery is mainly responsible for the development of a center image bias, even with a weight-sharing constraint. Furthermore, it is possible that one function of any spatially varying coding mechanism in the visual field is to force the perceptual system to attend to the foveal region, avoiding the shortcut of learning to attend to the entire visual field when unnecessary (Geirhos et al., 2020).

4 Discussion

The present work was designed to probe the impact of foveated texture-based input representations in machine vision systems.
To do this, we specifically compared the learned perceptual signatures in the second stage of visual processing across a set of networks trained on different image transforms. We found that when comparing Foveation-Texture to its matched-resource models that differed in computation, Foveation-Blur (foveated with adaptive Gaussian blur) and Uniform-Blur (non-foveated with uniform blur), peripheral texture encoding did lead to specific representational signatures: greater i.i.d. generalization, preservation of high-spatial-frequency sensitivity, and robustness to occlusion, even as high as its perceptual upper bound (Reference). We also found that foveation in general seems to induce a focusing mechanism servicing the foveal/central region, which neither a perceptually upper-bounded system (Reference) nor a non-foveated compressed system (Uniform-Blur) developed as strongly.

The particular consequences of our foveation stage raise interesting future directions about what computational advantages could arise when training on object categorization (Pramod et al., 2018) coupled with eye movements (Akbas & Eckstein, 2017; Deza et al., 2017), as objects are typically centered in view and have different hierarchical/compositional priors than scenes (Zhou et al., 2014; Deza et al., 2020), in addition to different processing mechanisms (Renninger & Malik, 2004; Ehinger & Rosenholtz, 2016). We are currently exploring the impact of these foveated texture-based representational signatures on shape-vs-texture bias for object recognition, similar to Geirhos et al. (2019) and Hermann et al. (2020), and assessing their interaction with scene representation.

Further, a future direction is investigating the effects of texture-based foveation on adversarial robustness. Motivated by the recent work of Dapello et al.
(2020), which has shown promise for adversarial robustness by enforcing stochasticity and V1-like computation, obeying the Nyquist sampling frequency of these filters w.r.t. the image (Serre et al., 2007) in addition to a natural gamut of orientations and frequencies as studied in De Valois et al. (1982), it raises the question of how much further we can push for robustness in hybrid perceptual systems like these, drawing on even more biological mechanisms. Works such as Luo et al. (2015) and, recently, Reddy et al. (2020) and Kiritani & Ono (2020) have already taken steps in this direction by coupling fixations with a spatially varying retina. However, the representational impact of texture-based foveation on adversarial robustness, and its symbiotic implications for human vision, remains an open question.

References

Akbas, E. and Eckstein, M. P. Object detection through search with a foveated visual system. PLoS Computational Biology, 13(10):e1005743, 2017.

Balas, B., Nakano, L., and Rosenholtz, R. A summary-statistic representation in peripheral vision explains visual crowding. Journal of Vision, 9(12):13–13, 2009.

Ballé, J., Laparra, V., and Simoncelli, E. P. End-to-end optimized image compression. arXiv preprint arXiv:1611.01704, 2016.

Cheung, B., Weiss, E., and Olshausen, B. Emergence of foveal image sampling from learning to attend in visual scenes. International Conference on Learning Representations (ICLR), 2017.

Dapello, J., Marques, T., Schrimpf, M., Geiger, F., Cox, D. D., and DiCarlo, J. J. Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations. bioRxiv, 2020.

Daucé, E., Albiges, P., and Perrinet, L. U. A dual foveal-peripheral visual processing model implements efficient saccade selection. Journal of Vision, 20(8):22–22, 2020.

De Valois, R. L., Yund, E.
W., and Hepler, N. The orientation and direction selectivity of cells in macaque visual cortex. Vision Research, 22(5):531–544, 1982.

Deza, A. and Eckstein, M. Can peripheral representations improve clutter metrics on complex scenes? In Advances in Neural Information Processing Systems, pp. 2847–2855, 2016.

Deza, A., Peters, J. R., Taylor, G. S., Surana, A., and Eckstein, M. P. Attention allocation aid for visual search. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 220–231, 2017.

Deza, A., Jonnalagadda, A., and Eckstein, M. P. Towards metamerism via foveated style transfer. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BJzbG20cFQ.

Deza, A., Liao, Q., Banburski, A., and Poggio, T. Hierarchically local tasks and deep convolutional networks. CBMM Memo, 2020.

Ding, K., Ma, K., Wang, S., and Simoncelli, E. Image quality assessment: Unifying structure and texture similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.

Ding, K., Ma, K., Wang, S., and Simoncelli, E. P. Comparison of image quality models for optimization of image processing systems. arXiv e-prints, arXiv:2005.01338, May 2020.

Doerig, A., Bornet, A., Choung, O. H., and Herzog, M. H. Crowding reveals fundamental differences in local vs. global processing in humans and machines. bioRxiv, 2019a. doi: 10.1101/744268. URL https://www.biorxiv.org/content/early/2019/08/23/744268.

Doerig, A., Bornet, A., Rosenholtz, R., Francis, G., Clarke, A. M., and Herzog, M. H. Beyond Bouma's window: How to explain global aspects of crowding? PLoS Computational Biology, 15(5):e1006580, 2019b.

Eckstein, M. P. Visual search: A retrospective. Journal of Vision, 11(5):14–14, 2011.

Eckstein, M. P., Koehler, K., Welbourne, L.
E., and Akbas, E. Humans, but not deep neural networks, often miss giant targets in scenes. Current Biology, 27(18):2827–2832, 2017.

Ehinger, K. A. and Rosenholtz, R. A general account of peripheral encoding also predicts scene perception performance. Journal of Vision, 16(2):13–13, 2016.

Elsayed, G., Kornblith, S., and Le, Q. V. Saccader: Improving accuracy of hard attention models for vision. In Advances in Neural Information Processing Systems, pp. 700–712, 2019.

Elsayed, G., Ramachandran, P., Shlens, J., and Kornblith, S. Revisiting spatial invariance with low-rank local connectivity. In International Conference on Machine Learning, pp. 2868–2879. PMLR, 2020.

Feather, J., Durango, A., Gonzalez, R., and McDermott, J. Metamers of neural networks reveal divergence from human perceptual systems. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 10078–10089. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/9198-metamers-of-neural-networks-reveal-divergence-from-human-perceptual-systems.pdf.

Freeman, J. and Simoncelli, E. Metamers of the ventral stream. Nature Neuroscience, 14(9):1195–1201, 2011.

Fridman, L., Jenik, B., Keshvari, S., Reimer, B., Zetzsche, C., and Rosenholtz, R. SideEye: A generative neural network based simulator of human peripheral vision. arXiv preprint arXiv:1706.04568, 2017.

Gatys, L. A., Ecker, A. S., and Bethge, M. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2414–2423, 2016.

Geirhos, R., Temme, C. R., Rauber, J., Schütt, H. H., Bethge, M., and Wichmann, F. A. Generalisation in humans and deep neural networks.
In Advances in Neural Information Processing Systems, pp. 7538–7550, 2018.

Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F. A., and Brendel, W. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bygh9j09KX.

Geirhos, R., Jacobsen, J.-H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M., and Wichmann, F. A. Shortcut learning in deep neural networks. arXiv preprint arXiv:2004.07780, 2020.

Geisler, W. S. and Perry, J. S. Real-time foveated multiresolution system for low-bandwidth video communication. In Human Vision and Electronic Imaging III, volume 3299, pp. 294–305. International Society for Optics and Photonics, 1998.

Geisler, W. S., Perry, J. S., and Najemnik, J. Visual search: The role of peripheral information measured using gaze-contingent displays. Journal of Vision, 6(9):1–1, 2006.

Han, Y., Roig, G., Geiger, G., and Poggio, T. Scale and translation-invariance for novel objects in human vision. Scientific Reports, 10(1):1–13, 2020.

He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Hermann, K. L., Chen, T., and Kornblith, S. The origins and prevalence of texture bias in convolutional neural networks. Neural Information Processing Systems, 2020.

Huang, X. and Belongie, S. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510, 2017.

Kaplanyan, A. S., Sochenov, A., Leimkühler, T., Okunev, M., Goodall, T., and Rufo, G.
DeepFovea: Neural reconstruction for foveated rendering and video compression using learned statistics of natural videos. ACM Transactions on Graphics (TOG), 38(6):1–13, 2019.

Kiritani, T. and Ono, K. Recurrent attention model with log-polar mapping is robust against adversarial attacks. arXiv preprint arXiv:2002.05388, 2020.

Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.

Land, M. F. and Nilsson, D.-E. Animal Eyes. Oxford University Press, 2012.

Laparra, V., Ballé, J., Berardino, A., and Simoncelli, E. P. Perceptual image quality assessment using a normalized Laplacian pyramid. Electronic Imaging, 2016(16):1–6, 2016.

Larson, A. M. and Loschky, L. C. The contributions of central versus peripheral vision to scene gist recognition. Journal of Vision, 9(10):6–6, 2009.

Larson, E. C. and Chandler, D. M. Most apparent distortion: Full-reference image quality assessment and the role of strategy. Journal of Electronic Imaging, 19(1):011006, 2010.

LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature, 521(7553):436, 2015.

Levi, D. M. Visual crowding. Current Biology, 21(18):R678–R679, 2011.

Lindsey, J., Ocko, S. A., Ganguli, S., and Deny, S. The effects of neural resource constraints on early visual representations. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=S1xq3oR5tQ.

Loschky, L. C., Szaffarczyk, S., Beugnet, C., Young, M. E., and Boucart, M. The contributions of central and peripheral vision to scene-gist recognition with a 180° visual field. Journal of Vision, 19(5):15–15, 2019.

Luo, Y., Boix, X., Roig, G., Poggio, T., and Zhao, Q.
Foveation-based mechanisms alleviate adversarial examples. arXiv preprint arXiv:1511.06292, 2015.

Malkin, E., Deza, A., and Poggio, T. CUDA-optimized real-time rendering of a foveated visual system. In NeurIPS 2020 Workshop SVRHM, 2020. URL https://openreview.net/forum?id=ZMsqkUadtZ7.

Mnih, V., Heess, N., Graves, A., et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204–2212, 2014.

Parthasarathy, N. and Simoncelli, E. P. Self-supervised learning of a biologically-inspired visual texture model. arXiv preprint arXiv:2006.16976, 2020.

Patney, A., Salvi, M., Kim, J., Kaplanyan, A., Wyman, C., Benty, N., Luebke, D., and Lefohn, A. Towards foveated rendering for gaze-tracked virtual reality. ACM Transactions on Graphics (TOG), 35(6):179, 2016.

Pelli, D. G. Crowding: A cortical constraint on object recognition. Current Opinion in Neurobiology, 18(4):445–451, 2008.

Poggio, T., Mutch, J., and Isik, L. Computational role of eccentricity dependent cortical magnification. arXiv preprint arXiv:1406.1770, 2014.

Portilla, J. and Simoncelli, E. P. A parametric texture model based on joint statistics of complex wavelet coefficients. International Journal of Computer Vision, 40(1):49–70, 2000.

Pramod, R. T., Katti, H., and Arun, S. P. Human peripheral blur is optimal for object recognition. arXiv preprint arXiv:1807.08476, 2018.

Reddy, M. V., Banburski, A., Pant, N., and Poggio, T. Biologically inspired mechanisms for adversarial robustness. arXiv preprint arXiv:2006.16427, 2020.

Renninger, L. W. and Malik, J. When is scene identification just texture recognition? Vision Research, 44(19):2301–2311, 2004.

Rosenholtz, R. Capabilities and limitations of peripheral vision. Annual Review of Vision Science, 2:437–457, 2016.

Rosenholtz, R.
, Huang, J., Raj, A., Balas, B. J., and Ilie, L. A summary statistic representation in peripheral vision explains visual search. Journal of Vision, 12(4):14–14, 2012.

Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. ImageNet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015.

Serre, T., Wolf, L., Bileschi, S., Riesenhuber, M., and Poggio, T. Robust object recognition with cortex-like mechanisms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(3):411–426, 2007.

Sheikh, H. R. and Bovik, A. C. Image information and visual quality. IEEE Transactions on Image Processing, 15(2):430–444, 2006.

Shumikhin, M. M. A. Quantitative measures of crowding susceptibility in peripheral vision for large datasets. PhD thesis, Massachusetts Institute of Technology, 2020.

Vacher, J., Davila, A., Kohn, A., and Coen-Cagli, R. Texture interpolation for probing visual perception. Advances in Neural Information Processing Systems, 33, 2020.

Wallis, T. S., Funke, C. M., Ecker, A. S., Gatys, L. A., Wichmann, F. A., and Bethge, M. Image content is more important than Bouma's law for scene metamers. eLife, 8:e42512, 2019.

Wallis, T. S. A., Funke, C. M., Ecker, A. S., Gatys, L. A., Wichmann, F. A., and Bethge, M. A parametric texture model based on deep convolutional features closely matches texture appearance for humans. Journal of Vision, 17(12), Oct 2017. doi: 10.1167/17.12.5. URL http://doi.org/10.1167/17.12.5.

Wang, P. and Cottrell, G. W. Central and peripheral vision for scene recognition: A neurocomputational modeling exploration. Journal of Vision, 17(4):9–9, 2017.

Wang, Z. and Simoncelli, E. P.
Translation insensitive image similarity in complex wavelet domain. In Proceedings (ICASSP '05), IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005, volume 2, pp. ii–573. IEEE, 2005.

Wang, Z., Simoncelli, E. P., and Bovik, A. C. Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003, volume 2, pp. 1398–1402. IEEE, 2003.

Wang, Z., Bovik, A. C., Sheikh, H. R., and Simoncelli, E. P. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.

Wu, K., Wu, E., and Kreiman, G. Learning scene gist with convolutional neural networks to improve object recognition. In 2018 52nd Annual Conference on Information Sciences and Systems (CISS), pp. 1–6. IEEE, 2018.

Xue, W., Zhang, L., Mou, X., and Bovik, A. C. Gradient magnitude similarity deviation: A highly efficient perceptual image quality index. IEEE Transactions on Image Processing, 23(2):684–695, 2013.

Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., and Jordan, M. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, pp. 7472–7482. PMLR, 2019.

Zhang, L., Zhang, L., Mou, X., and Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8):2378–2386, 2011.

Zhang, L., Shen, Y., and Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Transactions on Image Processing, 23(10):4270–4281, 2014.

Zhang, R., Isola, P., Efros, A. A., Shechtman, E., and Wang, O. The unreasonable effectiveness of deep features as a perceptual metric.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595, 2018.

Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., and Torralba, A. Object detectors emerge in deep scene CNNs, 2014.

Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., and Torralba, A. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6):1452–1464, 2017.

Ziemba, C. M., Freeman, J., Movshon, J. A., and Simoncelli, E. P. Selectivity and tolerance for visual texture in macaque V2. Proceedings of the National Academy of Sciences, 113(22):E3140–E3149, 2016.

Checklist

1. For all authors...

(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] We have focused our experiments on implementing a two-stage model that has a texture-based foveation transform, and compared it to a reference model (a perceptual upper bound) and two matched-resource systems: one foveated with blur and another uniformly blurred.

(b) Did you describe the limitations of your work? [Yes] At the end of each Experiments subsection we provide a mini-discussion of our work and how it fits or does not fit the literature. We mainly discuss limitations in the Discussion at the end (see Section 4).

(c) Did you discuss any potential negative societal impacts of your work? [No] To our knowledge, there are none.

(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]

2. If you are including theoretical results...

(a) Did you state the full set of assumptions of all theoretical results? [Yes] We include only one supplementary theoretical result and proof, in Appendix B.

(b) Did you include complete proofs of all theoretical results? [Yes] See above.

3.
If you ran experiments...

(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See the Supplementary Material (which provides access to a URL).

(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] These are reported briefly in Section 3, and in more detail throughout the Appendix.

(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] All experiments were run with paired initial noise seeds to control for matched initial conditions under SGD (though the order in which the networks were exposed to images differed). All error bars report 1 standard deviation, and these can be seen throughout Sections 3.2, 3.3, and 3.4.

(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] These are specified in the Appendix.

4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...

(a) If your work uses existing assets, did you cite the creators? [Yes] We use a re-partition of the Places2 dataset, which is cited.

(b) Did you mention the license of the assets? [No] To our knowledge, the Places2 dataset is widely known and free to use.

(c) Did you include any new assets either in the supplemental material or as a URL? [No] Everything in the Supplementary Material/URL has been created/derived by us.

(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating?
[N/A] We did not run any experiments with humans.

(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] We did not run any experiments with humans, and the scene classes we used were all publicly known and non-offensive places, e.g., ocean.

5. If you used crowdsourcing or conducted research with human subjects...

(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] No human subjects were used.

(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] No human subjects were used.

(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] No human subjects were used.
Emergent Properties of Foveated Perceptual Systems
Abstract

The goal of this work is to characterize the representational impact that foveation operations have for machine vision systems, inspired by the foveated human visual system, which has higher acuity at the center of gaze and texture-like encoding in the periphery. To do so, we introduce models consisting of a first-stage fixed image transform followed by a second-stage learnable convolutional neural network, and we varied the first-stage component. The primary model has a foveated-textural input stage, which we compare to a model with foveated-blurred input and a model with spatially uniform blurred input (both matched for perceptual compression), and a final reference model with minimal input-based compression. We find that: 1) the foveated-texture model shows similar scene classification accuracy as the reference model despite its compressed input, with greater i.i.d. generalization than the other models; 2) the foveated-texture model has greater sensitivity to high-spatial-frequency information and greater robustness to occlusion, w.r.t. the comparison models; 3) both foveated systems show a stronger center image bias relative to the spatially uniform systems, even with a weight-sharing constraint. Critically, these results are preserved over different classical CNN architectures throughout their learning dynamics.
Altogether, this suggests that foveation with peripheral texture-based computations yields an efficient, distinct, and robust representational format of scene information. It provides symbiotic computational insight into the representational consequences that texture-based peripheral encoding may have for processing in the human visual system, while also potentially inspiring the next generation of computer vision models via spatially adaptive computation.

1 Introduction

In the human visual system, incoming light is sampled with different resolution across the retina, in stark contrast to machines that perceive images at uniform resolution. One account for the nature of this foveated (spatially varying) array in humans is related purely to sensory efficiency (biophysical constraints) (Land & Nilsson, 2012; Eckstein, 2011): there is only a finite number of retinal ganglion cells (RGC) that can relay information from the retina to the Lateral Geniculate Nucleus (LGN), constrained by the thickness of the optic nerve. Thus it is "more efficient" to have a moveable high-acuity fovea, rather than a non-moveable uniform-resolution retina, when given a limited number of photoreceptors, as suggested in Akbas & Eckstein (2017). Machines, however, do not have such wiring/resource constraints, and given their proven success in computer vision (LeCun et al., 2015), this raises the question of whether a foveated inductive bias is necessary for vision at all.

However, it is also possible that foveation plays a functional role at the representational level, which may confer perceptual advantages, as most computational approaches have mainly focused on saccade planning (Geisler et al., 2006; Mnih et al., 2014; Elsayed et al., 2019; Daucé et al.
, 2020). This idea has remained elusive in computer vision but popular in vision science, and has been explored both psychophysically (Loschky et al., 2019) and computationally (Poggio et al., 2014; Cheung et al., 2017; Han et al., 2020).

Submitted to 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Do not distribute.

Other works that have suggested representational advantages of foveation include Pramod et al. (2018), where blurring the image in the periphery increased the object recognition performance of computer vision systems by reducing their false positive rate. In Wu et al. (2018)'s GistNet, directly introducing a dual-stream foveal-peripheral pathway in a neural network boosted object detection performance via scene gist and contextual cueing. Relatedly, the best-known example of work that has directly shown the advantage of peripheral vision for scene processing in humans is Wang & Cottrell (2017)'s dual-stream CNN, which modelled the results of Larson & Loschky (2009) with a log-polar transform and adaptive Gaussian blurring (RGC convergence). Taken together, these studies support the idea that foveation has useful representational consequences for perceptual systems. Further, these computational examples have symbiotic implications for understanding biological vision, indicating via functional advantages in machine vision systems what the functional advantages of foveation in humans may be.

Importantly, none of these studies introduce the notion of texture representation in the periphery, a key property of peripheral computation as posed in Rosenholtz (2016). What functional consequences does this well-known texture-based coding in the visual periphery have, if any, on the nature of later-stage visual representation? Here we directly examine this question.
Specifically, we introduce perceptual systems: two-stage models that have an image-transform stage followed by a deep convolutional neural network. The primary model class of interest possesses a first-stage image transform that mimics texture-based foveation via visual crowding (Levi, 2011; Pelli, 2008; Doerig et al., 2019b,a) in the periphery, as shown in Figure 1 (Deza et al., 2019), rather than Gaussian blurring (Wang & Cottrell, 2017; Pramod et al., 2018; Malkin et al., 2020) or compression (Patney et al., 2016; Kaplanyan et al., 2019). These rendered images capture image statistics akin to those preserved in human peripheral vision, resembling texture computation at the stage of area V2, as argued in Freeman & Simoncelli (2011); Rosenholtz (2016); Wallis et al. (2019).

Our strategy is thus to compare these foveation-texture models, in terms of generalization, robustness, and bias, to three other kinds of models. The first comparison model class, foveation-blur models, uses the same spatially varying foveation operations but with blur- rather than texture-based input. The second class, uniform-blur models, applies a blur operation uniformly over the input, with the level of blur set to match the perceptual compression rates of the foveation-texture nets.
Finally, the last comparison model class is the Reference, which has minimal distortion and serves as a perceptual upper bound from which to assess the impact of these different first-stage transforms.

Note that our approach differs from that of Wang & Cottrell (2017), who built foveated models that fit results to human behavioural data such as those of Larson & Loschky (2009). Rather, our goal is to explore the emergent properties of CNNs with texture-based foveation on scene representation compared to their controls, agnostic to any behavioural data or expected outcome. Naturally, the results of our experimental paradigm are symbiotic, as they can shed light on the importance of texture-based peripheral computation in humans and could also suggest a new inductive bias for advanced machine perception of scenes.

2 Perceptual Systems

We define perceptual systems as two-stage models with an image transform (stage 1, f(◦): R^D → R^D) that is relayed to a deep convolutional neural network (stage 2, g(◦): R^D → R^d). Note that the first, transform stage is a fixed operation over the input image, while the second stage has learnable parameters. In general, the perceptual system S(◦), with retinal image input I ∈ R^D, is defined as:

S(I) = g(f(I))     (1)

Such two-stage models have been growing in popularity. The reason these models are designed not to be fully end-to-end differentiable is mainly to force one type of computation into the first stage of a system, such that the second stage g(◦) must figure out how to capitalize on this forced transformation, thereby letting us assess the representational consequences of f(◦) (see Figure 2). For example, Parthasarathy & Simoncelli (2020) successfully imposed V1-like computation in stage 1 to explore the learned role of texture representation in later stages with a self-supervised objective, and Dapello et al.
(2020) found that fixing V1-like computation at stage 1 also aided adversarial robustness. At a higher level, our objective is similar: we would like to force a texture-based peripheral coding mechanism (loosely inspired by V2; Ziemba et al., 2016) at the first stage to check whether the perceptual system (now foveated) will learn to pick up on this newly made representation through g(◦) and make 'good' use of it, potentially shedding light on the functionality hypothesis for machines and humans.

2.1 Stage 1: Image Transform

To model the computations of a texture-based foveated visual system, we employed the model of Deza et al. (2019) (henceforth Foveated-Texture Transform). This model is inspired by the metamer synthesis model of Freeman & Simoncelli (2011), where new images are rendered to have locally matching texture statistics (Portilla & Simoncelli, 2000; Balas et al., 2009) in pooling regions of the visual periphery that grow with eccentricity, subject to structural constraints. Analogously, the Deza et al. (2019) Foveation Transform uses a foveated feed-forward style transfer (Huang & Belongie, 2017) network to latently perturb the image in the direction of its locally matched texture (see Figure 1). Altogether, f: R^D → R^D is a convolutional auto-encoder that is non-foveated when the latent space is unperturbed: f_0(I) = D(E(I)), but foveated (◦_Σ) when the latent space is perturbed via localized style transfer: f_*(I) = D(E_Σ(I)), for a given encoder-decoder (E, D) pair.

Note that with proper calibration, the resulting distorted image can be a visual metamer (for a human), which is a carefully perturbed image perceptually indistinguishable from its reference image (Freeman & Simoncelli, 2011; Rosenholtz et al., 2012; Feather et al., 2019; Vacher et al., 2020).
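The composition S(I) = g(f(I)) and the role of the latent perturbation behind f_0 and f_* can be sketched as follows. This is an illustrative toy, not the actual Foveated-Texture Transform: average pooling and nearest-neighbour upsampling stand in for the learned encoder-decoder pair (E, D), and eccentricity-scaled random noise stands in for localized style transfer.

```python
import numpy as np

def encode(image):
    # Toy encoder E: 2x2 average pooling stands in for the learned encoder.
    h, w = image.shape
    return image.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def decode(latent):
    # Toy decoder D: nearest-neighbour upsampling stands in for the learned decoder.
    return np.repeat(np.repeat(latent, 2, axis=0), 2, axis=1)

def foveated_transform(image, fixation, strength=1.0, rng=None):
    """f_*(I) = D(E_Sigma(I)): perturb the latent more strongly with eccentricity.

    `fixation` is given in latent (pooled) coordinates. With strength=0 this
    reduces to the non-foveated reference f_0(I) = D(E(I)).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    z = encode(image)
    ys, xs = np.indices(z.shape)
    ecc = np.hypot(ys - fixation[0], xs - fixation[1])  # eccentricity map
    noise = rng.normal(size=z.shape)                    # stand-in for local style transfer
    z_sigma = z + strength * (ecc / ecc.max()) * noise  # zero perturbation at fixation
    return decode(z_sigma)

img = np.random.default_rng(1).random((32, 32))
ref = foveated_transform(img, fixation=(8, 8), strength=0.0)  # f_0: non-foveated
fov = foveated_transform(img, fixation=(8, 8), strength=1.0)  # f_*: foveated
err = np.abs(ref - fov)  # distortion grows with eccentricity; zero at the fovea
```

The key property the sketch preserves is that the fovea is untouched while distortion grows toward the periphery, mirroring the pooling-region geometry of the real transform.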
Importantly, however, in the present work we exaggerated the strength of these texture-driven distortions (beyond the metameric boundary), as our aim here is to understand the implications of this kind of texturized peripheral input on later-stage representations (following a similar approach as Dapello et al. (2020)). By having an extreme manipulation, we reasoned this would accentuate the consequences of these distortions, making them more detectable in our subsequent experiments.

2.2 Stage 2: Convolutional Neural Network backbone

The transformed images (stage 1) are passed into a standard convolutional neural network architecture. Here we tested two different base architectures: AlexNet (Krizhevsky et al., 2012) and ResNet18 (He et al., 2016). The goal of running these experiments on two different hierarchically local architectures is to let us identify consequences of the image transforms (with our main focus on texture-based foveation) that are robust to the choice of network architecture. Further, this CNN backbone (g: R^D → R^d) should not be viewed in the traditional way, as an end-to-end input/output system where the input is the retinal image (I) and the output is a one-hot vector encoding a d-class label in R^d. Rather, the CNN (g) acts as a loose proxy of higher stages of visual processing (as it receives input from f), analogous to the 2-stage model of Lindsey et al. (2019).

2.3 Critical Manipulations: Foveated vs Non-Foveated Perceptual Systems

Now we can define the first two of the four perceptual systems that will perform 20-way scene categorization: Foveation-Texture receives an image input, applies the foveation-texture transform f_*(◦), and relays it through the CNN g(◦).
Similarly, Reference performs a non-foveated transform f_0(◦), where images are sent through the same convolutional auto-encoder D(E(I)) of f_*(◦), but with the parameter that determines the degree of texture style transfer set to 0 – producing an upper-bounded, compressed, and non-foveated reference image – and then relayed through the CNN g(◦). Both of these systems are depicted in Figure 2(A). As the foveation-texture model retains less information from the input relative to the reference networks, we next designed two further comparison models which have a comparable amount of information after the input stage, but with different amounts of blurring in the stage 1 operations. To create matched-resource systems, our broad approach was to use a Rate-Distortion (RD) optimization procedure (Ballé et al., 2016) to match information between the stage 1 operations, given the SSIM (Wang et al., 2004) image quality assessment (IQA) metric. Specifically, to create the matched-resource Uniform-Blur, we identified the standard deviation of the Gaussian blurring kernel (the 'distortion' D) such that we could render a perceptually resource-matched Gaussian-blurred image – w.r.t. Reference – that matches the perceptual transmission 'rate' R of Foveation-Texture via the SSIM perceptual metric (Wang et al., 2004). This procedure yields a model class with uniform blur across the image, but with the same stage 1 information content as Foveation-Texture. To create the matched-resource Foveation-Blur, we carried out this same RD optimization pipeline per eccentricity ring (assuming homogeneity across pooling regions at the same eccentricity), thus finding a set of blurring coefficients that vary as a function of eccentricity.
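The sigma-matching step just described can be sketched as a one-dimensional search: bisect over the blur standard deviation until the blurred reference reaches the same 'rate' as the distorted transform's output. For brevity this sketch swaps SSIM for a simple bounded, MSE-based similarity (our assumption, not the paper's metric), and `gaussian_blur` is a plain separable numpy implementation.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur; the kernel is truncated so it never exceeds the image."""
    if sigma == 0:
        return img.copy()
    radius = min(int(3 * sigma) + 1, min(img.shape) // 2 - 1)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def similarity(a, b):
    # Stand-in for SSIM: bounded in (0, 1], monotone in MSE.
    return 1.0 / (1.0 + np.mean((a - b) ** 2))

def match_rate(reference, distorted, lo=0.0, hi=20.0, iters=40):
    """Bisect for the blur sigma whose similarity to the reference matches the
    distorted transform's similarity (the resource-matching step, with SSIM
    replaced by the simpler monotone similarity above)."""
    target = similarity(distorted, reference)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if similarity(gaussian_blur(reference, mid), reference) > target:
            lo = mid  # not enough distortion yet: blur more
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
reference = rng.random((64, 64))
distorted = reference + 0.1 * rng.normal(size=reference.shape)
sigma = match_rate(reference, distorted)
```

Bisection is valid here because similarity to the reference decreases monotonically as the blur widens; the same logic applies per eccentricity ring for the Foveation-Blur variant.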
This procedure yielded a different matched-resource model class, this time with spatially-varying blur. Figure 3(B) summarizes our solution to this problem. Details of the RD optimization are presented in Appendix A.

Ultimately, it is important to note that the selection of the perceptual metric (SSIM in our case) plays a role in this optimization procedure and sets the context in which we can call a network "resource-matched". We selected SSIM given its monotonic relationship of distortions to human perceptual judgements, its symmetric and upper-bounded nature, its sensitivity to contrast, local structure, and spatial frequency, and its popularity in the Image Quality Assessment (IQA) community. However, to anticipate any possible discrepancy in the interpretability of our future results, we additionally computed the Mean Square Error (MSE), MS-SSIM, and 11 other IQA metrics as recently explored in Ding et al. (2020) to compare all other image transforms to the Reference on the testing set. Our logic is the following: if the MSE is greater (↑) for Foveation-Texture compared to Foveation-Blur and Uniform-Blur, then the current distortion levels place Foveation-Texture at a resource 'disadvantage' relative to the other transforms, and any interesting results would not only hold but also be strengthened. This same logic applies to the other IQA metrics, contingent on their direction of greater distortion. Indeed, these patterns of results were evident across IQA metrics – except those tolerant to texture such as DISTS (Ding et al.
, 2020) – as shown in Table 1 and Appendix C.

3 Experiments

Altogether, the 4 previously introduced perceptual systems help us answer three key questions that we should keep in mind throughout the rest of the paper: 1) Foveation-Texture vs Reference will tell us how a texture-based foveation mechanism compares to its perceptual upper bound – shedding light on arguments about computational efficiency. 2) Foveation-Texture vs Foveation-Blur will tell us whether any potentially interesting pattern of results is due to the type/stage of foveation. This will help us measure the contributions of adaptive texture coding vs adaptive Gaussian blurring. 3) Foveation-Texture vs Uniform-Blur will tell us how these perceptual systems (one foveated, the other not) behave when allocated a fixed number of perceptual resources under certain assumptions – potentially shedding light on why biological organisms like humans have foveated texture-based computation in the visual field instead of uniform spatial processing like modern machines.

Dataset: All previously introduced models were trained to perform 20-way scene categorization. Scene categories were selected from the Places2 dataset (Zhou et al., 2017), and were re-partitioned into a new split of 4500 images per category for training, 250 per category for validation, and 250 per category for testing. The categories included were: aquarium, badlands, bedroom, bridge, campus, corridor, forest path, highway, hospital, industrial area, japanese garden, kitchen, mansion, mountain, ocean, office, restaurant, skyscraper, train interior, waterfall.
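The re-partitioning above can be sketched as a deterministic per-category split; the image identifiers below are hypothetical placeholders, not the actual Places2 filenames.

```python
import random

# The 20 scene categories used for classification.
CATEGORIES = [
    "aquarium", "badlands", "bedroom", "bridge", "campus", "corridor",
    "forest path", "highway", "hospital", "industrial area", "japanese garden",
    "kitchen", "mansion", "mountain", "ocean", "office", "restaurant",
    "skyscraper", "train interior", "waterfall",
]

def split_category(image_ids, seed=0):
    """Shuffle one category's images reproducibly and split 4500/250/250
    into train/validation/test."""
    ids = sorted(image_ids)
    random.Random(seed).shuffle(ids)
    return ids[:4500], ids[4500:4750], ids[4750:5000]

# Hypothetical image identifiers for one category.
train, val, test = split_category([f"aquarium/{i:05d}.jpg" for i in range(5000)])
```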
Samples of these scenes coupled with their image transforms can be seen in Figure 4.

Networks: Training: The stage 2 convolutional neural networks of each perceptual system were trained, resulting in 40 image-transform-based networks per architecture (AlexNet/ResNet18): 10 Foveation-Texture, 10 Reference, 10 Uniform-Blur, 10 Foveation-Blur; totalling 80 trained networks, used to compute the error bars shown in all figures (standard deviations, not standard errors) and to reduce effects of randomness driven by the particular network initialization. All systems were paired such that their stage 2 architectures g(◦) started with the same random weight initialization prior to training. Testing: The networks of each perceptual system were tested on the same type of image distribution they were trained on. Learning Dynamics: Available in Appendix H.

3.1 Texture-based foveation provides greater i.i.d. generalization than Blur-based foveation

How well does the foveation-texture system classify scene images (i.i.d. generalization) compared to the other matched-resource models that use blurring, and to the reference? The results can be seen in Figure 5. Each bar's height reflects overall accuracy for each of the 10 neural network backbone runs (g(◦)) per system, with a square marker at the top indicating the i.i.d. accuracy. We found that Foveation-Texture had similar i.i.d. performance to the Reference – which is the undistorted perceptual upper bound – and greater performance than both Uniform-Blur and Foveation-Blur. Thus the compression induced by foveated texture generally maintains scene category information.

We next performed a contrived experiment where we tested how well each perceptual system could classify the stage 1 outputs of the other models. For example, we showed a set of foveated blurred images to a network trained on foveated texture images.
This experiment is in essence a test of out-of-distribution (o.o.d.) generalization. The results of these tests are also shown in Figure 5. For each model, the classification accuracy for the inputs from the other stage 1 images is indicated by the height of the different colored diamonds, where the color corresponds to the stage 1 operation. This experiment yielded a rather complex set of patterns that even differed depending on the architecture (AlexNet vs ResNet18 as g(◦)). Generally, the Foveation-Texture model had a similar profile of generalization as the Reference model. However, the networks trained with different types of blur (Uniform-Blur & Foveation-Blur) in some cases showed very high o.o.d. generalization – though once again this is contingent on g(◦).

Unraveling the underlying causes of this last set of results sets the stage for our experiments in the rest of this section. So far it seems that Foveation-Texture has learned to properly capitalize on the texture information in the periphery and still outperform all other matched-resource systems, even if heavily penalized under several IQA metrics (Table 1) – highlighting the critical differences of texture vs blur for scene processing. As for the interaction of Uniform-Blur with g(◦), it is likely that the residual connections are counter-productive to o.o.d. generalization (or that it has overfit).
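The cross-classification experiment above amounts to filling a train-transform by test-transform accuracy matrix, where the diagonal is the i.i.d. condition and the off-diagonal entries are the o.o.d. conditions. A minimal sketch, with toy stand-in models and transforms (the real systems are trained CNNs):

```python
import numpy as np

def cross_evaluate(models, transforms, images, labels):
    """Accuracy of every trained system on every stage-1 transform's outputs.
    Rows index the training transform, columns the test transform."""
    names = list(transforms)
    acc = np.zeros((len(names), len(names)))
    for i, train_t in enumerate(names):
        for j, test_t in enumerate(names):
            preds = [models[train_t](transforms[test_t](x)) for x in images]
            acc[i, j] = np.mean([p == y for p, y in zip(preds, labels)])
    return acc

# Toy setup: two 'transforms' and two classifiers, each fit to one transform.
transforms = {"identity": lambda x: x, "negate": lambda x: -x}
models = {
    "identity": lambda x: int(x.mean() > 0),  # 'trained' on raw images
    "negate": lambda x: int(x.mean() < 0),    # 'trained' on negated images
}
images = [np.ones(4), -np.ones(4)]
labels = [1, 0]
acc = cross_evaluate(models, transforms, images, labels)
```

In this toy the diagonal is perfect and the off-diagonal collapses, the extreme version of the train/test-transform mismatch probed in Figure 5.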
Interestingly, humans have a combination of texture-based and adaptive-Gaussian-based peripheral computation (Ehinger & Rosenholtz, 2016), so future work should look into the effects of continual learning, joint training, or a combined image transform (Texture + Blur) to merge the gains of both i.i.d. and o.o.d. generalization.

3.2 Texture-based foveated systems preserve greater high-spatial frequency sensitivity

We next examined whether the learned feature representations of these models are more reliant on low- or high-pass spatial frequency information. To do so, we filtered the testing image set at multiple levels to create both high-pass and low-pass frequency stimuli and assessed scene-classification performance over these images for all models, as shown in Figure 6. Low-pass frequency stimuli were rendered by convolving a Gaussian filter of standard deviation σ = [0, 1, 3, 5, 7, 10, 15, 40] pixels with the foveation transform (f_0, f̂_0, f_*, f̂_*) outputs. Similarly, the high-pass stimuli were computed by subtracting the low-pass filtered version, with σ = [∞, 3, 1.5, 1, 0.7, 0.55, 0.45, 0.4] pixels, from the reference image and adding a residual. These are the same values used in the experiments of Geirhos et al. (2019).

We found that Foveation-Texture and Reference trained networks were more sensitive to high-pass frequency information, while Foveation-Blur and Uniform-Blur were selective for low-pass frequency stimuli. Although one may naively assume that this is an expected result – as both Foveation-Blur and Uniform-Blur networks are exposed to a blurring procedure – it is important to note that: 1) the foveal resolution has been preserved between Foveation-Texture and Foveation-Blur (see Fig. 4), thus high spatial frequency sensitivity could have still predominated in Foveation-Blur but it did not (though see Fig.
6 A2/B2, where these high-pass Gabors are still learned, implying that higher layers in g(◦) overshadow their computation); and 2) Foveation-Texture could have also learned to develop low spatial frequency sensitivity given the crowding/texture-like peripheral distortion, but this was not the case (likely due to the weight-sharing constraint embedded in the CNN architecture; Elsayed et al., 2020). Finally, the robustness to low-pass filtering of Foveation-Blur suggests that foveation via adaptive Gaussian blurring may implicitly contribute to scale-invariance, as also shown in Poggio et al. (2014); Cheung et al. (2017); Han et al. (2020).

3.3 Texture-based foveation develops greater robustness to occlusion

We next examined how all perceptual systems classify scene information under conditions of visual field loss: from left to right (left2right), from top to bottom (top2bottom), of the center part of the image (scotoma), or of the periphery (glaucoma). This manipulation lets us examine the degree to which learned representations rely on different parts of the image to classify scene categories. Critically, here we apply the occlusion after the stage 1 operation. The results are shown in Figure 7. Overall we found that, across all types of occlusion, the Foveation-Texture models have greater robustness to occlusion than both the Foveation-Blur and Uniform-Blur models. Further, the Foveation-Texture models have nearly equivalent performance to the Reference. In contrast, both models with blurring, whether uniform or spatially-varying, were far worse at classifying scenes under conditions of visual field loss.
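The four visual-field-loss conditions can be sketched as masks applied after the stage 1 transform. The exact mask geometry below (straight wipes, a centered disk sized by `fraction`) is our assumption for illustration.

```python
import numpy as np

def occlude(img, mode, fraction, fill=0.0):
    """Occlude a stage-1 output: wipe from the left (left2right) or the top
    (top2bottom), or remove a central disk (scotoma) / its complement (glaucoma)."""
    h, w = img.shape[:2]
    out = img.copy()
    if mode == "left2right":
        out[:, : int(round(fraction * w))] = fill
    elif mode == "top2bottom":
        out[: int(round(fraction * h)), :] = fill
    elif mode in ("scotoma", "glaucoma"):
        ys, xs = np.indices((h, w))
        r = np.hypot(ys - (h - 1) / 2, xs - (w - 1) / 2)
        radius = fraction * min(h, w) / 2
        mask = r <= radius if mode == "scotoma" else r > radius
        out[mask] = fill
    else:
        raise ValueError(mode)
    return out

img = np.ones((32, 32))
half_gone = occlude(img, "left2right", 0.5)   # left half wiped
no_center = occlude(img, "scotoma", 0.5)      # central disk removed
only_center = occlude(img, "glaucoma", 0.5)   # periphery removed
```

Scotoma and glaucoma are complementary by construction, which makes them a clean probe of foveal versus peripheral reliance.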
These results highlight that the texture-based information content captured by the foveation-texture nets preserves scene category content in a dramatically different way than simple lower-resolution sampling – perhaps using the texture bias (Geirhos et al., 2019) in their favor, as humans too use texture as their classification strategy for scenes (Renninger & Malik, 2004). In addition, the Foveation-Texture model is not overfitting. While recent work has suggested an accuracy vs robustness trade-off, where networks trained to outperform under the i.i.d. generalization condition will do worse under other perceptual tasks – mainly adversarial ones (Zhang et al., 2019) – we did not observe such a trade-off, and greater accuracy did not imply lower robustness to occlusion.

3.4 Foveated systems learn a stronger center image bias than non-foveated systems

It is possible that foveated systems weight visual information more strongly in the foveal region than in the peripheral region, as hinted by our occlusion results (the different rates of decay of the accuracy curves in the Scotoma and Glaucoma conditions). To resolve this question, we conducted an experiment where we created windowed cue-conflict stimuli, re-rendering our set of testing images with one image category in the fovea and another in the periphery (each class systematically paired with a different class; e.g., aquarium with badlands). We also had an additional condition where the conflicting cue was square-like, uniformly and randomly paired with a conflicting scene class, and more finely sampled.
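The windowed cue-conflict stimulus can be sketched as a disk-windowed composite of two scenes; parametrizing the window by its share of image area (our assumption) makes the fovea-periphery ratio explicit.

```python
import numpy as np

def cue_conflict(foveal_img, peripheral_img, ratio):
    """Place one scene inside a central disk and a conflicting scene in the
    surround; `ratio` is the fraction of image area assigned to the fovea."""
    h, w = foveal_img.shape[:2]
    ys, xs = np.indices((h, w))
    r = np.hypot(ys - (h - 1) / 2, xs - (w - 1) / 2)
    radius = np.sqrt(ratio * h * w / np.pi)  # disk area = ratio * image area
    fovea = r <= radius
    out = peripheral_img.copy()
    out[fovea] = foveal_img[fovea]
    return out

# e.g. an 'aquarium' center over a 'badlands' surround at a 25% foveal area
# (constant images stand in for the two scene classes).
stim = cue_conflict(np.zeros((64, 64)), np.ones((64, 64)), ratio=0.25)
```

Sweeping `ratio` and reading off which of the two labels each system reports yields the foveal/peripheral accuracy curves whose cross-over point indexes the foveal bias.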
We then systematically varied the fovea-periphery visual area ratios and re-examined classification accuracy for both the foveal and peripheral scenes (Figure 8).

We found that the Foveation-Texture and Foveation-Blur transforms led the networks g(◦) to weigh information in the center of the image more strongly than Reference & Uniform-Blur for scene categorization. A qualitative way of seeing this foveal bias is to check the foveal/peripheral ratio at which the two accuracy lines cross. The more leftward the cross-over point (⊗), the higher the foveal bias (highlighted through the vertical bars). This result was unexpected, as we initially predicted that g(◦) would weigh the peripheral information more strongly since it has been implicitly regularized through a distortion. However, this was not the case, and our findings are similar to those of Wang & Cottrell (2017), who showed this foveal bias in a foveated system with adaptive blur and a dual-stream neural network. Thus, these results indicate that the spatially varying computation from center to periphery is mainly responsible for the development of a center image bias, even with a weight-sharing constraint. Furthermore, it is possible that one of the functions of any spatially-varying coding mechanism in the visual field is to push the perceptual system to attend to the foveal region – avoiding the shortcut of learning to attend to the entire visual field when unnecessary (Geirhos et al., 2020).

4 Discussion

The present work was designed to probe the impact of foveated texture-based input representations in machine vision systems.
To do this, we specifically compared the learned perceptual signatures in the second stage of visual processing across a set of networks trained on different image transforms. We found, when comparing Foveation-Texture to its matched-resource models that differed in computation – Foveation-Blur (foveated w/ adaptive Gaussian blur) and Uniform-Blur (non-foveated w/ uniform blur) – that peripheral texture encoding did lead to specific representational signatures, particularly greater i.i.d. generalization, preservation of high-spatial frequency sensitivity, and robustness to occlusion – at times as high as its perceptual upper bound (Reference). We also found that foveation (in general) seems to induce a focusing mechanism servicing the foveal/central region, which neither a perceptually upper-bounded system (Reference) nor a non-foveated compressed system (Uniform-Blur) developed as strongly.

The particular consequences of our foveation stage raise interesting future directions about what computational advantages could arise when training on object categorization (Pramod et al., 2018) coupled with eye movements (Akbas & Eckstein, 2017; Deza et al., 2017), as objects are typically centered in view and have different hierarchical/compositional priors than scenes (Zhou et al. (2014); Deza et al. (2020)), in addition to different processing mechanisms (Renninger & Malik (2004); Ehinger & Rosenholtz (2016)). We are currently exploring the impact of these foveated texture-based representational signatures on shape vs texture bias for object recognition, similar to Geirhos et al. (2019) and Hermann et al. (2020), and assessing their interaction with scene representation.

Further, a future direction is investigating the effects of texture-based foveation on adversarial robustness. Motivated by the recent work of Dapello et al.
(2020), which showed promise for adversarial robustness via enforcing stochasticity and V1-like computation – obeying the Nyquist sampling frequency of these filters w.r.t. the image (Serre et al., 2007), in addition to a natural gamut of orientations and frequencies as studied in De Valois et al. (1982) – it raises the question of how much further we can push for robustness in hybrid perceptual systems like these, drawing on even more biological mechanisms. Works such as Luo et al. (2015) and, recently, Reddy et al. (2020); Kiritani & Ono (2020) have already taken steps in this direction by coupling fixations with a spatially-varying retina. However, the representational impact of texture-based foveation on adversarial robustness, and its symbiotic implication for human vision, still remains an open question.

References

Akbas, E. and Eckstein, M. P. Object detection through search with a foveated visual system. PLoS computational biology, 13(10): e1005743, 2017.

Balas, B., Nakano, L., and Rosenholtz, R. A summary-statistic representation in peripheral vision explains visual crowding. Journal of vision, 9(12):13–13, 2009.

Ballé, J., Laparra, V., and Simoncelli, E. P. End-to-end optimized image compression. arXiv preprint arXiv:1611.01704, 2016.

Cheung, B., Weiss, E., and Olshausen, B. Emergence of foveal image sampling from learning to attend in visual scenes. International Conference on Learning Representations (ICLR), 2017.

Dapello, J., Marques, T., Schrimpf, M., Geiger, F., Cox, D. D., and DiCarlo, J. J. Simulating a primary visual cortex at the front of CNNs improves robustness to image perturbations. BioRxiv, 2020.

Daucé, E., Albiges, P., and Perrinet, L. U. A dual foveal-peripheral visual processing model implements efficient saccade selection. Journal of Vision, 20(8):22–22, 2020.

De Valois, R. L., Yund, E.
W., and Hepler, N. The orientation and direction selectivity of cells in macaque visual cortex. Vision research, 22(5):531–544, 1982.

Deza, A. and Eckstein, M. Can peripheral representations improve clutter metrics on complex scenes? In Advances in Neural Information Processing Systems, pp. 2847–2855, 2016.

Deza, A., Peters, J. R., Taylor, G. S., Surana, A., and Eckstein, M. P. Attention allocation aid for visual search. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, pp. 220–231, 2017.

Deza, A., Jonnalagadda, A., and Eckstein, M. P. Towards metamerism via foveated style transfer. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=BJzbG20cFQ.

Deza, A., Liao, Q., Banburski, A., and Poggio, T. Hierarchically local tasks and deep convolutional networks. CBMM Memo, 2020.

Ding, K., Ma, K., Wang, S., and Simoncelli, E. Image quality assessment: Unifying structure and texture similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.

Ding, K., Ma, K., Wang, S., and Simoncelli, E. P. Comparison of image quality models for optimization of image processing systems. arXiv e-prints, art. arXiv:2005.01338, May 2020.

Doerig, A., Bornet, A., Choung, O. H., and Herzog, M. H. Crowding reveals fundamental differences in local vs. global processing in humans and machines. bioRxiv, 2019a. doi: 10.1101/744268. URL https://www.biorxiv.org/content/early/2019/08/23/744268.

Doerig, A., Bornet, A., Rosenholtz, R., Francis, G., Clarke, A. M., and Herzog, M. H. Beyond bouma's window: How to explain global aspects of crowding? PLoS computational biology, 15(5): e1006580, 2019b.

Eckstein, M. P. Visual search: A retrospective. Journal of vision, 11(5):14–14, 2011.

Eckstein, M. P., Koehler, K., Welbourne, L.
E., and Akbas, E. Humans, but not deep neural networks, often miss giant targets in scenes. Current Biology, 27(18):2827–2832, 2017.

Ehinger, K. A. and Rosenholtz, R. A general account of peripheral encoding also predicts scene perception performance. Journal of Vision, 16(2):13–13, 2016.

Elsayed, G., Kornblith, S., and Le, Q. V. Saccader: Improving accuracy of hard attention models for vision. In Advances in Neural Information Processing Systems, pp. 700–712, 2019.

Elsayed, G., Ramachandran, P., Shlens, J., and Kornblith, S. Revisiting spatial invariance with low-rank local connectivity. In International Conference on Machine Learning, pp. 2868–2879. PMLR, 2020.

Feather, J., Durango, A., Gonzalez, R., and McDermott, J. Metamers of neural networks reveal divergence from human perceptual systems. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 10078–10089. Curran Associates, Inc., 2019. URL http://papers.nips.cc/paper/9198-metamers-of-neural-networks-reveal-divergence-from-human-perceptual-systems.pdf.

Freeman, J. and Simoncelli, E. Metamers of the ventral stream. Nature neuroscience, 14(9):1195–1201, 2011.

Fridman, L., Jenik, B., Keshvari, S., Reimer, B., Zetzsche, C., and Rosenholtz, R. Sideeye: A generative neural network based simulator of human peripheral vision. arXiv preprint arXiv:1706.04568, 2017.

Gatys, L. A., Ecker, A. S., and Bethge, M. Image style transfer using convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2414–2423, 2016.

Geirhos, R., Temme, C. R., Rauber, J., Schütt, H. H., Bethge, M., and Wichmann, F. A. Generalisation in humans and deep neural networks.
In Advances in Neural Information Processing Systems, pp. 7538–7550, 2018.

Geirhos, R., Rubisch, P., Michaelis, C., Bethge, M., Wichmann, F. A., and Brendel, W. Imagenet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=Bygh9j09KX.

Geirhos, R., Jacobsen, J.-H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M., and Wichmann, F. A. Shortcut learning in deep neural networks. arXiv preprint arXiv:2004.07780, 2020.

Geisler, W. S. and Perry, J. S. Real-time foveated multiresolution system for low-bandwidth video communication. In Human vision and electronic imaging III, volume 3299, pp. 294–305. International Society for Optics and Photonics, 1998.

Geisler, W. S., Perry, J. S., and Najemnik, J. Visual search: The role of peripheral information measured using gaze-contingent displays. Journal of Vision, 6(9):1–1, 2006.

Han, Y., Roig, G., Geiger, G., and Poggio, T. Scale and translation-invariance for novel objects in human vision. Scientific Reports, 10(1):1–13, 2020.

He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.

Hermann, K. L., Chen, T., and Kornblith, S. The origins and prevalence of texture bias in convolutional neural networks. Neural Information Processing Systems, 2020.

Huang, X. and Belongie, S. Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510, 2017.

Kaplanyan, A. S., Sochenov, A., Leimkühler, T., Okunev, M., Goodall, T., and Rufo, G.
Deepfovea: neural reconstruction for foveated rendering and video compression using learned statistics of natural videos. ACM Transactions on Graphics (TOG), 38(6):1–13, 2019.

Kiritani, T. and Ono, K. Recurrent attention model with log-polar mapping is robust against adversarial attacks. arXiv preprint arXiv:2002.05388, 2020.

Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012.

Land, M. F. and Nilsson, D.-E. Animal eyes. Oxford University Press, 2012.

Laparra, V., Ballé, J., Berardino, A., and Simoncelli, E. P. Perceptual image quality assessment using a normalized laplacian pyramid. Electronic Imaging, 2016(16):1–6, 2016.

Larson, A. M. and Loschky, L. C. The contributions of central versus peripheral vision to scene gist recognition. Journal of Vision, 9(10):6–6, 2009.

Larson, E. C. and Chandler, D. M. Most apparent distortion: full-reference image quality assessment and the role of strategy. Journal of electronic imaging, 19(1):011006, 2010.

LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature, 521(7553):436, 2015.

Levi, D. M. Visual crowding. Current Biology, 21(18): R678–R679, 2011.

Lindsey, J., Ocko, S. A., Ganguli, S., and Deny, S. The effects of neural resource constraints on early visual representations. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=S1xq3oR5tQ.

Loschky, L. C., Szaffarczyk, S., Beugnet, C., Young, M. E., and Boucart, M. The contributions of central and peripheral vision to scene-gist recognition with a 180° visual field. Journal of Vision, 19(5):15–15, 2019.

Luo, Y., Boix, X., Roig, G., Poggio, T., and Zhao, Q.
Foveation-based mechanisms alleviate adversarial examples. arXiv preprint arXiv:1511.06292, 2015.

Malkin, E., Deza, A., and Poggio, T. CUDA-optimized real-time rendering of a foveated visual system. In NeurIPS 2020 Workshop SVRHM, 2020. URL https://openreview.net/forum?id=ZMsqkUadtZ7.

Mnih, V., Heess, N., Graves, A., et al. Recurrent models of visual attention. In Advances in neural information processing systems, pp. 2204–2212, 2014.

Parthasarathy, N. and Simoncelli, E. P. Self-supervised learning of a biologically-inspired visual texture model. arXiv preprint arXiv:2006.16976, 2020.

Patney, A., Salvi, M., Kim, J., Kaplanyan, A., Wyman, C., Benty, N., Luebke, D., and Lefohn, A. Towards foveated rendering for gaze-tracked virtual reality. ACM Transactions on Graphics (TOG), 35(6):179, 2016.

Pelli, D. G. Crowding: A cortical constraint on object recognition. Current opinion in neurobiology, 18(4):445–451, 2008.

Poggio, T., Mutch, J., and Isik, L. Computational role of eccentricity dependent cortical magnification. arXiv preprint arXiv:1406.1770, 2014.

Portilla, J. and Simoncelli, E. P. A parametric texture model based on joint statistics of complex wavelet coefficients. International journal of computer vision, 40(1):49–70, 2000.

Pramod, R. T., Katti, H., and Arun, S. P. Human peripheral blur is optimal for object recognition. arXiv preprint arXiv:1807.08476, 2018.

Reddy, M. V., Banburski, A., Pant, N., and Poggio, T. Biologically inspired mechanisms for adversarial robustness. arXiv preprint arXiv:2006.16427, 2020.

Renninger, L. W. and Malik, J. When is scene identification just texture recognition? Vision research, 44(19):2301–2311, 2004.

Rosenholtz, R. Capabilities and limitations of peripheral vision. Annual Review of Vision Science, 2:437–457, 2016.

Rosenholtz, R.
, Huang , J. , Raj , A. , Balas , B. J. , and Ilie , L. A summary statistic representation in448 peripheral vision explains visual search . Journal of vision , 12 ( 4 ) :14–14 , 2012.449 Russakovsky , O. , Deng , J. , Su , H. , Krause , J. , Satheesh , S. , Ma , S. , Huang , Z. , Karpathy , A. , Khosla,450 A. , Bernstein , M. , et al . Imagenet large scale visual recognition challenge . International journal of451 computer vision , 115 ( 3 ) :211–252 , 2015.452 Serre , T. , Wolf , L. , Bileschi , S. , Riesenhuber , M. , and Poggio , T. Robust object recognition with453 cortex-like mechanisms . IEEE transactions on pattern analysis and machine intelligence , 29 ( 3 ) :454 411–426 , 2007.455 Sheikh , H. R. and Bovik , A. C. Image information and visual quality . IEEE Transactions on image456 processing , 15 ( 2 ) :430–444 , 2006.457 Shumikhin , M. M. A. Quantitative measures of crowding susceptibility in peripheral vision for large458 datasets . PhD thesis , Massachusetts Institute of Technology , 2020.459 Vacher , J. , Davila , A. , Kohn , A. , and Coen-Cagli , R. Texture interpolation for probing visual460 perception . Advances in Neural Information Processing Systems , 33 , 2020.461 Wallis , T. S. , Funke , C. M. , Ecker , A. S. , Gatys , L. A. , Wichmann , F. A. , and Bethge , M. Image462 content is more important than bouma ’ s law for scene metamers . eLife , 8 : e42512 , 2019.463 Wallis , T. S. A. , Funke , C. M. , Ecker , A. S. , Gatys , L. A. , Wichmann , F. A. , and Bethge , M. A464 parametric texture model based on deep convolutional features closely matches texture appearance465 for humans . Journal of Vision , 17 ( 12 ) , Oct 2017. doi : 10.1167/17.12.5 . URL http : //doi.org/466 10.1167/17.12.5.467 Wang , P. and Cottrell , G. W. Central and peripheral vision for scene recognition : A neurocomputa-468 tional modeling exploration . Journal of vision , 17 ( 4 ) :9–9 , 2017.469 Wang , Z. and Simoncelli , E. P. 
Translation insensitive image similarity in complex wavelet domain.470 In Proceedings. ( ICASSP ’ 05 ) . IEEE International Conference on Acoustics , Speech , and Signal471 Processing , 2005. , volume 2 , pp . ii–573 . IEEE , 2005.472 Wang , Z. , Simoncelli , E. P. , and Bovik , A. C. Multiscale structural similarity for image quality473 assessment . In The Thrity-Seventh Asilomar Conference on Signals , Systems & Computers , 2003,474 volume 2 , pp . 1398–1402 . Ieee , 2003.475 Wang , Z. , Bovik , A. C. , Sheikh , H. R. , and Simoncelli , E. P. Image quality assessment : from error476 visibility to structural similarity . IEEE transactions on image processing , 13 ( 4 ) :600–612 , 2004.477 Wu , K. , Wu , E. , and Kreiman , G. Learning scene gist with convolutional neural networks to improve478 object recognition . In 2018 52nd Annual Conference on Information Sciences and Systems ( CISS ) ,479 pp . 1–6 . IEEE , 2018.480 Xue , W. , Zhang , L. , Mou , X. , and Bovik , A. C. Gradient magnitude similarity deviation : A highly481 efficient perceptual image quality index . IEEE Transactions on Image Processing , 23 ( 2 ) :684–695,482 2013.483 Zhang , H. , Yu , Y. , Jiao , J. , Xing , E. , El Ghaoui , L. , and Jordan , M. Theoretically principled trade-484 off between robustness and accuracy . In International Conference on Machine Learning , pp.485 7472–7482 . PMLR , 2019.486 Zhang , L. , Zhang , L. , Mou , X. , and Zhang , D. Fsim : A feature similarity index for image quality487 assessment . IEEE transactions on Image Processing , 20 ( 8 ) :2378–2386 , 2011.488 Zhang , L. , Shen , Y. , and Li , H. Vsi : A visual saliency-induced index for perceptual image quality489 assessment . IEEE Transactions on Image processing , 23 ( 10 ) :4270–4281 , 2014.490 Zhang , R. , Isola , P. , Efros , A . A. , Shechtman , E. , and Wang , O . The unreasonable effectiveness of491 deep features as a perceptual metric . 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595, 2018.
Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., and Torralba, A. Object detectors emerge in deep scene CNNs, 2014.
Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., and Torralba, A. Places: a 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(6):1452–1464, 2017.
Ziemba, C. M., Freeman, J., Movshon, J. A., and Simoncelli, E. P. Selectivity and tolerance for visual texture in macaque V2. Proceedings of the National Academy of Sciences, 113(22):E3140–E3149, 2016.

Checklist

1. For all authors...
(a) Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? [Yes] We have focused our experiments on implementing a two-stage model that has a texture-based foveation transform and compared it to a reference model (a perceptual upper bound) and two matched-resource systems: one foveated with blur and another one uniformly blurred.
(b) Did you describe the limitations of your work? [Yes] At the end of each Experiments sub-section we provide a mini-discussion of our work and how it fits or does not fit the literature. Mainly we provide limitations in the Discussion at the end (see Section 4).
(c) Did you discuss any potential negative societal impacts of your work? [No] To our knowledge, there are none.
(d) Have you read the ethics review guidelines and ensured that your paper conforms to them? [Yes]
2. If you are including theoretical results...
(a) Did you state the full set of assumptions of all theoretical results? [Yes] We include only one supplementary theoretical result and proof in Appendix B.
(b) Did you include complete proofs of all theoretical results? [Yes] See above.
3.
If you ran experiments...
(a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Supplementary Material (which provides access to a URL).
(b) Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] These are reported briefly in Section 3, and in more detail throughout the Appendix.
(c) Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? [Yes] All experiments were run with paired initial noise seeds to control for matched initial conditions derived from SGD (though the order in which the networks were exposed to images was different). All error bars report 1 standard deviation, and these can be seen throughout Sections 3.2, 3.3, and 3.4.
(d) Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] These are specified in the Appendix.
4. If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
(a) If your work uses existing assets, did you cite the creators? [Yes] We use a re-partition of the Places2 dataset, which is cited.
(b) Did you mention the license of the assets? [No] To our knowledge the Places2 dataset is widely known and free to use.
(c) Did you include any new assets either in the supplemental material or as a URL? [No] Everything in the Supplementary Material/URL has been created/derived by us.
(d) Did you discuss whether and how consent was obtained from people whose data you're using/curating?
[N/A] We did not run any experiments with humans.
(e) Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? [N/A] We did not run any experiments with humans, and the scene classes we used were all publicly known and non-offensive places, e.g. ocean.
5. If you used crowdsourcing or conducted research with human subjects...
(a) Did you include the full text of instructions given to participants and screenshots, if applicable? [N/A] No human subjects were used.
(b) Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? [N/A] No human subjects were used.
(c) Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? [N/A] No human subjects were used.
In this paper, the authors study the functional advantages of a foveal transform of visual inputs. It is nicely introduced with a very comprehensive review of the literature. The method introduces a two-stage model of the visual system, where the first stage corresponds to the (fixed and non-adaptive) foveation stage and the second stage to higher-level processing, typically associated with the categorisation operated in the ventral stream of the visual pathway. This second stage is implemented with existing CNN architectures (AlexNet and ResNet), which are re-learned on the transformed inputs. To control for the functional consequences of the foveated processing, the first stage can also be a single isotropic blurring of the image. Both alternatives are manipulated such that their distortion (as computed with an SSIM measure) is equally balanced, leading to one standard mapping and three proposed retinal transformations: Standard-Net (unmatched), and Foveation-Net, Matched-Net and Ada-Gauss-Net (matched). Results show that for both perceptual systems which are foveated, "Foveation-Net has the highest i.i.d generalization while Ada-Gauss-Net has the greatest o.o.d generalization". The second result is that foveated processing afforded better robustness to occlusions, and the third is that such networks reproduce behavioural results of a window cue-conflict experiment. The last results study how foveation introduces a focusing strategy and keeps high-frequency information at the fovea, which seem less striking results.
Shape or Texture: Understanding Discriminative Features in CNNs
1 INTRODUCTION

Convolutional neural networks (CNNs) have achieved unprecedented performance in various computer vision tasks, such as image classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2015; He et al., 2016), object detection (Ren et al., 2015; He et al., 2017) and semantic segmentation (Long et al., 2015; Chen et al., 2017; Islam et al., 2017). Despite their black-box nature, various studies have shown that early layers in CNNs activate for low-level patterns, like edges and blobs, while deeper layers activate for more complex and high-level patterns (Zeiler & Fergus, 2014; Springenberg et al., 2014). The intuition is that this hierarchical learning of latent representations allows CNNs to recognize complex object shapes to correctly classify images (Kriegeskorte, 2015). In contrast, recent works (Brendel & Bethge, 2019; Hermann & Lampinen, 2020) have argued that CNNs trained on ImageNet (IN) (Deng et al., 2009) classify images mainly according to their texture, rather than object shape. These conflicting results have large implications for the field of computer vision, as they suggest that CNNs trained for image classification might be making decisions based largely on spurious correlations rather than a full understanding of different object categories. One example of these spurious correlations is how the Inception CNN (Szegedy et al., 2015) recognizes the difference between 'Wolf' and 'Husky' based on whether there is snow in the background (Tulio Ribeiro et al., 2016). Recognizing object shapes is important for generalization to out-of-domain examples (e.g., few-shot learning), as shape is more discriminative than texture when texture-affecting phenomena arise, such as lighting, shading, weather, motion blur, or when switching between synthetic and real data.
In addition to performance, identifying the discriminative features that CNNs use for decision making is critical for the transparency and further improvement of computer vision models. While a model may achieve good performance for a certain task, it cannot communicate to the user the reasons it made certain predictions. In other words, successful models need to be both good and interpretable (Lipton, 2019). This is crucial for many domains where causal mechanisms should play a significant role in short- or long-term decision making, such as healthcare (e.g., what in the MRI indicates a patient has cancer?). Additionally, if researchers intend for their algorithms to be deployed, there must be a certain degree of trust in the decision-making algorithm. One downside of the increasing abstraction capabilities of deep CNNs is the lack of interpretability of the latent representations, due to hidden-layer activations coding semantic concepts in a distributed fashion (Fong & Vedaldi, 2018). It has therefore been difficult to precisely quantify the type of information contained in the latent representations of CNNs. Some methods have looked at ways to analyze the latent representations of CNNs on a neuron-to-neuron level. For instance, Bau et al. (2017) quantify the number of interpretable neurons in a CNN by evaluating the semantic segmentation performance of an individual neuron from an upsampled latent representation. Later work (Fong & Vedaldi, 2018) then removed the assumption that each neuron encodes a single semantic concept. These works successfully quantify the number of filters that recognize textures or specific objects in a CNN, but do not identify shape information within these representations. The most similar works to ours are those that aim to directly quantify the shape information in CNNs. For example, Geirhos et al. (2018) analyzed the outputs of CNNs on images with conflicting shape and texture cues.
By using image stylization (Huang & Belongie, 2017), they generated the Stylized ImageNet dataset (SIN), where each image has an associated shape and texture label. They then measured the 'shape bias' and 'texture bias' of a CNN by calculating the percentage of images a CNN predicts as either the shape or the texture label, respectively. They conclude that CNNs are 'texture biased' and make predictions mainly from texture in an image. This metric has been used in subsequent work exploring shape and texture bias in CNNs (Hermann & Kornblith, 2019); however, the method only compares the output of a CNN, and fails to robustly quantify the amount of shape information contained in the latent representations (note that they refer to 'shape' as the entire 3D form of an object, including contours that are not part of the silhouette, while in our work we define 'shape' as the 2D class-agnostic silhouette of an object). Thus, the method from (Hermann & Kornblith, 2019) cannot answer a question of focus in our paper: 'What fraction of the object's shape is actually encoded in the latent representation?'. Further, as their metric for shape relies solely on the semantic class label, it precludes them from evaluating the encoded shape and associated categorical information on a per-pixel level. For instance, we show in Fig. 1 that shape-biased models (i.e., trained on stylized images) do not classify images based on the entire object shape: even though the CNN correctly classifies the image as a bird, only a partial binary mask (i.e., 'shape') can be extracted from the latent representations, and it cannot attribute the correct class label to the entire object region (i.e., semantic segmentation mask). Contributions. To address these issues, we perform an empirical study of the ability of CNNs to encode shape information on a neuron-to-neuron and per-pixel level.
To quantify these two aspects, we first approximate the mutual information of latent representations between pairs of semantically related images, which allows us to estimate the number of dimensions in the feature space dedicated to encoding shape and texture. We then propose a simple strategy to evaluate the amount of shape information contained in the internal representations of a CNN on a per-pixel level. The latter technique is utilized to distinguish the quality of different shape encodings, regardless of the number of neurons used in each encoding. After showing the efficacy of the two methods, we reveal a number of meaningful properties of CNNs with respect to their ability to encode shape information, including the following: (i) biasing a CNN towards shape predominantly changes the number of shape-encoding neurons in the last feature encoding stage; (ii) when a CNN is trained on ImageNet, the majority of shape information is learned during the first few epochs; (iii) a significant amount of shape is encoded in the early layers of CNNs, which can be utilized to extract additional shape information from the network by combining with shape encodings from deeper layers; (iv) encoding the shape and class of an object does not imply the useful encoding of localized per-pixel categorical information. All code will be released to reproduce data and results.

2 DO CNNS SPEND MORE LEARNING CAPACITY ON SHAPE OR TEXTURE?

With the goal of revealing the characteristics of where, when, and how much shape information is encoded in CNNs, we first aim to quantify the number of dimensions which encode shape in a CNN's latent representation. This analysis of the latent representations will allow us to determine where the network spends learning capacity on shape, while other methods that focus solely on the network outputs have difficulty measuring the difference in shape information between convolutional layers.
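For concreteness, the output-only cue-conflict metric referred to above (the 'shape bias' score in the style of Geirhos et al., 2018) can be sketched as follows; the function and variable names are our own illustration and not the original authors' code:

```python
def shape_bias(predictions, shape_labels, texture_labels):
    """Output-only 'shape bias' in the style of Geirhos et al. (2018).

    For cue-conflict images (shape from one class, texture from another),
    count how often the model picks the shape class vs. the texture class,
    and return the fraction of shape decisions among the decided images.
    """
    shape_hits = texture_hits = 0
    for pred, shape, texture in zip(predictions, shape_labels, texture_labels):
        if pred == shape:
            shape_hits += 1
        elif pred == texture:
            texture_hits += 1
    decided = shape_hits + texture_hits
    return shape_hits / decided if decided else float("nan")
```

Note that such a score is computed purely from class predictions; it says nothing about which layer, or how many neurons, carry the shape information, which is the gap the representation-level analysis below addresses.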
2.1 ESTIMATING SHAPE AND TEXTURE DIMENSIONALITY

Previous works (Bau et al., 2017; Esser et al., 2020) proposed various mechanisms to reveal the semantic concepts encoded in latent representations of CNNs. To quantify the amount of texture and shape information, we follow the approach of (Esser et al., 2020), where the number of neurons that represent a certain semantic concept is estimated. Given a pretrained CNN encoder, $E(I) = z$, where $z$ is a latent representation, we aim to estimate the dimensionality of the semantic concepts shape and texture within $z$. The main idea is that the mutual information between image pairs, $I^a$ and $I^b$, which are similar in a semantic concept, will be preserved in a neuron $z_i$ only if the neuron encodes that specific semantic concept. Hence, the mutual information between the corresponding neuron pairs, $z_i^a = E(I^a)$ and $z_i^b = E(I^b)$, can be used to quantify the degree to which a semantic concept is represented by the neuron. A simple and efficient estimate of their mutual information $MI(z_i^a, z_i^b)$ can be obtained from the correlation coefficient $\rho_i$. Indeed, under the assumption that the marginal distribution of the neuron $z_i$ is Gaussian, the correlation coefficient $\rho_i$ provides a lower bound on the true mutual information through the following relationship, which becomes tight for jointly Gaussian $z_i^a, z_i^b$ (Kraskov et al., 2004; Foster & Grassberger, 2011):

$$MI(z_i^a, z_i^b) \ge -\frac{1}{2}\log\left(1 - \rho_i^2\right), \quad \text{where} \quad \rho_i = \frac{\mathrm{Cov}(z_i^a, z_i^b)}{\sqrt{\mathrm{Var}(z_i^a)\,\mathrm{Var}(z_i^b)}}. \quad (1)$$

To quantify how well a concept $k$ is represented in terms of the number of neurons $|z_k|$ that encode the concept, we compute a score for each concept; the relative number of neurons is then determined with a softmax over these scores and a baseline score.
The latter is given by the number of neurons $|z|$, and the shape and texture scores are given by the sums of their respective correlation coefficients $\rho_i^{shape}$ and $\rho_i^{texture}$, which are computed according to Eq. 1 with statistics taken over image pairs that are similar in shape and texture, respectively. Note that $k \in \{1, 2\}$ in our case, and the remaining dimensions not captured by either of the two semantic factors are allocated to the residual semantic factor, which by definition captures all other variability in the latent representation $z$.

Stylized PASCAL VOC 2012 Dataset. Our goal is to estimate the dimensionality of two semantic concepts, (i) shape and (ii) texture, and to analyze pixel-wise shape information. We therefore must generate a dataset from which we can sample image pairs that share the semantic factors shape or texture, and that has per-pixel object annotations. To accomplish this goal, we create the Stylized PASCAL VOC 2012 (SVOC) dataset. Similar to SIN, we use the AdaIN style transfer algorithm (Huang & Belongie, 2017) to generate stylized images from the PASCAL VOC 2012 dataset (Everingham et al., 2010) with the same settings and hyperparameters as in the original paper (Huang & Belongie, 2017). We choose five random textures from the Describable Textures Dataset (Cimpoi et al., 2014) as the styles and stylize every PASCAL VOC image with all five of these textures. For a fair comparison with models trained on ImageNet variants, we take only the images from PASCAL VOC which contain a single object. With the SVOC dataset, we can now sample image pairs which are similar in texture, by using two images from different categories but stylized with the same texture (Fig. 2(A) left), or in shape, by using the same image stylized with two different textures (Fig. 2(A) right).
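As an illustration, the per-neuron correlation and the Gaussian lower bound of Eq. 1 might be computed as in the following minimal NumPy sketch (the function name and the numerical eps guard are our own, and the exact score normalization used by Esser et al. (2020) may differ):

```python
import numpy as np

def mi_lower_bound(za, zb, eps=1e-12):
    """Per-neuron Gaussian lower bound on mutual information (Eq. 1).

    za, zb: arrays of shape (n_pairs, n_neurons) holding activations for
    image pairs that share a semantic concept (e.g. the same VOC image
    under two different styles for the 'shape' concept).
    Returns (rho, mi) per neuron, with MI >= -0.5 * log(1 - rho^2).
    """
    za = za - za.mean(axis=0)
    zb = zb - zb.mean(axis=0)
    cov = (za * zb).mean(axis=0)
    rho = cov / np.sqrt(za.var(axis=0) * zb.var(axis=0) + eps)
    rho = np.clip(rho, -1.0 + eps, 1.0 - eps)  # keep the log argument positive
    mi = -0.5 * np.log(1.0 - rho ** 2)
    return rho, mi
```

Summing the per-neuron $\rho_i$ over shape-sharing pairs and over texture-sharing pairs then yields the two concept scores that enter the softmax allocation described above.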
Texture-vs-shape sensitivity is a basic and important question in understanding how deep convolutional nets work. This paper investigates this question by asking how CNNs represent shape versus texture information internally. Using the (texture-vs-shape) Stylized ImageNet pioneered by Geirhos et al. (2018), the paper applies the dimensionality estimation method of Esser et al. (2020) and a segmentation-readout method for quantifying and visualizing the encoding of shape information in networks. Several different network architectures and layers are compared, and results are compared to baselines from previous papers as well as natural lower and upper bounds. The presentation is clear.
Shape or Texture: Understanding Discriminative Features in CNNs
1 INTRODUCTION . Convolutional neural networks ( CNNs ) have achieved unprecedented performance in various computer vision tasks , such as image classification ( Krizhevsky et al. , 2012 ; Simonyan & Zisserman , 2015 ; He et al. , 2016 ) , object detection ( Ren et al. , 2015 ; He et al. , 2017 ) and semantic segmentation ( Long et al. , 2015 ; Chen et al. , 2017 ; Islam et al. , 2017 ) . Despite their black box nature , various studies have shown that early layers in CNNs activate for low-level patterns , like edges and blobs , while deeper layers activate for more complex and high-level patterns ( Zeiler & Fergus , 2014 ; Springenberg et al. , 2014 ) . The intuition is that this hierarchical learning of latent representations allows CNNs to recognize complex object shapes to correctly classify images ( Kriegeskorte , 2015 ) . In contrast , recent works ( Brendel & Bethge , 2019 ; Hermann & Lampinen , 2020 ) have argued that CNNs trained on ImageNet ( IN ) ( Deng et al. , 2009 ) classify images mainly according to their texture , rather than object shape . These conflicting results have large implications for the field of computer vision as it suggests that CNNs trained for image classification might be making decisions based largely off spurious correlations rather than a full understanding of different object categories . One example of these spurious correlations is how the Inception CNN ( Szegedy et al. , 2015 ) recognizes the difference between ‘ Wolf ’ and ‘ Husky ’ , based on whether there is snow in the background ( Tulio Ribeiro et al. , 2016 ) . Recognizing object shapes is important for the generalization to out-of-domain examples ( e.g. , few-shot learning ) , as shape is more discriminative than texture when texture-affecting phenomena arise , such as lighting , shading , weather , motion blur , or when switching between synthetic and real data . 
In addition to performance , identifying the discriminative features that CNNs use for decision making is critical for the transparency and further improvements of computer vision models . While the model may achieve good performance for a certain task , it can not communicate to the user about the reasons it made certain predictions . In other words , successful mod- els need to be good , and interpretable ( Lipton , 2019 ) . This is crucial for many domains where causal mechanisms should play a significant role in short or long-term decision making such as healthcare ( e.g. , what in the MRI indicates a patient has cancer ? ) . Additionally , if researchers intend for their algorithms to be deployed , there must be a certain degree of trust in the decision making algorithm . One downside of the increasing abstraction capabilities of deep CNNs is the lack of interpretability of the latent representations due to hidden layer activations coding semantic concepts in a distributed fashion ( Fong & Vedaldi , 2018 ) . It has therefore been difficult to precisely quantify the type of information contained in the latent representations of CNNs . Some methods have looked at ways to analyze the latent representations of CNNs on a neuron-to-neuron level . For instance , ( Bau et al. , 2017 ) quantify the number of interpretable neurons for a CNN by evaluating the semantic segmentation performance of an individual neuron from an upsampled latent representation . Later work ( Fong & Vedaldi , 2018 ) then removed the assumption that each neuron encodes a single semantic concept . These works successfully quantify the number of filters that recognize textures or specific objects in a CNN , but do not identify shape information within these representations . The most similar works to ours are those that aim to directly quantify the shape information in CNNs . For example , ( Geirhos et al. , 2018 ) analyzed the outputs of CNNs on images with conflicting shape and texture cues . 
By using image stylization ( Huang & Belongie , 2017 ) , they generated the Stylized ImageNet dataset ( SIN ) , where each image has an associated shape and texture label . They then measured the ‘ shape bias ’ and ‘ texture bias ’ of a CNN by calculating the percentage of images a CNN predicts as either the shape or texture label , respectively . They conclude that CNNs are ‘ texture biased ’ and make predictions mainly from texture in an image . This metric has been used in subsequent work exploring shape and texture bias in CNNs ( Hermann & Kornblith , 2019 ) ; however , the method only compares the output of a CNN , and fails to robustly quantify the amount of shape information contained in the latent representations ( note that they refer to ‘ shape ’ as the entire 3D form of an object , including contours that are not part of the silhouette , while in our work , we define ‘ shape ’ as the 2D class-agnostic silhouette of an object ) . Thus , the method from ( Hermann & Kornblith , 2019 ) can not answer a question of focus in our paper : ‘ What fraction of the object ’ s shape is actually encoded in the latent representation ? ’ . Further , as their metric for shape relies solely on the semantic class label , it precludes them from evaluating the encoded shape and associated categorical information on a per-pixel level . For instance , we show in Fig . 1 that shape biased models ( i.e. , trained on stylized images ) do not classify images based on the entire object shape : even though the CNN correctly classifies the image as a bird , only the partial binary mask ( i.e. , ‘ shape ’ ) can be extracted from the latent representations and it can not attribute the correct class label to the entire object region ( i.e. , semantic segmentation mask ) . Contributions . To address these issues , we perform an empirical study on the ability of CNNs to encode shape information on a neuron-to-neuron and per-pixel level . 
To quantify these two aspects , we first approximate the mutual information of latent representations between pairs of semantically related images which allows us to estimate the number of dimensions in the feature space dedicated to encoding shape and texture . We then propose a simple strategy to evaluate the amount of shape information contained in the internal representations of a CNN , on a per-pixel level . The latter technique is utilized to distinguish the quality of different shape encodings , regardless of the number of neurons used in each encoding . After showing the efficacy of the two methods , we reveal a number of meaningful properties of CNNs with respect to their ability to encode shape information , including the following : ( i ) Biasing a CNN towards shape predominantly changes the number of shape encoding neurons in the last feature encoding stage . ( ii ) When a CNN is trained on ImageNet , the majority of shape information is learned during the first few epochs . ( iii ) A significant amount of shape is encoded in the early layers of CNNs , which can be utilized to extract additional shape information from the network , by combining with shape encodings from deeper layers . ( iv ) Encoding the shape and class of an object does not imply the useful encoding of localized per-pixel categorical information . All code will be released to reproduce data and results . 2 DO CNNS SPEND MORE LEARNING CAPACITY ON SHAPE OR TEXTURE ? . With the goal of revealing the characteristics of where , when , and how much shape information is encoded in CNNs , we first aim to quantify the number of dimensions which encode shape in a CNN ’ s latent representation . This analysis on the latent representations will allow us to determine where the network spends learning capacity on shape , while other methods that focus solely on the network outputs have difficulty measuring the difference in shape information between convolutional layers . 
2.1 ESTIMATING SHAPE AND TEXTURE DIMENSIONALITY . Previous works ( Bau et al. , 2017 ; Esser et al. , 2020 ) proposed various mechanisms to reveal the semantic concepts encoded in latent representations of CNNs . To quantify the amount of texture and shape information , we follow the approach of ( Esser et al. , 2020 ) , where the number of neurons that represent a certain semantic concept is estimated . Given a pretrained CNN encoder , E ( I ) = z , where z is a latent representation , we aim to estimate the dimensionality of the semantic concepts shape and texture within z . The main idea is that the mutual information between image pairs , Ia and Ib , which are similar in a semantic concept , will be preserved in a neuron zi only if the neuron encodes that specific semantic concept . Hence , the mutual information between the corresponding neuron pairs , zai = E ( I a ) and zbi = E ( I b ) , can be used to quantify the degree to which a semantic concept is represented by the neuron . A simple and efficient estimate for their mutual information MI ( zai , z b i ) can be obtained based on the correlation coefficient ρi . Indeed , under the assumption that the marginal distribution of the neuron zi is Gaussian , the correlation coefficient ρi provides a lower bound on the true mutual information through the following relationship which becomes tight for jointly Gaussian zai , z b i ( Kraskov et al. , 2004 ; Foster & Grassberger , 2011 ) . MI ( zai , z b i ) ≥ − 1 2 log ( 1− ρ2i ) , where ρi = Cov ( zai , z b i ) √ Var ( zai ) Var ( z b i ) . ( 1 ) To quantify how well a concept k is represented in terms of the number of neurons |zk| that encode the concept , we compute a score for each concept and the relative number of neurons is determined with a softmax over these scores and a baseline score . 
The latter is given by the number of neurons |z| , and the shape and texture scores are given by the sums of their respective correlation coefficients ρ_i^shape and ρ_i^texture , which are computed according to Eq . 1 with statistics taken over image pairs that are similar in shape and texture , respectively . Note that k ∈ { 1 , 2 } in our case , and the remaining dimensions not captured by either of the two semantic factors are allocated to the residual semantic factor , which by definition captures all other variability in the latent representation z . Stylized PASCAL VOC 2012 Dataset . Our goal is to estimate the dimensionality of two semantic concepts , ( i ) shape and ( ii ) texture , and to analyze pixel-wise shape information . Therefore we must generate a dataset from which we can sample image pairs that share the semantic factor shape or texture , and that has per-pixel object annotations . To accomplish this goal , we create the Stylized PASCAL VOC 2012 ( SVOC ) dataset . Similar to SIN , we use the AdaIN style transfer algorithm ( Huang & Belongie , 2017 ) to generate stylized images from the PASCAL VOC 2012 dataset ( Everingham et al. , 2010 ) with the same settings and hyperparameters as in the original paper ( Huang & Belongie , 2017 ) . We choose five random textures from the Describable Textures Dataset ( Cimpoi et al. , 2014 ) as the styles and stylize every PASCAL VOC image with all five of these textures . For a fair comparison with models trained on ImageNet variants , we take only the images from PASCAL VOC which contain a single object . With the SVOC dataset , we can now sample image pairs which are similar in texture , by using two images from different categories but stylized with the same texture ( Fig . 2 ( A ) left ) , or in shape , by using the same image stylized with two different textures ( Fig . 2 ( A ) right ) .
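The allocation step described above can be sketched as follows. This is our own hedged reading of the procedure: each concept's score is the sum of its per-neuron correlation coefficients, the baseline score is |z|, and a softmax over the three scores gives the fraction of neurons per concept; the exact normalization used by Esser et al. (2020) may differ in detail.

```python
import numpy as np

def concept_dimensions(rho_shape, rho_texture):
    """Allocate the |z| neurons among {shape, texture, residual}.

    rho_shape, rho_texture: per-neuron correlation coefficients from
    Eq. 1, computed over shape-similar and texture-similar pairs.
    A sketch of the allocation, not a verified reimplementation.
    """
    n = len(rho_shape)
    # Concept scores plus the baseline score |z| for the residual factor.
    scores = np.array([rho_shape.sum(), rho_texture.sum(), float(n)])
    frac = np.exp(scores - scores.max())  # numerically stable softmax
    frac = frac / frac.sum()
    return {"shape": n * frac[0], "texture": n * frac[1], "residual": n * frac[2]}
```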
The method provides two measures for assessing the degree to which shape is represented in CNNs. The first measure estimates, within a given representation layer, the dimensionality required to encode shape information. The second evaluates the per-pixel shape representation by attempting to generate the input's segmentation mask from the given representation (the activation of the image at that layer).
SP:04982111c9f052a633c1cacb113263af408e6a24
On the Theory of Implicit Deep Learning: Global Convergence with Implicit Layers
1 INTRODUCTION . A feedforward deep neural network consists of a stack of H layers , where H is the depth of the network . The value for the depth H is typically a hyperparameter and is chosen by network designers ( e.g. , ResNet-101 in He et al . 2016 ) . Each layer computes some transformation of the output of the previous layer . Surprisingly , several recent studies achieved results competitive with state-of-the-art performance by using the same transformation for each layer with weight tying ( Dabre & Fujita , 2019 ; Bai et al. , 2019b ; Dehghani et al. , 2019 ) . In general terms , the output of the l-th layer with weight tying can be written as

$$z^{(l)} = h(z^{(l-1)}; x, \theta) \quad \text{for } l = 1, 2, \dots, H-1, \tag{1}$$

where x is the input to the neural network , z^{(l)} is the output of the l-th layer ( with z^{(0)} = x ) , θ represents the trainable parameters that are shared among different layers ( i.e. , weight tying ) , and z^{(l-1)} ↦ h ( z^{(l-1)} ; x , θ ) is some continuous function that transforms z^{(l-1)} given x and θ . With weight tying , the memory requirement does not increase with the depth H in the forward pass . However , the efficient backward pass to compute gradients for training the network usually requires storing the values of the intermediate layers . Accordingly , the overall computational requirement typically increases with the finite depth H even with weight tying . Instead of using a finite depth H , Bai et al . ( 2019a ) recently introduced the deep equilibrium model , which is equivalent to running an infinitely deep feedforward network with weight tying . Instead of running the layer-by-layer computation in equation ( 1 ) , the deep equilibrium model uses root-finding to directly compute a fixed point z^* = lim_{l→∞} z^{(l)} , where the limit can be ensured to exist by a suitable choice of h .
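To make the weight-tied recursion and its infinite-depth limit concrete, here is a small numpy sketch with an illustrative contraction h of our own choosing (not the architecture of Bai et al.): a sufficiently deep weight-tied stack and simple fixed-point iteration agree on z*.

```python
import numpy as np

def weight_tied_forward(x, W, depth):
    """Layer-by-layer computation of Eq. (1) with weight tying,
    using the illustrative map h(z) = 0.5 * tanh(W @ z) + x
    (a contraction we chose so the limit exists; not from the paper)."""
    z = x
    for _ in range(depth):
        z = 0.5 * np.tanh(W @ z) + x
    return z

def equilibrium(x, W, tol=1e-10, max_iter=10_000):
    """Fixed point z* = h(z*; x, W), found here by simple iteration
    (a stand-in for the root-finding used by Bai et al. (2019a))."""
    z = x
    for _ in range(max_iter):
        z_next = 0.5 * np.tanh(W @ z) + x
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z
```

With this h, the Lipschitz constant is at most 0.5, so both the deep stack and the iteration converge geometrically to the same z*.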
We can train the deep equilibrium model with gradient-based optimization by analytically backpropagating through the fixed point using implicit differentiation ( e.g. , Griewank & Walther , 2008 ; Bell & Burke , 2008 ; Christianson , 1994 ) . With numerical experiments , Bai et al . ( 2019a ) showed that the deep equilibrium model can improve performance over previous state-of-the-art models while significantly reducing memory consumption . Despite the remarkable performance of deep equilibrium models , our theoretical understanding of their properties is still limited . Indeed , immense efforts are still underway to mathematically understand deep linear networks , which have finite values for the depth H without weight tying ( Saxe et al. , 2014 ; Kawaguchi , 2016 ; Hardt & Ma , 2017 ; Laurent & Brecht , 2018 ; Arora et al. , 2018 ; Bartlett et al. , 2019 ; Du & Hu , 2019 ; Arora et al. , 2019a ; Zou et al. , 2020b ) . In deep linear networks , the function h at each layer is linear in θ and linear in x ; i.e. , the map ( x , θ ) ↦ h ( z^{(l-1)} ; x , θ ) is bilinear . Despite this linearity , several key properties of deep learning are still present in deep linear networks . For example , the gradient dynamics are nonlinear and the objective function is nonconvex . Accordingly , understanding the gradient dynamics of deep linear networks is considered a valuable step towards the mathematical understanding of deep neural networks ( Saxe et al. , 2014 ; Arora et al. , 2018 ; 2019a ) . In this paper , inspired by the previous studies of deep linear networks , we initiate a theoretical study of the gradient dynamics of deep equilibrium linear models as a step towards theoretically understanding general deep equilibrium models . As we shall see in Section 2 , the function h at each layer is nonlinear in θ for deep equilibrium linear models , whereas it is linear for deep linear networks .
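Backpropagating through the fixed point needs only the implicit function theorem, not the unrolled layers: at z* = h(z*), the gradient of a loss L(z*) with respect to an input is obtained by one linear solve against (I − ∂h/∂z)ᵀ. A sketch with a linear map h (our simplification, chosen so z* has a closed form), checked against finite differences:

```python
import numpy as np

def fixed_point(x, W, gamma):
    # z* solves z* = gamma * W @ z* + x, i.e. (I - gamma*W) z* = x.
    m = len(x)
    return np.linalg.solve(np.eye(m) - gamma * W, x)

def grad_x_implicit(x, W, gamma):
    """Gradient of L(z*) = 0.5 * ||z*||^2 with respect to x via implicit
    differentiation: dL/dx = (I - dh/dz)^{-T} dL/dz*, evaluated at z*,
    without storing or unrolling any intermediate layers."""
    m = len(x)
    z_star = fixed_point(x, W, gamma)
    return np.linalg.solve((np.eye(m) - gamma * W).T, z_star)
```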
This additional nonlinearity is essential to enforce the existence of the fixed point z^* . The additional nonlinearity , the infinite depth , and weight tying are the three key properties of deep equilibrium linear models that are absent in deep linear networks . Because of these three differences , we cannot rely on the previous proofs and results in the literature on deep linear networks . Furthermore , we analyze gradient dynamics , whereas Kawaguchi ( 2016 ) ; Hardt & Ma ( 2017 ) ; Laurent & Brecht ( 2018 ) studied the loss landscape of deep linear networks . We also consider a general class of loss functions for both regression and classification , whereas Saxe et al . ( 2014 ) ; Arora et al . ( 2018 ) ; Bartlett et al . ( 2019 ) ; Arora et al . ( 2019a ) ; Zou et al . ( 2020b ) analyzed the gradient dynamics of deep linear networks in the setting of the square loss . Accordingly , we employ different approaches in our analysis and derive qualitatively and quantitatively different results compared with previous studies . In Section 2 , we provide theoretical and numerical observations that further motivate us to study deep equilibrium linear models . In Section 3 , we mathematically prove convergence of the gradient dynamics to global minima and establish the exact relationship between the gradient dynamics of deep equilibrium linear models and that of the adaptive trust region method . Section 5 gives a review of related literature , which strengthens the main motivation of this paper along with the above discussion ( in Section 1 ) . Finally , Section 6 presents concluding remarks on our results , the limitations of this study , and future research directions . 2 PRELIMINARIES . We begin by defining the notation . We are given a training dataset ( ( x_i , y_i ) )_{i=1}^n of n samples , where x_i ∈ X ⊆ R^{m_x} and y_i ∈ Y ⊆ R^{m_y} are the i-th input and the i-th target output , respectively .
We would like to learn a hypothesis ( or predictor ) from a parametric family H = { f_θ : R^{m_x} → R^{m_y} | θ ∈ Θ } by minimizing the objective function L ( called the empirical loss ) over θ ∈ Θ : $L(\theta) = \sum_{i=1}^{n} \ell(f_\theta(x_i), y_i)$ , where θ is the parameter vector and $\ell : \mathbb{R}^{m_y} \times \mathcal{Y} \to \mathbb{R}_{\ge 0}$ is the loss function that measures the difference between the prediction f_θ ( x_i ) and the target y_i for each sample . For example , when the parametric family of interest is the class of linear models H = { x ↦ Wφ ( x ) | W ∈ R^{m_y×m} } , the objective function L can be rewritten as

$$L_0(W) = \sum_{i=1}^{n} \ell(W\phi(x_i), y_i), \tag{2}$$

where the feature map φ is an arbitrary fixed function that is allowed to be nonlinear and is chosen by model designers to transform an input x ∈ R^{m_x} into the desired features φ ( x ) ∈ R^m . We use vec ( W ) ∈ R^{m_y m} to represent the standard vectorization of a matrix W ∈ R^{m_y×m} . Instead of linear models , our interest in this paper lies in deep equilibrium models . The output z^* of the last hidden layer of a deep equilibrium model is defined by

$$z^* = \lim_{l\to\infty} z^{(l)} = \lim_{l\to\infty} h(z^{(l-1)}; x, \theta) = h(z^*; x, \theta), \tag{3}$$

where the last equality follows from the continuity of z ↦ h ( z ; x , θ ) ( i.e. , the limit commutes with the continuous function ) . Thus , z^* can be computed by solving the equation z^* = h ( z^* ; x , θ ) without running the infinitely deep layer-by-layer computation . The gradients with respect to the parameters are computed analytically via backpropagation through z^* using implicit differentiation . 2.1 DEEP EQUILIBRIUM LINEAR MODELS . A deep equilibrium linear model is an instance of the family of deep equilibrium models and is defined by setting the function h at each layer as follows :

$$h(z^{(l-1)}; x, \theta) = \gamma\,\sigma(A)\,z^{(l-1)} + \phi(x), \tag{4}$$

where θ = ( A , B ) with two trainable parameter matrices A ∈ R^{m×m} and B ∈ R^{m_y×m} .
Along with a positive real number γ ∈ ( 0 , 1 ) , the nonlinear function σ is used to ensure the existence of the fixed point and is defined by

$$\sigma(A)_{ij} = \frac{\exp(A_{ij})}{\sum_{k=1}^{m} \exp(A_{kj})}.$$

The class of deep equilibrium linear models is given by H = { x ↦ B ( lim_{l→∞} z^{(l)} ( x , A ) ) | A ∈ R^{m×m} , B ∈ R^{m_y×m} } , where z^{(l)} ( x , A ) = γσ ( A ) z^{(l-1)} + φ ( x ) . Therefore , the objective function for deep equilibrium linear models can be written as

$$L(A, B) = \sum_{i=1}^{n} \ell\Big(B\big(\lim_{l\to\infty} z^{(l)}(x_i, A)\big), y_i\Big). \tag{5}$$

The outputs of deep equilibrium linear models f_θ ( x ) = B ( lim_{l→∞} z^{(l)} ( x , A ) ) are nonlinear and non-multilinear in the optimization variable A . This is in contrast to linear models and deep linear networks . From the optimization viewpoint , linear models Wφ ( x ) are called linear because they are linear in the optimization variables W . Deep linear networks W^{(H)} W^{(H-1)} ⋯ W^{(1)} x are multilinear in the optimization variables ( W^{(1)} , W^{(2)} , . . . , W^{(H)} ) ( this holds also when we replace x by φ ( x ) ) . This difference creates a challenge in the analysis of deep equilibrium linear models . Following previous works on gradient dynamics of different machine learning models ( Saxe et al. , 2014 ; Ji & Telgarsky , 2020 ) , we consider the process of learning deep equilibrium linear models via gradient flow :

$$\frac{d}{dt} A_t = -\frac{\partial L}{\partial A}(A_t, B_t), \qquad \frac{d}{dt} B_t = -\frac{\partial L}{\partial B}(A_t, B_t), \qquad \forall t \ge 0, \tag{6}$$

where ( A_t , B_t ) represents the model parameters at time t with an arbitrary initialization ( A_0 , B_0 ) . Throughout this paper , a feature map φ and a real number γ ∈ ( 0 , 1 ) are given and arbitrary ( except in experimental observations ) , and we omit their universal quantifiers for brevity . 2.2 PRELIMINARY OBSERVATION FOR ADDITIONAL MOTIVATION . Our analysis is chiefly motivated as a step towards mathematically understanding general deep equilibrium models ( as discussed in Sections 1 and 5 ) .
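Because h in Eq. (4) is affine in z, the fixed point admits the closed form z* = (I − γσ(A))⁻¹ φ(x): since σ(A) is column-stochastic and γ ∈ (0, 1), the iteration is a contraction in the ℓ1 norm and the matrix is invertible. A sketch verifying that layer-by-layer iteration converges to this closed form (function names are ours):

```python
import numpy as np

def col_softmax(A):
    """sigma(A)_{ij} = exp(A_{ij}) / sum_k exp(A_{kj}): a column-wise
    softmax, so every column of sigma(A) sums to one."""
    E = np.exp(A - A.max(axis=0, keepdims=True))  # stabilized
    return E / E.sum(axis=0, keepdims=True)

def deq_linear(phi_x, A, B, gamma):
    """Output of the deep equilibrium linear model: solve the linear
    fixed-point equation z* = gamma * sigma(A) @ z* + phi(x) directly,
    then apply the readout B."""
    m = len(phi_x)
    z_star = np.linalg.solve(np.eye(m) - gamma * col_softmax(A), phi_x)
    return B @ z_star
```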
In addition to the main motivation , this section provides supplementary motivation through theoretical and numerical preliminary observations . In general deep equilibrium models , the limit lim_{l→∞} z^{(l)} is not ensured to exist ( see Appendix C ) . In this view , the class of deep equilibrium linear models is one instance where the limit is guaranteed to exist for any values of the model parameters , as stated in Proposition 1 : Proposition 1 . Given any ( x , A ) , the sequence ( z^{(l)} ( x , A ) )_l in Euclidean space R^m converges . Proof . We use the nonlinearity σ to ensure the convergence in our proof in Appendix A.5 . Proposition 1 shows that we can indeed define the deep equilibrium linear model with lim_{l→∞} z^{(l)} = z^* ( x , A ) . Therefore , understanding this model is a sensible starting point for a theory of general deep equilibrium models . As our analysis has been mainly motivated by theory , it would be of additional value to discuss whether the model would also make sense in practice , at least potentially in the future . Consider an ( unknown ) underlying data distribution P ( x , y ) = P ( y|x ) P ( x ) . Intuitively , if the mean of P ( y|x ) is approximately given by a ( true but unknown ) deep equilibrium linear model , then it would make sense to use the parametric family of deep equilibrium linear models to obtain this inductive bias in practice . To confirm this intuition , we conducted numerical simulations . To generate datasets , we first drew uniformly at random 200 input images for input data points x_i from a standard image dataset — CIFAR-10 , CIFAR-100 , or Kuzushiji-MNIST ( Krizhevsky & Hinton , 2009 ; Clanuwat et al. , 2019 ) . We then generated targets as y_i = B^* ( lim_{l→∞} z^{(l)} ( x_i , A^* ) ) + δ_i , where δ_i ~ N ( 0 , 1 ) i.i.d . Each entry of the true ( unknown ) matrices A^* and B^* was independently drawn from the standard normal distribution .
For each dataset generated in this way , we used stochastic gradient descent ( SGD ) to train linear models , fully-connected feedforward deep neural networks with ReLU nonlinearity ( DNNs ) , and deep equilibrium linear models . For all models , we fixed φ ( x ) = x . See Appendix D for more details of the experimental settings . The results of this numerical test are presented in Figure 1 . In the figure , the plotted lines indicate the mean values over five random trials , whereas the shaded regions represent error bars of one standard deviation . The plots for linear models and deep equilibrium linear models are shown with the best and worst learning rates ( separately for each model , in terms of the final test errors at epoch 5000 ) from the set of learning rates S_LR = { 0.01 , 0.005 , 0.001 , 0.0005 , 0.0001 , 0.00005 } . The plots for DNNs are shown with the best learning rates ( separately for each depth H ) from the set S_LR . As can be seen , all models performed approximately the same at the initial points , but deep equilibrium linear models outperformed both linear models and DNNs in test error after training , confirming our intuition above . Moreover , we confirmed qualitatively the same behavior on four more datasets , as well as for DNNs with and without bias terms , in Appendix D. These observations additionally motivated us to study deep equilibrium linear models to obtain our main results in the next section . The purpose of these experiments is to provide a secondary motivation for our theoretical analyses .
This submission studies the dynamics and convergence properties of "deep equilibrium models", which are parametric fixed-point iterations corresponding to the infinite depth limit of "weight-tied" neural networks. As the authors point out, these networks differ from deep linear networks and networks in the NTK scaling in that the optimization remains nonlinear w/r/t the parameters. The authors prove two results: first, they establish linear convergence to the global minimum under the relatively strict assumption of a "local" PL-inequality; secondly, they show that the dynamics of the deep equilibrium models differs from gradient descent dynamics and, in fact, is related to a trust region Newton method.
SP:894dddca0a75e8ac6e32583238fa19efce663601
The paper discusses the theory of deep equilibrium models with linear activations. The model weights are softmaxed to ensure that inference converges to a fixed point, a necessary condition for training deep equilibrium models. The paper then analyzes the gradient flow dynamics of such models. The main result is that linear-rate convergence is guaranteed for a class of loss functions, including quadratic and logistic losses, when training with gradient flow. This conclusion is supported by experiments conducted in a teacher-student-like setup, where the labels are generated by a teacher deep equilibrium model, showing that training does converge in practice.
Symmetry-Aware Actor-Critic for 3D Molecular Design
Automating molecular design using deep reinforcement learning ( RL ) has the potential to greatly accelerate the search for novel materials . Despite recent progress on leveraging graph representations to design molecules , such methods are fundamentally limited by the lack of three-dimensional ( 3D ) information . In light of this , we propose a novel actor-critic architecture for 3D molecular design that can generate molecular structures unattainable with previous approaches . This is achieved by exploiting the symmetries of the design process through a rotationally covariant state-action representation based on a spherical harmonics series expansion . We demonstrate the benefits of our approach on several 3D molecular design tasks , where we find that building in such symmetries significantly improves generalization and the quality of generated molecules . 1 INTRODUCTION . The search for molecular structures with desirable properties is a challenging task with important applications in de novo drug design and materials discovery ( Schneider et al. , 2019 ) . There exists a plethora of machine learning approaches to accelerate this search , including generative models based on variational autoencoders ( VAEs ) ( Gómez-Bombarelli et al. , 2018 ) , recurrent neural networks ( RNNs ) ( Segler et al. , 2018 ) , and generative adversarial networks ( GANs ) ( De Cao & Kipf , 2018 ) . However , the reliance on a sufficiently large dataset for exploring unknown regions of chemical space is a severe limitation of such supervised models . Recent RL-based methods ( e.g. , Olivecrona et al . ( 2017 ) , Jørgensen et al . ( 2019 ) , Simm et al . ( 2020 ) ) mitigate the need for an existing dataset of molecules as they only require access to a reward function . Most approaches rely on graph representations of molecules , where atoms and bonds are represented by nodes and edges , respectively .
This is a strongly simplified model designed for the description of single organic molecules . It is unsuitable for encoding metals and molecular clusters , as it lacks information about the relative position of atoms in 3D space . Further , geometric constraints on the design process cannot be included , e.g. , those given by the active site of an enzyme . A more general representation , closer to the physical system , is one in which a molecule is described by its atoms ’ positions in Cartesian coordinates . However , it would be very inefficient to naively learn a model based on this representation . That is because molecular properties such as the energy are invariant ( i.e. , unchanged ) under symmetry operations like translation or rotation of all atomic positions . A model without the right inductive bias would thus have to learn those symmetries from scratch . In this work , we develop a novel RL approach for designing molecules in Cartesian coordinates that explicitly encodes these symmetry operations . The agent builds molecules by consecutively placing atoms such that if the generated structure is rotated or translated , the agent ’ s action is rotated and translated accordingly ; this way , the reward remains the same ( see Fig . 1 ( a ) ) . We achieve this through a rotationally covariant state representation based on spherical harmonics , which we integrate into a novel actor-critic network architecture with an auto-regressive policy that maintains the desired covariance . Building in this inductive bias enables us to generate molecular structures with more complex coordination geometry than the class of molecules that were attainable with previous approaches . Finally , we perform experiments on several 3D molecular design tasks , where we find that our approach significantly improves the generalization capabilities of the RL agent and the quality of the generated molecules .
In summary, our contributions are as follows:
• we propose the first approach for 3D molecular design that exploits symmetries of the design process by leveraging a rotationally covariant state representation;
• we integrate this state representation into an actor-critic neural network architecture with a rotationally covariant auto-regressive policy, where the orientation of the atoms to be placed is modeled through a flexible distribution based on spherical harmonics;
• we demonstrate the benefits of our approach on several 3D molecular design tasks, including a newly proposed task that showcases the generalization capabilities of our agent.

2 BACKGROUND

2.1 REINFORCEMENT LEARNING FOR MOLECULAR DESIGN

In the standard RL setting (Sutton & Barto, 2018), an agent interacts with the environment to maximize its reward. Formally, such an environment is described by a Markov decision process (MDP) $M = (\mathcal{S}, \mathcal{A}, T, \mu_0, \gamma, r)$ with states $s_t \in \mathcal{S}$, actions $a_t \in \mathcal{A}$, transition dynamics $T : \mathcal{S} \times \mathcal{A} \mapsto \mathcal{S}$, initial state distribution $\mu_0$, discount factor $\gamma \in (0, 1]$, and reward function $r : \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$. The goal is to learn a stochastic policy $\pi(a_t \mid s_t)$ that maximizes the expected discounted return $J(\theta) = \mathbb{E}_{s_0 \sim \mu_0}[V^\pi(s_0)]$, where the value function $V^\pi(s_t) = \mathbb{E}_\pi\big[\sum_{t'=t}^{T} \gamma^{t'} r(s_{t'}, a_{t'}) \,\big|\, s_t\big]$ is defined as the expected discounted return when starting from state $s_t$ and following policy $\pi$.

Following Simm et al. (2020), we design molecules by iteratively picking atoms from a bag and positioning them on a 3D canvas. Such a sequential decision-making problem is described by an MDP where the state $s_t = (\mathcal{C}_t, \mathcal{B}_t)$ comprises both the canvas $\mathcal{C}_t$ and the bag $\mathcal{B}_t$. The canvas $\mathcal{C}_t = \mathcal{C}_0 \cup \{(e_i, x_i)\}_{i=0}^{t-1}$ is a set of atoms with chemical element $e_i \in \{\mathrm{H}, \mathrm{C}, \mathrm{N}, \mathrm{O}, \ldots\}$ and position $x_i \in \mathbb{R}^3$ placed up to time $t-1$, where $\mathcal{C}_0$ can either be empty or contain a set of initially placed atoms.
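To make the canvas-and-bag MDP concrete, the state, the deterministic placement step $T(s_t, a_t)$, and the energy-difference reward described next can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: `State`, `transition`, and `reward` are our names, and the `energy` callable is a stand-in for the PM6/SPARROW calculator used in the paper.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

import numpy as np

Atom = Tuple[str, np.ndarray]  # (chemical element e_i, position x_i in R^3)

@dataclass
class State:
    canvas: List[Atom]   # C_t: atoms placed so far
    bag: Dict[str, int]  # B_t: element -> multiplicity m(e)

def transition(s: State, element: str, position: np.ndarray) -> State:
    """Deterministic transition T(s_t, a_t): move one atom from bag to canvas."""
    assert s.bag.get(element, 0) > 0, "element not available in the bag"
    bag = dict(s.bag)
    bag[element] -= 1
    if bag[element] == 0:
        del bag[element]
    return State(canvas=s.canvas + [(element, position)], bag=bag)

def reward(s: State, element: str, position: np.ndarray,
           energy: Callable[[List[Atom]], float]) -> float:
    """r(s_t, a_t) = -dE = -(E(C_{t+1}) - [E(C_t) + E({(e, 0)})])."""
    s_next = transition(s, element, position)
    atom_alone = [(element, np.zeros(3))]
    return -(energy(s_next.canvas) - (energy(s.canvas) + energy(atom_alone)))
```

For example, starting from $\mathcal{C}_0 = \emptyset$ and $\mathcal{B}_0 = \mathrm{SOF}_4$, six calls to `transition` empty the bag and terminate the episode.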
The number of atoms on the canvas is denoted by $|\mathcal{C}_t|$. The bag $\mathcal{B}_t = \{(e, m(e))\}$ is a multi-set of atoms yet to be placed, where $m(e)$ is the multiplicity of the element $e$. Each action $a_t = (e_t, x_t)$ consists of the element $e_t \in \mathcal{B}_t$ and position $x_t \in \mathbb{R}^3$ of the next atom to be added to the canvas. Placing an atom through action $a_t$ in state $s_t$ is modeled by a deterministic transition function $T(s_t, a_t)$ that yields the next state $s_{t+1} = (\mathcal{C}_{t+1}, \mathcal{B}_{t+1})$ with $\mathcal{B}_{t+1} = \mathcal{B}_t \setminus e_t$. The reward function $r(s_t, a_t) = -\Delta E(s_t, a_t)$ is given by the negative energy difference between the resulting structure described by $\mathcal{C}_{t+1}$ and the sum of energies of the current structure $\mathcal{C}_t$ and a new atom of element $e_t$ placed at the origin, i.e., $\Delta E(s_t, a_t) = E(\mathcal{C}_{t+1}) - [E(\mathcal{C}_t) + E(\{(e, \mathbf{0})\})]$. Intuitively, the reward encourages the agent to build stable, low-energy structures. We evaluate the energy using the fast semi-empirical Parametrized Method 6 (PM6) (Stewart, 2007) as implemented in SPARROW (Husch et al., 2018; Bosia et al., 2020); see Appendix A for details.

An example of a rollout is shown in Fig. 1(b). At the beginning of the episode, the agent observes the initial state $(\mathcal{C}_0, \mathcal{B}_0) \sim \mu_0(s_0)$, e.g., $\mathcal{C}_0 = \emptyset$ and $\mathcal{B}_0 = \mathrm{SOF}_4$ [1]. The agent then iteratively constructs a molecule by placing atoms from the bag onto the canvas until the bag is empty. [2]

2.2 ROTATIONALLY COVARIANT NEURAL NETWORKS

A function $f : X \mapsto Y$ is invariant under a transformation operator $T_g : X \mapsto X$ if $f(T_g[x]) = f(x)$ for all $x \in X$, $g \in G$, where $G$ is a mathematical group. In contrast, $f$ is covariant with respect to $T_g$ if there exists an operator $T'_g : Y \mapsto Y$ such that $f(T_g[x]) = T'_g[f(x)]$. To achieve rotational covariance, it is natural to work with spherical harmonics. They are a set of complex-valued functions $Y_\ell^m : S^2 \mapsto \mathbb{C}$ with $\ell = 0, 1, 2, \ldots$ and $m = -\ell, -\ell+1, \ldots, \ell-1, \ell$ on the unit sphere $S^2$ in $\mathbb{R}^3$.
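As a quick numerical sanity check of these spherical harmonics (a sketch of ours, not part of the paper), SciPy ships the same normalized, Condon-Shortley-phased harmonics; note that `scipy.special.sph_harm` takes arguments in the order (m, l, azimuthal, polar), i.e., swapped relative to the paper's $Y_\ell^m(\vartheta, \varphi)$ convention.

```python
import numpy as np
from scipy.special import sph_harm

def Y(l: int, m: int, theta: float, phi: float) -> complex:
    """Y_l^m(theta, phi) in the paper's convention: theta polar in [0, pi],
    phi azimuthal in [0, 2*pi]. SciPy's sph_harm takes (m, l, azimuthal, polar)."""
    return complex(sph_harm(m, l, phi, theta))

# Well-known closed forms that the definition implies:
#   Y_0^0 = 1 / (2 sqrt(pi))               (constant on the sphere)
#   Y_1^0 = sqrt(3 / (4 pi)) cos(theta)
# plus the conjugation symmetry Y_l^{-m} = (-1)^m conj(Y_l^m).
```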
The first few spherical harmonics are given in Appendix B. They are defined by

$Y_\ell^m(\vartheta, \varphi) = (-1)^m \sqrt{\frac{2\ell+1}{4\pi} \frac{(\ell-m)!}{(\ell+m)!}}\, P_\ell^m(\cos\vartheta)\, e^{im\varphi}, \quad \varphi \in [0, 2\pi], \ \vartheta \in [0, \pi], \quad (1)$

where $P_\ell^m$ denotes the associated normalized Legendre polynomials of the first kind (Bateman, 1953), and each $Y_\ell^m$ is normalized such that $\iint |Y_\ell^m(\vartheta, \varphi)|^2 \sin\vartheta \, d\vartheta \, d\varphi = 1$. Any square-integrable function $f : S^2 \mapsto \mathbb{C}$ can be written as a series expansion in terms of the spherical harmonics,

$f(\tilde{x}) = \sum_{\ell=0}^{\infty} \sum_{m=-\ell}^{\ell} \hat{f}_\ell^m Y_\ell^m(\tilde{x}), \quad (2)$

where $\tilde{x} = (\vartheta, \varphi) \in S^2$. The complex-valued coefficients $\{\hat{f}_\ell^m\}$ are the analogs of Fourier coefficients and are given by $\hat{f}_\ell^m = \int f(\tilde{x})\, Y_\ell^{m*}(\tilde{x})\, \Omega(d\tilde{x})$. Such a function $f$ can be modeled by learning the coefficients $\{\hat{f}_\ell^m\}$ using CORMORANT (Anderson et al., 2019), a neural network architecture for predicting properties of chemical systems that works entirely in Fourier space. A key feature is that each neuron is covariant to rotation but invariant to translation; further, each neuron explicitly corresponds to a subset of atoms in the molecule. The input of CORMORANT is a spherical function $f_0 : S^2 \mapsto \mathbb{C}^d$ and the output is a collection of vectors $\hat{f} = \{\hat{f}_0, \hat{f}_1, \ldots, \hat{f}_L\}$, where each $\hat{f}_\ell \in \mathbb{C}^{\tau \times (2\ell+1)}$ is a rotationally covariant vector with $\tau$ channels. That is, if the input is rotated by $R \in \mathrm{SO}(3)$, then each $\hat{f}_\ell$ transforms as $\hat{f}_\ell \mapsto D^\ell(R) \hat{f}_\ell$, where $D^\ell : \mathrm{SO}(3) \mapsto \mathbb{C}^{(2\ell+1) \times (2\ell+1)}$ are the irreducible representations of $\mathrm{SO}(3)$, also called the Wigner D-matrices.

3 COVARIANT POLICY FOR MOLECULAR DESIGN

An efficient RL agent needs to exploit the symmetries of the molecular design process. Therefore, we require a policy $\pi(a|s)$ with actions $a = (e, x)$ that is covariant under translation and rotation with respect to the position $x$, i.e.
, $x$ should rotate (or translate) accordingly if the atoms on the canvas $\mathcal{C}$ are rotated (or translated). In contrast, the policy needs to be invariant with respect to the element $e$, i.e., the chosen element remains unchanged under such transformations (see Fig. 1(a)). Since learning such a policy is difficult when working directly in global Cartesian coordinates, we instead follow Simm et al. (2020) and use an action representation that is local with respect to an already placed focal atom. If the next atom is placed relative to the focal atom, covariance under translation of $x$ is automatically achieved and only the rotational covariance remains to be dealt with.

As shown in Fig. 2, we model the action $a$ through a sequence of sub-actions: (1) the index $f \in \{1, \ldots, |\mathcal{C}|\}$ of the focal atom around which the next atom is placed [3], (2) the element $e \in \{1, \ldots, N_e\}$ of the next atom from the set of available elements, (3) a distance $d \in \mathbb{R}_+$ between the focal atom and the next atom, and (4) the orientation $\tilde{x} = (\vartheta, \varphi) \in S^2$ of the atom on a unit sphere around the focal atom. Denoting $x_f$ as the position of the focal atom, we obtain the action $a = (e, x)$ by mapping the local coordinates $(\tilde{x}, d, f)$ to global coordinates $x = x_f + d \cdot \tilde{x}$, where $x$ is now covariant under translation and rotation. We choose these sub-actions using the following auto-regressive policy:

$\pi(a|s) = \pi(\tilde{x}, d, e, f \mid s) = p(\tilde{x} \mid d, e, f, s)\, p(d \mid e, f, s)\, p(e \mid f, s)\, p(f \mid s). \quad (3)$

A novel actor-critic neural network architecture that implements this policy is illustrated in Fig. 3. In the following, we discuss its state embedding, actor, and critic networks in more detail.

[1] Shorthand for $\{(\mathrm{S}, 1), (\mathrm{O}, 1), (\mathrm{F}, 4)\}$.
[2] Hereafter, we drop the time index when it is clear from the context.
[3] If the canvas $\mathcal{C}_0$ is empty, the agent selects an element $e_0 \in \mathcal{B}_0$ and places it at the origin, i.e., $a_0 = (e_0, \mathbf{0})$.
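The local-to-global map $x = x_f + d \cdot \tilde{x}$ is simple enough to sketch directly (function names are ours, not the paper's). Because the map is linear in $x_f$ and $\tilde{x}$, rotating both rotates the resulting position, which is exactly the covariance property the policy requires; a translation only shifts $x_f$.

```python
import numpy as np

def orientation(theta: float, phi: float) -> np.ndarray:
    """Unit vector x-tilde on S^2 for polar angle theta and azimuthal angle phi."""
    return np.array([
        np.sin(theta) * np.cos(phi),
        np.sin(theta) * np.sin(phi),
        np.cos(theta),
    ])

def local_to_global(x_f: np.ndarray, d: float, x_tilde: np.ndarray) -> np.ndarray:
    """Map local sub-actions (focal position x_f, distance d, orientation x_tilde)
    to the global position x = x_f + d * x_tilde."""
    return x_f + d * x_tilde
```

For any rotation matrix $R$, `R @ local_to_global(x_f, d, x_tilde)` equals `local_to_global(R @ x_f, d, R @ x_tilde)`, which is the rotational covariance illustrated in Fig. 1(a).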
The paper proposes an actor-critic neural network architecture for the autoregressive generation of 3D molecular structures with reinforcement learning (RL). It builds upon the RL approach by Simm et al. (2020), which makes use of internal coordinates in order to deal with the symmetries that occur when placing atoms in the molecular design process.
SP:32be26cc5561e9335adc2179a3c832258c2a346e
Symmetry-Aware Actor-Critic for 3D Molecular Design
This work presents an approach for 3D molecular design using reinforcement learning that exploits rotational symmetries in molecular conformations. The formulation involves an MDP that selects atoms from a “bag” and positions them in 3D space. The reward function is based on PM6 energies to encourage the generation of low-energy structures. The manuscript is well-written and provides ample context and appropriate references.
Graph Autoencoders with Deconvolutional Networks
1 INTRODUCTION

Autoencoders have demonstrated excellent performance on tasks such as unsupervised representation learning (Bengio, 2009) and de-noising (Vincent et al., 2010). Recently, several studies (Zeiler & Fergus, 2014; Long et al., 2015) have demonstrated that the performance of autoencoders can be further improved by encoding with Convolutional Networks and decoding with Deconvolutional Networks (Zeiler et al., 2010). Notably, Noh et al. (2015) present a novel symmetric architecture that provides a bottom-up mapping from input signals to a latent hierarchical feature space with {convolution, pooling} operations and then maps the latent representation back to the input space with {deconvolution, unpooling} operations. While this architecture has been successful when processing features with structure in Euclidean space (e.g., images), there has recently been surging interest in applying such a framework to non-Euclidean data like graphs. However, extending this autoencoder framework to graph-structured data requires Graph Deconvolutional operations, which remain open-ended and have not been well studied, as opposed to the large body of work that has already been proposed for Graph Convolutional Networks (Defferrard et al., 2016; Kipf & Welling, 2017).

In this paper, we study the characteristics of Graph Deconvolutional Networks (GDNs) and observe de-noising to be the key to effective deconvolutional operations. Therefore, we propose a wavelet-based module (Hammond et al., 2011) that serves as a de-noising mechanism, applied after the signals are reconstructed in the spectral domain (Shuman et al., 2013), for deconvolutional networks. Most GCNs proposed in prior art, e.g., Cheby-GCN (Defferrard et al., 2016) and GCN (Kipf & Welling, 2017), exploit spectral graph convolutions (Shuman et al., 2013) and Chebyshev polynomials (Hammond et al.
, 2011) to retain coarse-grained information and avoid explicit eigendecomposition of the graph Laplacian. Recently, Wu et al. (2019) and Donnat et al. (2018) noticed that GCN acts as a low-pass filter in the spectral domain and retains smoothed representations. Inspired by prior work in the domain of signal deconvolution (Kundur & Hatzinakos, 1996), we propose to design a GDN by using high-pass filters as the counterpart of the low-pass filters embodied in GCNs. Because signal deconvolution is ill-posed by nature, several prior works (Donoho & Johnstone, 1994; Figueiredo & Nowak, 2003) rely on transforming these signals into another domain (e.g., the spectral domain) where the problem can be better posed and resolved. Furthermore, Neelamani et al. (2004) observe that inverse filters in the spectral domain may amplify the noise, and we observe the same phenomenon for GDNs. Therefore, inspired by their proposed hybrid spectral-wavelet method (inverse signal reconstruction in the spectral domain followed by a de-noising step in the wavelet domain), we introduce a spectral-wavelet GDN to decode the smoothed representations into the input graph signals. The proposed spectral-wavelet GDN employs spectral graph convolutions with a high-pass filter to obtain inverted signals and then de-noises the inverted signals in the wavelet domain. In addition, we apply a Maclaurin series as a fast approximation technique to compute both high-pass filters and wavelet kernels (Donnat et al., 2018). With the proposed spectral-wavelet GDN, we further propose a graph autoencoder (GAE) framework that resembles the symmetric fashion of such architectures (Noh et al., 2015). We then evaluate the effectiveness of the proposed GAE framework on three popular and important tasks: unsupervised graph-level representation (Sun et al., 2020), social recommendation (Jamali & Ester, 2010), and graph generation.
In the first task, the proposed GAE outperforms the state of the art on graph classification in an unsupervised fashion, along with a significant improvement in running time. In the second task, the performance of our proposed GAE is on par with the state of the art in recommendation accuracy; at the same time, the proposed GAE demonstrates strong robustness against rating noise and achieves the best recommendation diversification (Ziegler et al., 2005). In the third task, our proposed GDN can enhance the generation performance of popular variational autoencoder frameworks, including VGAE (Kipf & Welling, 2016) and Graphite (Grover et al., 2019).

2 RELATED WORK

Deconvolutional networks. The area of signal deconvolution (Kundur & Hatzinakos, 1996) has a long history in the signal processing community and concerns the process of estimating the true signals given degraded or smoothed signal characteristics (Banham & Katsaggelos, 1997). Later deep learning studies (Zeiler et al., 2010; Noh et al., 2015) consider deconvolutional networks as the opposite operation to Convolutional Neural Networks (CNNs) and have mainly focused on Euclidean structures, e.g., images. Some work (Dumoulin & Visin, 2016) notes that Zeiler et al. (2010) is in essence a transposed convolution network, as it differs from what is used in the signal processing community. For deconvolutional networks on non-Euclidean structures like graphs, the literature is still sparse. Feizi et al. (2013) propose network deconvolution as inferring the true network given a partially observed structure. It relies on explicit eigendecomposition and cannot be used as the counterpart of GCN. Yang & Segarra (2018) formulate deconvolution as a pre-processing step on the observed signals in order to improve classification accuracy. Zhang et al. (2020) consider recovering graph signals from the latent representation.
However, it simply adopts the filter design used in GCN and sheds little light on the internal operation of a GDN.

Graph autoencoders. Since the introduction of Graph Neural Networks (GNNs) (Kipf & Welling, 2017; Defferrard et al., 2016) and autoencoders (AEs), many studies (Kipf & Welling, 2016; Grover et al., 2019) have used GNNs and AEs to encode to and decode from latent representations. Recently, graph pooling has emerged as a research topic that also contributes to the development of graph autoencoders. Common practices include DIFFPOOL (Ying et al., 2018), SAGPool (Lee et al., 2019), and MinCutPool (Bianchi et al., 2020). Although some encouraging progress has been achieved, there is still no work on graph deconvolution that can up-sample latent feature maps to restore their original resolution (Gao & Ji, 2019). In this regard, current graph autoencoders bypass the difficulty via (1) non-parameterized decoders (Kipf & Welling, 2016; Deng et al., 2020; Li et al., 2020), (2) GCN decoders (Grover et al., 2019; Gao & Ji, 2019), and (3) multilayer perceptron (MLP) decoders (Simonovsky & Komodakis, 2018).

3 GRAPH AUTOENCODER FRAMEWORK

Formally, we are given an undirected, unweighted graph $G = (V, A, X)$. $V$ is the node set and $N = |V|$ denotes the number of nodes. The adjacency matrix $A \in \mathbb{R}^{N \times N}$ represents the graph structure. The feature matrix $X \in \mathbb{R}^{N \times d}$ represents the node attributes. Our goal is to learn an encoder and a decoder to map between the space of graphs $G$ and their latent factors $G^{pool} = (V^{pool}, A^{pool}, Z)$. We show a schematic diagram of our proposed framework in Figure 1.

3.1 ENCODER

Our encoder consists of several layers of Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017) and a pooling layer, to produce coarser representations of the input graphs.
Convolution. The convolutional layers are used to derive smoothed node representations, such that nodes that are similar in topological space are close in Euclidean space:

$H = \mathrm{GCN}(A, X), \quad (1)$

where $H \in \mathbb{R}^{N \times v}$ denotes the smoothed node representations. Specifically, Wu et al. (2019) show that GCN is a low-pass filter in the spectral domain with $g_c(\lambda_i) = 1 - \lambda_i$, where $\{\lambda_i\}_{i=1}^N$ are the eigenvalues of the normalized Laplacian matrix $L_{sym} = D^{-\frac{1}{2}} L D^{-\frac{1}{2}}$, and $L$ and $D$ are the Laplacian and degree matrices of the input graph $A$, respectively.

Pooling. We follow Lee et al. (2019) and Li et al. (2019) by using the self-attention mechanism to pool the fine-grained graph into coarse-grained representations,

$S = \mathrm{softmax}(\tanh(H W_1) W_2), \quad (2)$

where $W_1 \in \mathbb{R}^{v \times d}$ and $W_2 \in \mathbb{R}^{d \times K}$ are two weight matrices; $W_1$ is used for feature transformation and $W_2$ is used to infer the membership of each node with respect to each cluster $V_k$. Similar to Ying et al. (2018), we compute the coarsened graph structure $A^{pool} \in \mathbb{R}^{K \times K}$ and feature representation $Z \in \mathbb{R}^{K \times v}$ as follows:

$Z = S^\top H; \quad A^{pool} = S^\top A S. \quad (3)$

Note that $(Z, A^{pool})$ is size invariant and permutation invariant, as pointed out by Li et al. (2019).

3.2 DECODER

Our decoder consists of an unpooling layer and several layers of Graph Deconvolutional Networks (GDNs), to produce fine-grained graphs from the encoded $G^{pool}$.

Unpooling. We follow Bianchi et al. (2020) to upscale the coarsened graph back to its original size,

$H' = S Z; \quad A' = S A^{pool} S^\top. \quad (4)$

Deconvolution. The deconvolutional layers are used to recover the original graph features given smoothed node representations,

$X' = \mathrm{GDN}(A', H'), \quad (5)$

where $X' \in \mathbb{R}^{N \times d}$ denotes the recovered graph features. We further discuss our design of the GDN in Section 4.

3.3 THE LOSS FUNCTION

The overall reconstruction loss is a weighted sum of a structure reconstruction loss and a feature reconstruction loss.
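The pooling and unpooling steps in Eqs. (2)-(4) reduce to a few matrix products. The following NumPy sketch (ours; the random adjacency and untrained weights are placeholders, not learned quantities) shows the shapes involved.

```python
import numpy as np

rng = np.random.default_rng(0)
N, v, d, K = 6, 8, 4, 2                # nodes, feature dims, clusters

A = rng.integers(0, 2, (N, N))
A = np.triu(A, 1)
A = A + A.T                            # symmetric, unweighted adjacency
H = rng.normal(size=(N, v))            # smoothed node representations, Eq. (1)
W1 = rng.normal(size=(v, d))
W2 = rng.normal(size=(d, K))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

S = softmax(np.tanh(H @ W1) @ W2)      # Eq. (2): soft cluster memberships, N x K
Z = S.T @ H                            # Eq. (3): pooled features, K x v
A_pool = S.T @ A @ S                   # Eq. (3): pooled structure, K x K

H_up = S @ Z                           # Eq. (4): unpooled features, N x v
A_up = S @ A_pool @ S.T                # Eq. (4): unpooled structure, N x N
```

Since $A$ is symmetric, both $A^{pool} = S^\top A S$ and its unpooled version $S A^{pool} S^\top$ remain symmetric, so the decoder always works with a valid undirected structure.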
$\mathcal{L} = \lambda_A f(A, A') + \lambda_X f(X, X'), \quad (6)$

where $f(\cdot, \cdot)$ denotes a differentiable distance metric, e.g., $f(\cdot, \cdot) = \mathrm{MSE}(\cdot, \cdot)$ for continuous input, where MSE denotes the mean squared error.

4 GRAPH DECONVOLUTIONAL NETWORKS

In this section, we present our design of Graph Deconvolutional Networks (GDNs). A naive deconvolutional network can be obtained using the inverse operator $g_c^{-1}(\lambda_i) = \frac{1}{1 - \lambda_i}$ in the spectral domain. Unfortunately, this inverse operation yields a high-pass filter and may amplify the noise (Donoho & Johnstone, 1994; Figueiredo & Nowak, 2003). In this regard, we propose an efficient, hybrid spectral-wavelet deconvolutional network that performs inverse signal recovery in the spectral domain first, and then conducts a de-noising step in the wavelet domain to remove the amplified noise (Neelamani et al., 2004).

4.1 INVERSE OF GCN

In order to recover graph signals from the latent representation computed by the GCN encoder, we propose a naive approach: an inverse GCN with the inverse filter $g_c^{-1}(\lambda_i) = \frac{1}{1 - \lambda_i}$ in the spectral domain. The spectral graph convolution on a signal $x \in \mathbb{R}^N$ is defined as:

$g_c^{-1} * x = U \,\mathrm{diag}(g_c^{-1}(\lambda_1), \ldots, g_c^{-1}(\lambda_N))\, U^\top x = U g_c^{-1}(\Lambda) U^\top x, \quad (7)$

where $U$ is the eigenvector matrix of the normalized graph Laplacian $L_{sym} = U \Lambda U^\top$. Then, we apply a Maclaurin series approximation to $g_c^{-1}(\Lambda) = \sum_{n=0}^{\infty} \Lambda^n$ and obtain a fast algorithm:

$g_c^{-1} * x = U \sum_{n=0}^{\infty} \Lambda^n U^\top x = \sum_{n=0}^{\infty} L_{sym}^n x. \quad (8)$

As in GCN (Kipf & Welling, 2017), when the first-order approximation is used to address overfitting, we obtain a spectral filter with $g_c^{-1}(\lambda_i) = 1 + \lambda_i$, which is clearly a high-pass filter. Following GCN (Kipf & Welling, 2017), a feature transformation is applied to increase filter strength.
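The truncated Maclaurin series of Eq. (8) can be sketched as repeated multiplication by $L_{sym}$ (helper names are ours). One caveat worth noting: the geometric series for $\frac{1}{1-\lambda}$ converges only for $|\lambda| < 1$, while the eigenvalues of $L_{sym}$ lie in $[0, 2]$, so the truncation is a heuristic; its first-order cut reproduces exactly the $1 + \lambda$ filter of the preceding paragraph.

```python
import numpy as np

def normalized_laplacian(A: np.ndarray) -> np.ndarray:
    """L_sym = I - D^{-1/2} A D^{-1/2}, equal to D^{-1/2} L D^{-1/2} for L = D - A.
    Isolated nodes (degree 0) are mapped to 0 to avoid division by zero."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg.astype(float) ** -0.5, 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def inverse_filter(A: np.ndarray, x: np.ndarray, n_terms: int) -> np.ndarray:
    """Truncation of Eq. (8): g_c^{-1} * x ~ sum_{n=0}^{n_terms-1} L_sym^n x,
    computed with matrix-vector products only (no eigendecomposition)."""
    L = normalized_laplacian(A)
    out = np.zeros_like(x)
    term = x.copy()
    for _ in range(n_terms):
        out = out + term
        term = L @ term
    return out
```

With `n_terms=1` the filter is the identity, and with `n_terms=2` it equals $(I + L_{sym})x$, the first-order high-pass filter.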
Recalling the GDN in Section 3.2, the inverse version of GCN can be written as:

$M = (I_N + L'_{sym}) H' W_3, \quad (9)$

where $L'_{sym}$ is the normalized graph Laplacian matrix corresponding to $A'$, $H'$ is the smoothed representations, and $W_3$ is the parameter set to be learned. Compared with directly using GCN for signal reconstruction as in Zhang et al. (2020), the proposed inverse GCN demonstrates its efficacy in recovering the high-frequency signals of the graph, as shown in Figure 2(b) and (d). We further discuss this point in Section 4.3.
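Eq. (9) fits in one self-contained function (a sketch of ours, with an arbitrary untrained $W_3$). As a sanity check on its high-pass behavior: a signal in the nullspace of $L'_{sym}$, the "smoothest" signal on the graph, passes through unamplified, while a localized spike is sharpened.

```python
import numpy as np

def inverse_gcn(A_prime: np.ndarray, H_prime: np.ndarray, W3: np.ndarray) -> np.ndarray:
    """Eq. (9): M = (I_N + L'_sym) H' W3, i.e. the first-order high-pass filter
    g_c^{-1}(lambda) = 1 + lambda followed by a feature transformation."""
    deg = A_prime.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg.astype(float) ** -0.5, 0.0)
    L_sym = np.eye(len(A_prime)) - d_inv_sqrt[:, None] * A_prime * d_inv_sqrt[None, :]
    return (np.eye(len(A_prime)) + L_sym) @ H_prime @ W3
```

On a 3-node path graph, the degree-weighted constant signal $D^{1/2}\mathbf{1} \propto (1, \sqrt{2}, 1)$ satisfies $L_{sym} v = 0$ and is left unchanged, whereas a one-hot signal is amplified at its own node and pushed negative at the neighbor.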
The main contribution of this paper is a graph deconvolutional network that combines inverse filters in the spectral domain with de-noising layers in the wavelet domain. Graph autoencoders are then built from the graph convolutional and graph deconvolutional networks. Extensive experiments against many previous methods demonstrate the effectiveness of the proposed networks.
SP:00af76b0f241598c5d3c11fc330d6426a1dcd473
Graph Autoencoders with Deconvolutional Networks
1 INTRODUCTION. Autoencoders have demonstrated excellent performance on tasks such as unsupervised representation learning (Bengio, 2009) and de-noising (Vincent et al., 2010). Recently, several studies (Zeiler & Fergus, 2014; Long et al., 2015) have demonstrated that the performance of autoencoders can be further improved by encoding with Convolutional Networks and decoding with Deconvolutional Networks (Zeiler et al., 2010). Notably, Noh et al. (2015) present a symmetric architecture that provides a bottom-up mapping from input signals to a latent hierarchical feature space with {convolution, pooling} operations and then maps the latent representation back to the input space with {deconvolution, unpooling} operations. While this architecture has been successful for features with structure in Euclidean space (e.g., images), there has recently been surging interest in applying such a framework to non-Euclidean data like graphs. However, extending this autoencoder framework to graph-structured data requires Graph Deconvolutional operations, which remain an open problem and have not been well studied, as opposed to the large body of work on Graph Convolutional Networks (Defferrard et al., 2016; Kipf & Welling, 2017). In this paper, we study the characteristics of Graph Deconvolutional Networks (GDNs) and observe de-noising to be the key to effective deconvolutional operations. Therefore, we propose a wavelet-based module (Hammond et al., 2011) that serves as a de-noising mechanism after the signals are reconstructed in the spectral domain (Shuman et al., 2013) for deconvolutional networks. Most GCNs proposed by prior work, e.g., Cheby-GCN (Defferrard et al., 2016) and GCN (Kipf & Welling, 2017), exploit spectral graph convolutions (Shuman et al., 2013) and Chebyshev polynomials (Hammond et al., 2011) to retain coarse-grained information and avoid explicit eigendecomposition of the graph Laplacian. Recently, Wu et al. (2019) and Donnat et al. (2018) noticed that GCN acts as a low pass filter in the spectral domain and retains smoothed representations. Inspired by prior work on signal deconvolution (Kundur & Hatzinakos, 1996), we propose to design a GDN using high pass filters as the counterpart of the low pass filters embodied in GCNs. Because signal deconvolution is ill-posed by nature, several prior works (Donoho & Johnstone, 1994; Figueiredo & Nowak, 2003) rely on transforming the signals into another domain (e.g., the spectral domain) where the problem is better posed and can be resolved. Furthermore, Neelamani et al. (2004) observe that inverse filters in the spectral domain may amplify noise, and we observe the same phenomenon for GDNs. Therefore, inspired by their hybrid spectral-wavelet method, which performs inverse signal reconstruction in the spectral domain followed by a de-noising step in the wavelet domain, we introduce a spectral-wavelet GDN to decode the smoothed representations into the input graph signals. The proposed spectral-wavelet GDN employs spectral graph convolutions with a high pass filter to recover the signals and then de-noises them in the wavelet domain. In addition, we apply a Maclaurin series as a fast approximation technique to compute both the high pass filters and the wavelet kernels (Donnat et al., 2018). With the proposed spectral-wavelet GDN, we further propose a graph autoencoder (GAE) framework that resembles the symmetric architectures of Noh et al. (2015). We then evaluate the effectiveness of the proposed GAE framework on three popular and important tasks: unsupervised graph-level representation (Sun et al., 2020), social recommendation (Jamali & Ester, 2010), and graph generation.
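To make the low-pass/high-pass intuition above concrete, the following is a small numpy sketch of our own (not the paper's code): it builds the normalized Laplacian of a toy path graph and compares the GCN low-pass response g(λ) = 1 − λ with the first-order high-pass inverse 1 + λ used later in the paper.

```python
import numpy as np

# Toy illustration: eigenvalues of the normalized Laplacian of a path graph,
# the GCN low-pass response g(lam) = 1 - lam, and the first-order
# high-pass inverse response 1 + lam.
N = 6
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
deg = A.sum(axis=1)
# L_sym = I - D^{-1/2} A D^{-1/2}; eigenvalues lie in [0, 2]
L_sym = np.eye(N) - A / np.sqrt(np.outer(deg, deg))

lam = np.linalg.eigvalsh(L_sym)      # sorted ascending
low_pass = 1.0 - lam                 # GCN: attenuates high-frequency modes
high_pass = 1.0 + lam                # inverse GCN: amplifies them instead

# The exact inverse 1 / (1 - lam) diverges near lam = 1, which is why the
# high-pass recovery is followed by wavelet-domain de-noising.
```

The monotone shapes of the two responses (decreasing vs. increasing in λ) are exactly the low-pass/high-pass pairing the paper exploits.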
In the first task, the proposed GAE outperforms the state of the art on graph classification in an unsupervised fashion, along with a significant improvement in running time. In the second task, the performance of our proposed GAE is on par with the state of the art in recommendation accuracy; meanwhile, the proposed GAE demonstrates strong robustness against rating noise and achieves the best recommendation diversification (Ziegler et al., 2005). In the third task, our proposed GDN can enhance the generation performance of popular variational autoencoder frameworks including VGAE (Kipf & Welling, 2016) and Graphite (Grover et al., 2019). 2 RELATED WORK. Deconvolutional networks. Signal deconvolution (Kundur & Hatzinakos, 1996) has a long history in the signal processing community and concerns estimating the true signals given degraded or smoothed signal characteristics (Banham & Katsaggelos, 1997). Later deep learning studies (Zeiler et al., 2010; Noh et al., 2015) consider deconvolutional networks as the opposite operation of Convolutional Neural Networks (CNNs) and have mainly focused on Euclidean structures, e.g., images. Dumoulin & Visin (2016) note that the network of Zeiler et al. (2010) is in essence a transposed convolution, as it differs from the deconvolution used in the signal processing community. For deconvolutional networks on non-Euclidean structures like graphs, the literature is still sparse. Feizi et al. (2013) propose network deconvolution as inferring the true network given a partially observed structure. It relies on explicit eigendecomposition and cannot be used as the counterpart of GCN. Yang & Segarra (2018) formulate deconvolution as a pre-processing step on the observed signals, in order to improve classification accuracy. Zhang et al. (2020) consider recovering graph signals from the latent representation.
However, it simply adopts the filter design of GCN and sheds little light on the internal operation of GDNs. Graph autoencoders. Since the introduction of Graph Neural Networks (GNNs) (Kipf & Welling, 2017; Defferrard et al., 2016) and autoencoders (AEs), many studies (Kipf & Welling, 2016; Grover et al., 2019) have used GNNs and AEs to encode to and decode from latent representations. Recently, graph pooling has emerged as a research topic that also contributes to the development of graph autoencoders; common practices include DIFFPOOL (Ying et al., 2018), SAGPool (Lee et al., 2019), and MinCutPool (Bianchi et al., 2020). Although some encouraging progress has been achieved, there is still no graph deconvolution operation that can up-sample latent feature maps to their original resolution (Gao & Ji, 2019). In this regard, current graph autoencoders bypass the difficulty via (1) non-parameterized decoders (Kipf & Welling, 2016; Deng et al., 2020; Li et al., 2020), (2) GCN decoders (Grover et al., 2019; Gao & Ji, 2019), and (3) multilayer perceptron (MLP) decoders (Simonovsky & Komodakis, 2018). 3 GRAPH AUTOENCODER FRAMEWORK. Formally, we are given an undirected, unweighted graph G = (V, A, X). V is the node set and N = |V| denotes the number of nodes. The adjacency matrix A ∈ R^{N×N} represents the graph structure. The feature matrix X ∈ R^{N×d} represents the node attributes. Our goal is to learn an encoder and a decoder to map between the space of graph G and its latent factors G_pool = (V_pool, A_pool, Z). We show a schematic diagram of our proposed framework in Figure 1. 3.1 ENCODER. Our encoder consists of several layers of Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017) and a pooling layer, to produce coarser representations of the input graphs.
Convolution. The convolutional layers derive smoothed node representations, such that nodes that are similar in topological space are close in Euclidean space: H = GCN(A, X), (1) where H ∈ R^{N×v} denotes the smoothed node representations. Specifically, Wu et al. (2019) show that GCN is a low pass filter in the spectral domain with g_c(λ_i) = 1 − λ_i, where {λ_i}_{i=1}^N are the eigenvalues of the normalized Laplacian matrix L_sym = D^{-1/2} L D^{-1/2}, and L and D are the Laplacian and degree matrices of the input graph A, respectively. Pooling. We follow Lee et al. (2019) and Li et al. (2019) in using a self-attention mechanism to pool the fine-grained graph into coarse-grained representations: S = softmax(tanh(H W_1) W_2), (2) where W_1 ∈ R^{v×d} and W_2 ∈ R^{d×K} are two weight matrices; W_1 is used for feature transformation and W_2 is used to infer the membership of each node with respect to each cluster V_k. Similar to Ying et al. (2018), we compute the coarsened graph structure A_pool ∈ R^{K×K} and feature representation Z ∈ R^{K×v} as follows: Z = S^T H; A_pool = S^T A S. (3) Note that (Z, A_pool) is size invariant and permutation invariant, as pointed out by Li et al. (2019). 3.2 DECODER. Our decoder consists of an unpooling layer and several layers of Graph Deconvolutional Networks (GDNs), to produce fine-grained graphs from the encoded G_pool. Unpooling. We follow Bianchi et al. (2020) to upscale the coarsened graph back to its original size: H' = S Z; A' = S A_pool S^T. (4) Deconvolution. The deconvolutional layers recover the original graph features given the smoothed node representations: X' = GDN(A', H'), (5) where X' ∈ R^{N×d} denotes the recovered graph features. We shall further discuss our design of the GDN in Section 4. 3.3 THE LOSS FUNCTION. The overall reconstruction loss is a weighted sum of a structure reconstruction loss and a feature reconstruction loss.
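The attention pooling and unpooling maps in Eqs. (2)-(4) above can be sketched directly in numpy. This is an illustrative sketch with random weights (W_1, W_2 and the graph are placeholders, not learned parameters from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, v, d, K = 8, 4, 3, 2            # nodes, feature dim, attention dim, clusters

A = (rng.random((N, N)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T     # symmetric adjacency, no self-loops
H = rng.standard_normal((N, v))    # smoothed node features (output of GCN)
W1 = rng.standard_normal((v, d))
W2 = rng.standard_normal((d, K))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Pooling: soft cluster assignments, then coarsened features and structure
S = softmax(np.tanh(H @ W1) @ W2)  # (N, K), each row sums to 1
Z = S.T @ H                        # (K, v) coarse features
A_pool = S.T @ A @ S               # (K, K) coarse structure

# Unpooling: lift the coarse graph back to N nodes
H_up = S @ Z                       # (N, v)
A_up = S @ A_pool @ S.T            # (N, N)
```

Note that A_pool inherits symmetry from A, and the same assignment matrix S drives both directions, matching the symmetric encoder/decoder design.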
L = λ_A f(A, A') + λ_X f(X, X'), (6) where f(·,·) denotes a differentiable distance metric, e.g., f(·,·) = MSE(·,·) for continuous input, where MSE(·,·) denotes mean squared error. 4 GRAPH DECONVOLUTIONAL NETWORKS. In this section, we present our design of Graph Deconvolutional Networks (GDNs). A naive deconvolutional net can be obtained using the inverse operator g_c^{-1}(λ_i) = 1/(1 − λ_i) in the spectral domain. Unfortunately, the inverse operation results in a high pass filter and may amplify noise (Donoho & Johnstone, 1994; Figueiredo & Nowak, 2003). In this regard, we propose an efficient, hybrid spectral-wavelet deconvolutional network that performs inverse signal recovery in the spectral domain first, and then conducts a de-noising step in the wavelet domain to remove the amplified noise (Neelamani et al., 2004). 4.1 INVERSE OF GCN. In order to recover graph signals from the latent representation computed by the GCN encoder, we propose a naive approach, an inverse GCN with the inverse filter g_c^{-1}(λ_i) = 1/(1 − λ_i) in the spectral domain. The spectral graph convolution on a signal x ∈ R^N is defined as: g_c^{-1} * x = U diag(g_c^{-1}(λ_1), ..., g_c^{-1}(λ_N)) U^T x = U g_c^{-1}(Λ) U^T x, (7) where U is the eigenvector matrix of the normalized graph Laplacian L_sym = U Λ U^T. Then, we apply the Maclaurin series approximation g_c^{-1}(Λ) = Σ_{n=0}^∞ Λ^n and obtain a fast algorithm: g_c^{-1} * x = U (Σ_{n=0}^∞ Λ^n) U^T x = Σ_{n=0}^∞ L_sym^n x. (8) As in GCN (Kipf & Welling, 2017), when the first-order approximation is used to address overfitting, we derive a spectral filter g_c^{-1}(λ_i) = 1 + λ_i, which is clearly a high pass filter. Following GCN (Kipf & Welling, 2017), a feature transformation is applied to increase filter strength.
Recapping the GDN in Section 3.2, the inverse version of GCN can be written as: M = (I_N + L'_sym) H' W_3, (9) where L'_sym is the normalized graph Laplacian matrix corresponding to A', H' is the smoothed representation, and W_3 is the parameter set to be learned. Compared with directly using a GCN for signal reconstruction as in Zhang et al. (2020), the proposed inverse GCN demonstrates its efficacy in recovering the high frequency signals of the graph, as shown in Figure 2 (b) and (d). We shall further discuss this point in Section 4.3.
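As a quick numerical check of the Maclaurin expansion 1/(1 − λ) = Σ_n λ^n behind Eq. (8), the sketch below verifies that the truncated matrix series Σ_{n=0}^{K} L^n approaches the exact inverse filter (I − L)^{-1}. Note one assumption: we rescale L_sym so that its spectral radius is below 1 and the series converges, whereas the paper instead truncates the series to first order.

```python
import numpy as np

# Build the normalized Laplacian of a small path graph.
N = 5
A = np.zeros((N, N))
for i in range(N - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
deg = A.sum(axis=1)
L_sym = np.eye(N) - A / np.sqrt(np.outer(deg, deg))

# Rescale so eigenvalues lie in [0, 0.8) and the geometric series converges.
L = 0.4 * L_sym
exact = np.linalg.inv(np.eye(N) - L)                      # (I - L)^{-1}
approx = sum(np.linalg.matrix_power(L, n) for n in range(30))
err = np.abs(exact - approx).max()                         # small truncation error
```

The error shrinks geometrically with the truncation order, which is what justifies using only the first-order term (I + L) as a cheap high pass filter.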
The authors propose graph deconvolutional networks (GDNs) and employ them to learn graph embeddings in an encoder-decoder framework. The method performs inverse signal recovery in the spectral domain and then conducts a de-noising step in the wavelet domain to remove the amplified noise. The proposed method outperforms baseline methods on graph classification and social recommendation. However, there are some issues that should be resolved.
SP:00af76b0f241598c5d3c11fc330d6426a1dcd473
AutoLRS: Automatic Learning-Rate Schedule by Bayesian Optimization on the Fly
The learning rate (LR) schedule is one of the most important hyper-parameters needing careful tuning when training DNNs. However, it is also one of the least automated parts of machine learning systems and usually costs significant manual effort and compute. Though there are pre-defined LR schedules and optimizers with adaptive LR, they introduce new hyperparameters that need to be tuned separately for different tasks/datasets. In this paper, we consider the question: can we automatically tune the LR over the course of training without human involvement? We propose an efficient method, AutoLRS, which automatically optimizes the LR for each training stage by modeling training dynamics. AutoLRS aims to find an LR applied to every τ steps that minimizes the resulting validation loss. We solve this black-box optimization on the fly by Bayesian optimization (BO). However, collecting training instances for BO requires a system to evaluate each LR queried by BO's acquisition function for τ steps, which is prohibitively expensive in practice. Instead, we apply each candidate LR for only τ′ ≪ τ steps and train an exponential model to predict the validation loss after τ steps. This mutual-training process between BO and the loss-prediction model allows us to limit the training steps invested in the BO search. We demonstrate the advantages and the generality of AutoLRS through extensive experiments training DNNs for tasks from diverse domains with different optimizers. The LR schedules auto-generated by AutoLRS lead to speedups of 1.22×, 1.43×, and 1.5× when training ResNet-50, Transformer, and BERT, respectively, compared to the LR schedules in their original papers, and an average speedup of 1.31× over state-of-the-art heavily-tuned LR schedules. 1 INTRODUCTION.
In the regime of deep learning, the success of training largely depends on the choice of the learning rate (LR) schedule, since most optimizers will have difficulty traversing a non-smooth and non-convex loss landscape with multiple local minima and possibly saddle points (Kawaguchi, 2016; Jin et al., 2017; Goodfellow et al., 2016; Li et al., 2018a). To achieve stable and fast convergence towards a solution with good generalization performance, one has to tune the LR schedule carefully for different tasks (Nar & Sastry, 2018; Jastrzębski et al., 2017). This tuning is usually non-trivial and requires many trial-and-error iterations that are computationally expensive. Moreover, the randomness of the widely-used mini-batch stochastic gradient descent (SGD) may introduce more uncertainty and difficulty into the tuning process. For the same reasons, it is also hard to directly formulate the search for the LR schedule as a well-posed optimization problem and address it through standard optimization. The broadly-adopted strategy is to either pick one from a family of pre-defined LR schedules or apply an optimizer that has a built-in mechanism for changing the LR adaptively. However, we have a limited number of choices for pre-defined LR schedules, most of which are simple functions such as exponential or cosine decay and thus cannot perfectly align with the non-smooth loss landscape. The latter set of adaptive optimizers, e.g., Adam (Kingma & Ba, 2015) and Adadelta (Zeiler, 2012), are extended from convex optimization and rely on strong assumptions for their convergence properties to hold. Moreover, the methods in both categories introduce new hyper-parameters that have to be tuned separately for different tasks or datasets, requiring significant human involvement. In this paper, we study the question: can we automatically tune the LR over the course of training without human involvement? At the beginning of every τ steps (i.e.
, a "stage" in our method), we seek to identify an LR that optimizes the validation loss (i.e., an empirical estimate of the generalization error) at the end of the stage. To do so, we employ Bayesian optimization (BO), which treats the validation loss as a black-box function of the LR. BO simultaneously updates a posterior estimate of the black-box function and searches for the best LR with respect to the posterior. This approach is, however, computationally expensive, since estimating the posterior needs many (input, output) instances of the function, and acquiring each instance costs τ steps of training. We therefore develop a simple yet efficient approximation: for every LR that BO decides to evaluate, we train the model using that LR for only τ′ ≪ τ steps and use the validation losses over the τ′ steps to train a time-series forecasting model that predicts the validation loss after τ steps. As we will show later, an exponential model suffices to produce accurate predictions when using a small τ′ = τ/10. Then, AutoLRS can allow BO to explore ten different LRs in each stage and still bound the total running time to approximately twice the training cost associated with the generated schedule, i.e., the time spent finding the stage-specific LRs is roughly equal to the time spent training the model with the identified LRs. AutoLRS does not depend on a pre-defined LR schedule, dataset, or specific task and is compatible with almost all optimizers. Hence, it can be generally deployed across a broad range of ML tasks without much human involvement or expensive tuning over choices of LR schedules and their hyperparameters. Moreover, since it directly minimizes the validation loss, it not only accelerates convergence but also improves generalization compared to minimizing only the training loss. Furthermore, AutoLRS only needs to update two extremely light-weight models, i.e.
, the BO posterior and the exponential forecasting model, and it is efficient in exploring the loss landscape. Hence, it does not incur notable extra cost in either memory or computation. Note that AutoLRS searches for better LRs based on the training dynamics, which can be seen as a form of self-supervision. The interaction between BO and the forecasting model is an example of mutual learning, where one produces training data for the other. In experiments, we apply AutoLRS to train three representative DNNs widely used in practice: ResNet-50 (He et al., 2016a) on ImageNet classification (Russakovsky et al., 2015), and Transformer (Vaswani et al., 2017) and BERT (Devlin et al., 2019) for NLP tasks. Though these models have been extensively studied and have hand-tuned LR schedules, the LR schedules computed by AutoLRS are faster than the original, hand-tuned LR schedules by 1.22×, 1.43×, and 1.5× for training ResNet-50, Transformer, and BERT, respectively, in terms of the training steps used to update the DNN (i.e., excluding the costs of the LR/hyperparameter search), while achieving test-set performance better than or on par with state-of-the-art results. We also carefully hand-tuned two state-of-the-art learning rate schedules, CLR (Smith, 2017) and SGDR (Loshchilov & Hutter, 2017), and conducted more than ten experiments with different CLR/SGDR hyperparameters on each model. AutoLRS still has an average speedup of 1.29× and 1.34× across the three models, in terms of training steps, compared to the best CLR and SGDR LR schedules, respectively. The AutoLRS implementation is available at https://github.com/YuchenJin/autolrs. 2 RELATED WORK. Learning rate scheduling: In contrast to traditional LR schedules with a monotone decreasing sequence of LRs and multi-step LR schedules, a recent class of LR schedules applies multiple cycles of LR decay.
Cyclical Learning Rate (CLR) changes the LR between a maximal LR (η_max) and a minimal LR (η_min) at a pre-defined frequency and achieves faster convergence for some DNNs (Smith, 2017). The approach requires an "LR range test" to estimate the minimal and maximal LR: it trains the model with a linearly-increasing LR between a low LR and a high LR, and finds the LR range ([η_min, η_max]) over which the training loss decreases. The authors proposed three variants of CLR: triangular2, which halves the maximum LR bound after each cycle; exp_range, which exponentially reduces the maximum LR bound after each cycle; and 1cycle, which contains only one triangular cycle (Smith, 2018). Similar to CLR, Stochastic Gradient Descent with Warm Restarts (SGDR) restarts the LR and then applies cosine annealing/decay at a pre-defined frequency (Loshchilov & Hutter, 2017). Neither CLR nor SGDR is automatic, because both are quite sensitive to their hyperparameters, which require careful hand-tuning. CLR and SGDR may even cause undesirable divergence of the loss during training under suboptimal hyperparameters (see §5). Learning rate adaptation with hypergradient descent: Aiming at the same goal of automatically tuning the LR, hypergradient-based techniques (Almeida et al., 1998; Franceschi et al., 2017; Baydin et al., 2018; Donini et al., 2020) optimize the LR schedule by applying gradient descent on the objective function w.r.t. the LR during training. In addition to the initial value of the regular LR, this introduces an additional hypergradient LR whose initial value is another hyperparameter to be specified. We experimentally show that this technique is subject to overfitting, is quite sensitive to its two hyperparameters, and is unable to match state-of-the-art test-set performance on the models we test (§A.5.1). We also compare its performance against AutoLRS (§A.5.2).
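The two schedule families described above are easy to state in code. The following is an illustrative sketch with hypothetical hyperparameter values (η_min, η_max, and cycle lengths are placeholders, not tuned settings from any experiment):

```python
import math

def clr_triangular2(step, eta_min=1e-4, eta_max=1e-1, half_cycle=1000):
    """CLR triangular2: linear up/down between eta_min and the current peak,
    with the peak amplitude halved after each full cycle."""
    cycle = step // (2 * half_cycle)
    x = abs(step / half_cycle - 2 * cycle - 1)   # 1 at cycle ends, 0 at peak
    peak = (eta_max - eta_min) / (2 ** cycle)    # halve the peak every cycle
    return eta_min + peak * max(0.0, 1.0 - x)

def sgdr(step, eta_min=1e-4, eta_max=1e-1, period=1000):
    """SGDR: cosine annealing from eta_max to eta_min, warm-restarted
    every `period` steps."""
    t = step % period                            # position since last restart
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t / period))
```

Note that both schedules are fully determined ahead of time by their hyperparameters, which is exactly the hand-tuning burden AutoLRS aims to remove.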
DNN hyperparameter optimization: Automatic hyperparameter search for DNNs has been broadly studied in recent years. When applied to learning rates, these methods determine an optimized LR value that is kept constant (or constrained to a pre-defined shape) throughout training, as opposed to determining an LR schedule. They can be primarily categorized into Bayesian optimization based approaches (Hutter et al., 2011; Snoek et al., 2012; Bergstra et al., 2013), bandit-based solutions (Li et al., 2017; 2018b), hybrid approaches that combine bandit-based and Bayesian optimization based approaches (Falkner et al., 2018; Zela et al., 2018), and population-based methods (Jaderberg et al., 2017; Parker-Holder et al., 2020). It might be possible to extend these techniques to determine an LR schedule with an optimized LR for each training stage, but doing so is neither sample-efficient nor time-efficient, since the LR schedule would correspond to hundreds or thousands of hyperparameters. Optimization methods with adaptive LR: These optimizers adaptively adjust the LR for each training step by maintaining an estimate of a better learning rate separately for each parameter of the DNN. Adagrad (Duchi et al., 2011) applies lower LRs to parameters with larger accumulated gradients and higher LRs to those with smaller accumulated gradients. RMSprop (Tieleman & Hinton, 2012), AdaDelta (Zeiler, 2012), and Adam (Kingma & Ba, 2015) were later proposed to address the issue in Adagrad that the model stops learning due to the continual decay of the LR. These optimizers with adaptive LR are orthogonal to our automatic LR scheduler, and they still require a global learning rate schedule, which can be obtained from AutoLRS. In particular, their default hyperparameters do not always work well and need careful tuning, e.g.
, Adam's default LR of 0.001 performs poorly when training BERT and Transformer, and a better-tuned LR schedule can significantly reduce training time (§5). Recent optimization methods (Schaul et al., 2013; Mahsereci & Hennig, 2015) propose to remove the need for LR tuning in SGD altogether, but they are not widely used, potentially due to their limited applicability and sub-optimal performance (Baydin et al., 2018).
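The stage-wise loss-forecasting step at the heart of AutoLRS can be sketched numerically. The following is our own illustration with synthetic data (the constants, grid, and fitting procedure are assumptions for illustration, not the paper's implementation): we fit the exponential model loss(t) = a·exp(−b·t) + c to validation losses observed over the first τ′ steps of a candidate LR, then extrapolate to τ steps.

```python
import numpy as np

tau, tau_prime = 1000, 100                 # tau' = tau / 10, as in the paper
t = np.arange(1, tau_prime + 1).astype(float)
rng = np.random.default_rng(0)
# synthetic "observed" validation losses with small noise
y = 2.0 * np.exp(-0.01 * t) + 0.5 + 0.01 * rng.standard_normal(tau_prime)

# Fit a, b, c: grid-search the nonlinear decay rate b, and for each b solve
# the remaining linear least-squares problem for (a, c).
best = (np.inf, 0.0, 0.0, 0.0)
for b in np.linspace(1e-4, 0.1, 200):
    X = np.stack([np.exp(-b * t), np.ones_like(t)], axis=1)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = float(((X @ coef - y) ** 2).sum())
    if sse < best[0]:
        best = (sse, coef[0], b, coef[1])

_, a, b, c = best
pred_at_tau = a * np.exp(-b * tau) + c     # forecast fed back to BO
```

Only the extrapolated loss at τ is handed back to BO, so each candidate LR costs τ′ rather than τ training steps.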
This paper uses Bayesian optimization (BO) to dynamically tune the learning rate during the training of DNNs. In every stage of training, the algorithm first uses BO to explore different learning rates with the help of a parametric exponential model for validation-loss extrapolation, and then applies the selected learning rate for the current stage. The algorithm is applied to the training of state-of-the-art DNN models and is shown to outperform the original learning rate schedules, as well as other methods for learning rate scheduling.
SP:807c7df69d51b93b5a0da3ea56506a9bfadd0595
AutoLRS: Automatic Learning-Rate Schedule by Bayesian Optimization on the Fly
The learning rate ( LR ) schedule is one of the most important hyper-parameters needing careful tuning in training DNNs . However , it is also one of the least automated parts of machine learning systems and usually costs significant manual effort and computing . Though there are pre-defined LR schedules and optimizers with adaptive LR , they introduce new hyperparameters that need to be tuned separately for different tasks/datasets . In this paper , we consider the question : Can we automatically tune the LR over the course of training without human involvement ? We propose an efficient method , AutoLRS , which automatically optimizes the LR for each training stage by modeling training dynamics . AutoLRS aims to find an LR applied to every τ steps that minimizes the resulted validation loss . We solve this black-box optimization on the fly by Bayesian optimization ( BO ) . However , collecting training instances for BO requires a system to evaluate each LR queried by BO ’ s acquisition function for τ steps , which is prohibitively expensive in practice . Instead , we apply each candidate LR for only τ ′ τ steps and train an exponential model to predict the validation loss after τ steps . This mutual-training process between BO and the loss-prediction model allows us to limit the training steps invested in the BO search . We demonstrate the advantages and the generality of AutoLRS through extensive experiments of training DNNs for tasks from diverse domains using different optimizers . The LR schedules auto-generated by AutoLRS lead to a speedup of 1.22× , 1.43× , and 1.5× when training ResNet-50 , Transformer , and BERT , respectively , compared to the LR schedules in their original papers , and an average speedup of 1.31× over state-of-the-art heavily-tuned LR schedules . 1 INTRODUCTION . 
In the regime of deep learning , the success of training largely depends on the choice of the learning rate ( LR ) schedule , since most optimizers will have difficulty traversing a non-smooth and non-convex loss landscape with multiple local minimums and possibly saddle points ( Kawaguchi , 2016 ; Jin et al. , 2017 ; Goodfellow et al. , 2016 ; Li et al. , 2018a ) . To achieve stable and fast convergence towards a solution with good generalization performance , one has to tune the LR schedules carefully for different tasks ( Nar & Sastry , 2018 ; Jastrzębski et al. , 2017 ) . This tuning is usually non-trivial and requires many trial-and-error iterations that are computationally expensive . Moreover , the randomness of the widely-used mini-batch stochastic gradient descent ( SGD ) may introduce more uncertainty and difficulty in the tuning process . For the same reasons , it is also hard to directly formulate the search of the LR schedule as a well-posed optimization problem and address it through standard optimization . The broadly-adopted strategy is to either pick one from a family of pre-defined LR schedules or apply an optimizer that has a built-in mechanism changing the LR adaptively . However , we have a limited number of choices for pre-defined LR schedules , most of which are simple functions such as exponent or cosine and thus can not perfectly align with the non-smooth loss landscape . The latter set of adaptive optimizers , e.g. , Adam ( Kingma & Ba , 2015 ) and Adadelta ( Zeiler , 2012 ) , are extended from convex optimization and rely on strong assumptions to make the convergence properties hold . Moreover , the methods in both categories introduce new hyper-parameters that have to be tuned separately for different tasks or datasets , requiring significant human involvement . In this paper , we study the question : can we automatically tune the LR over the course of training without human involvement ? At the beginning of every τ steps ( i.e. 
, a “ stage ” in our method ) , we seek to identify an LR that optimizes the validation loss ( i.e. , an empirical estimate of the generalization error ) at the end of the stage . To do so , we employ Bayesian optimization ( BO ) that treats the validation loss as a black-box function of LR . BO simultaneously updates a posterior estimation of the black-box function and searches for the best LR with respect to the posterior . This approach is , however , computationally expensive since estimating the posterior needs many ( input , output ) instances of the function , and acquiring each instance costs τ steps of training . We , therefore , develop a simple yet efficient approximation : for every LR that BO decides to evaluate , we train the model by using the LR for only τ ′ τ steps and use the validation loss over the τ ′ steps to train a time-series forecasting model that provides a prediction of the validation loss after τ steps . As we will show later , an exponential model suffices to produce accurate predictions when using a small τ ′ = τ/10 . Then , AutoLRS can allow BO to explore ten different LRs in each stage and still bound the total running time to approximately twice the training cost associated with the generated schedule , i.e. , the time spent to find the stage-specific LRs is roughly equal to the time spent training the model with the identified LRs . AutoLRS does not depend on a pre-defined LR schedule , dataset , or a specified task and is compatible with almost all optimizers . Hence , it can be generally deployed across a broad range of ML tasks without much human involvement or expensive tuning over choices of LR schedules and their hyperparameters . Moreover , since it directly minimizes the validation loss , it does not only accelerate the convergence but also improves the generalization compared to just minimizing the training loss . Furthermore , AutoLRS only needs to update two extremely light-weight models , i.e. 
, the BO posterior and the exponential forecasting model , and it is efficient in exploring the loss landscape . Hence , it does not result in notable extra costs in either memory or computation . Note that AutoLRS searches for better LRs based on the training dynamics , which can be seen as a form of selfsupervision . The interaction between BO and the forecasting model is an example of mutual learning , where one produces training data for the other . In experiments , we apply AutoLRS to train three representative DNNs widely used in practice , i.e. , ResNet-50 ( He et al. , 2016a ) on ImageNet classification ( Russakovsky et al. , 2015 ) ; Transformer ( Vaswani et al. , 2017 ) and BERT ( Devlin et al. , 2019 ) for NLP tasks . Though they have been extensively studied and have hand-tuned LR schedules , the LR schedules computed by AutoLRS are faster than the original , hand-tuned , LR schedules by 1.22× , 1.43× , and 1.5× for training ResNet-50 , Transformer , and BERT , respectively , in terms of the training steps used to update the DNN ( i.e. , excluding the costs of the LR/hyperparameter search ) . It meanwhile achieves test-set performance better or on par with state-of-the-art results . We also carefully hand-tuned two state-of-the-art learning rate schedules , CLR ( Smith , 2017 ) and SGDR ( Loshchilov & Hutter , 2017 ) , and conducted more than ten experiments with different CLR/SGDR hyperparameters on each model . AutoLRS still has an average speedup of 1.29× and 1.34× across the three models , in terms of training steps , compared to the best CLR and SGDR LR schedules , respectively . The AutoLRS implementation is available at https : //github.com/YuchenJin/autolrs . 2 RELATED WORK . Learning rate scheduling : In contrast to traditional LR schedules with a monotone decreasing sequence of LRs and multi-step LR schedule , a recent class of LR schedules propose to apply multiple cycles of LR decay . 
Cyclical Learning Rate (CLR) varies the LR between a minimal LR (ηmin) and a maximal LR (ηmax) at a pre-defined frequency and achieves faster convergence for some DNNs (Smith, 2017). The approach requires an "LR range test" to estimate the minimal and maximal LR: the test trains the model with a linearly increasing LR between a low and a high value and identifies the range ([ηmin, ηmax]) over which the training loss decreases. The author proposed three variants of CLR: triangular2, which halves the maximum LR bound after each cycle; exp_range, which exponentially reduces the maximum LR bound after each cycle; and 1cycle, which contains only one triangular cycle (Smith, 2018). Similar to CLR, Stochastic Gradient Descent with Warm Restarts (SGDR) restarts the LR and then applies cosine annealing/decay at a pre-defined frequency (Loshchilov & Hutter, 2017). Neither CLR nor SGDR is automatic, because both are quite sensitive to their hyperparameters, which require careful hand-tuning. With suboptimal hyperparameters, CLR and SGDR may even cause undesirable divergence of the loss during training (see §5). Learning rate adaptation with hypergradient descent: Aiming for the same goal of automatically tuning the LR, hypergradient-based techniques (Almeida et al., 1998; Franceschi et al., 2017; Baydin et al., 2018; Donini et al., 2020) optimize the LR schedule by applying gradient descent on the objective function w.r.t. the LR during training. In addition to the initial value of the regular LR, this introduces a hypergradient LR whose initial value is another hyperparameter to be specified. We show experimentally that this technique is subject to overfitting, is quite sensitive to its two hyperparameters, and is unable to match state-of-the-art test-set performance on the models we test (§A.5.1). We also compare its performance against AutoLRS (§A.5.2).
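For concreteness, the two baseline schedules can be sketched as follows. This is a simplified rendering: the basic triangular CLR policy (Smith, 2017) and SGDR with a fixed restart period, without the triangular2/exp_range decay variants or SGDR's period-lengthening multiplier.

```python
import math

def clr_triangular(step, eta_min, eta_max, half_cycle):
    """Triangular CLR: the LR ramps linearly from eta_min up to eta_max
    and back down, completing one full cycle every 2*half_cycle steps."""
    cycle = math.floor(1 + step / (2 * half_cycle))
    x = abs(step / half_cycle - 2 * cycle + 1)
    return eta_min + (eta_max - eta_min) * max(0.0, 1 - x)

def sgdr_cosine(step, eta_min, eta_max, period):
    """SGDR with a fixed restart period: cosine annealing from eta_max
    down to eta_min, restarting (warm restart) every `period` steps."""
    t = step % period
    return eta_min + 0.5 * (eta_max - eta_min) * (1 + math.cos(math.pi * t / period))
```

For example, with ηmin = 0.001, ηmax = 0.1, and a half-cycle of 100 steps, `clr_triangular` returns ηmin at steps 0 and 200 and ηmax at step 100, while `sgdr_cosine` with period 100 restarts at ηmax every 100 steps. The sensitivity noted above corresponds to the choice of ηmin, ηmax, and the cycle length.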
DNN hyperparameter optimization: Automatic hyperparameter search for DNNs has been broadly studied in recent years. When applied to learning rates, these methods determine an optimized LR value that is kept constant (or constrained to a pre-defined shape) throughout the entire training process, as opposed to determining an LR schedule. They can be primarily categorized into Bayesian-optimization-based approaches (Hutter et al., 2011; Snoek et al., 2012; Bergstra et al., 2013), bandit-based solutions (Li et al., 2017; 2018b), hybrid approaches that combine the two (Falkner et al., 2018; Zela et al., 2018), and population-based methods (Jaderberg et al., 2017; Parker-Holder et al., 2020). It might be possible to extend these techniques to determine an LR schedule with an optimized LR for each training stage, but doing so is neither sample-efficient nor time-efficient, since the LR schedule would correspond to hundreds or thousands of hyperparameters. Optimization methods with adaptive LR: These optimizers adaptively adjust the LR at each training step by maintaining a separate estimate of a better learning rate for each parameter in the DNN. Adagrad (Duchi et al., 2011) applies lower LRs to parameters with larger accumulated gradients and higher LRs to those with smaller accumulated gradients. RMSprop (Tieleman & Hinton, 2012), AdaDelta (Zeiler, 2012), and Adam (Kingma & Ba, 2015) were later proposed to address the issue in Adagrad that the model stops learning due to the continual decay of the LR. These optimizers with adaptive LR are orthogonal to our automatic LR scheduler: they still require a global learning rate schedule, which can be obtained from AutoLRS. In particular, their default hyperparameters do not always work well and need careful tuning, e.g.
, Adam's default LR of 0.001 performs poorly when training BERT and Transformer, and a better-tuned LR schedule can significantly reduce the training time (§5). Recent optimization methods (Schaul et al., 2013; Mahsereci & Hennig, 2015) propose to remove the need for LR tuning in SGD altogether, but they are not widely used, potentially due to their limited applicability and sub-optimal performance (Baydin et al., 2018).
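The per-parameter adaptation that Adagrad performs, as described above, can be seen in a minimal sketch of the textbook update (not any particular library's implementation): each coordinate is divided by the square root of its own accumulated squared gradients, so after many steps the progress per parameter depends on the step count far more than on the raw gradient magnitude.

```python
import numpy as np

def adagrad_step(theta, grad, accum, lr=0.1, eps=1e-8):
    """One Adagrad update: each parameter is scaled by the inverse square
    root of its own accumulated squared gradients, so parameters with
    persistently large gradients receive an effectively smaller LR."""
    accum = accum + grad ** 2
    theta = theta - lr * grad / (np.sqrt(accum) + eps)
    return theta, accum

# Two parameters receiving constant gradients that differ by 100x.
theta = np.zeros(2)
accum = np.zeros(2)
g = np.array([1.0, 0.01])
for _ in range(100):
    theta, accum = adagrad_step(theta, g, accum)
# With a constant gradient, the update reduces to -lr/sqrt(step), so
# both parameters end up having moved by nearly the same amount.
```

This also illustrates the continual-decay issue mentioned above: the effective step shrinks like 1/√t, which RMSprop, AdaDelta, and Adam address by replacing the running sum with an exponential moving average.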
Training deep neural networks is typically done using gradient-based methods with either pre-defined learning-rate schedules or off-the-shelf adaptive optimizers (such as Adam). The former cannot reliably align with the non-linear loss landscape, while the latter add additional hyperparameters to tune. This paper proposes an algorithm for automatically tuning a learning-rate schedule. The method works by modelling the training dynamics and adapting the learning rate to optimize performance on the validation set. The method does end up introducing additional hyperparameters.
Ensembles of Generative Adversarial Networks for Disconnected Data
1 INTRODUCTION. Generative networks, such as generative adversarial networks (GANs) (Goodfellow et al., 2014) and variational autoencoders (Kingma & Welling, 2013), have shown impressive performance in generating highly realistic images that were not observed in the training set (Karras et al., 2017; 2019a; b). However, even state-of-the-art generative networks such as BigGAN (Brock et al., 2018) generate poor-quality imagery when conditioned on certain classes of ILSVRC2012 (Russakovsky et al., 2015). We argue that this is due to the inherent disconnected structure of the data. In this paper, we theoretically analyze the effects of disconnected data on GAN performance. By disconnected, we mean that the data points are drawn from an underlying topological space that is disconnected (the rigorous definition is provided in Section 3.1). As an intuitive example, consider the collection of all images of badgers and all images of zebras. These two sets are disconnected, because images of badgers do not resemble images of zebras, and modeling the space connecting these sets does not represent real images of animals. We rigorously prove that one cannot use a single continuous generative network to learn a data distribution perfectly under the disconnected data model. Because generative networks are continuous, they cannot map a connected latent space (R^ℓ) into the disconnected image space without generating data outside of the true data space. In related work, Khayatkhoei et al. (2018) empirically studied disconnected data but did not formally prove the results in this paper. In addition, those authors use a completely unsupervised approach that attempts to find the disconnected components as part of learning. In contrast, we use class labels and hence work in the supervised learning regime. Our suggested approach to deal with disconnected data is to use ensembles of GANs.
We study GANs in particular for concreteness and because of their widespread application; however, our methods can be extended to other generative networks with some modification. Ensembles of GANs are not new, e.g., see (Nguyen et al., 2017; Ghosh et al., 2018; Tolstikhin et al., 2017; Arora et al., 2017), but there has been limited theoretical study of their properties. We prove that ensembles can learn the data distribution under the disconnected data assumption and study their relationship to single GANs. Specifically, we develop a first-of-its-kind theoretical framework that relates single GANs, ensembles of GANs, conditional GANs, and Gaussian mixture GANs. The framework makes it easy to, e.g., develop regularized GAN ensembles that encourage parameter sharing, which we show outperform cGANs and single GANs. While our primary focus here is on theoretical insight, we also conduct a range of experiments to demonstrate empirically that performance (measured in terms of FID (Heusel et al., 2017), MSE to the training set (Metz et al., 2016), Precision, and Recall (Sajjadi et al., 2018)) increases when we use an ensemble of WGANs instead of a single WGAN on the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). The performance increase can be explained by three contributing factors: 1) the ensemble has more parameters and hence higher capacity to learn complex distributions; 2) the ensemble better captures the disconnected structure of the data; and 3) parameter sharing among ensemble networks enables successful joint learning, which we observe can increase performance. We summarize our contributions as follows: • We prove that generative networks, which are continuous functions, cannot learn the data distribution if the data is disconnected (Section 3.2).
The disconnected data model is defined in Section 3.1, where we argue that it is satisfied by many common datasets, such as MNIST, CIFAR-10, and ILSVRC2012. Restricting the generator to a disconnected subset of the domain is one solution (Section 3.3), but we study a better one: using ensembles. • We demonstrate how single GANs and ensembles are related (Section 4.1). We then prove that ensembles are able to learn the true data distribution under our disconnected data model (Section 4.2). Finally, we demonstrate that there is an equivalence between ensembles of GANs and common architectures such as cGANs and GM-GANs due to parameter sharing between ensemble components (Section 4.3). • We empirically show that, in general, an ensemble of GANs outperforms a single GAN (Section 5.1). This holds even if we reduce the number of parameters used in an ensemble so that it has fewer total parameters than a single GAN (Section 5.2). Finally, we empirically show that parameter sharing among ensemble networks leads to better performance than a single GAN (Section 5.3) or even a cGAN (Section 5.4). 2 BACKGROUND AND RELATED WORK. 2.1 GENERATIVE ADVERSARIAL NETWORKS (GANS). GANs are generative neural networks trained with an adversarial loss, typically provided by another neural network (Goodfellow et al., 2014). In other words, a GAN consists of two neural networks that compete against each other. The generator G: R^ℓ → R^p is a neural network that generates p-dimensional images from an ℓ-dimensional latent space. The discriminator D: R^p → (0, 1) is a neural network trained to classify between the training set and generated images. As compositions of continuous functions (Goodfellow et al., 2016), both G and D are continuous. G has parameters θ_G ∈ R^{|θ_G|}, where |θ_G| is the possibly infinite cardinality of θ_G. Similarly, D has parameters θ_D ∈ R^{|θ_D|}.
The latent, generated, and data distributions are P_z, P_G, and P_X, respectively. We train this network by solving the following optimization problem: min_{θ_G} max_{θ_D} V(θ_G, θ_D) = min_{θ_G} max_{θ_D} E_{x∼P_X}[log D(x)] + E_{z∼P_z}[log(1 − D(G(z)))]. (1) Here we write min and max instead of minimize and maximize for notational compactness, but we are referring to an optimization problem. The objective of this optimization is to learn the true data distribution, i.e., P_G = P_X. Alternatively, we can use the Wasserstein distance instead of the typical cross-entropy loss: V(θ_G, θ_D) = E_{x∼P_X}[D(x)] − E_{z∼P_z}[D(G(z))], restricted to those θ_G, θ_D that force D to be 1-Lipschitz, as done in the WGAN paper (Arjovsky et al., 2017). Thus, we will use V to denote either of these two objective functions. 2.2 GANS THAT TREAT SUBSETS OF DATA DIFFERENTLY. Ensembles of GANs. Datasets with many different classes, such as ILSVRC2012 (Russakovsky et al., 2015), are harder to learn in part because the relationship between classes is difficult to quantify. Some models, such as AC-GANs (Odena et al., 2017), tackle this complexity by training different models on different classes of data in a supervised fashion. In the AC-GAN paper, the authors train 100 GANs on the 1000 classes of ILSVRC2012. The need for these ensembles is not theoretically studied or justified beyond their intuitive usefulness. Several ensembles of GANs have been studied in the unsupervised setting, where the modes or disconnected subsets of the latent space are typically learned (Pandeva & Schubert, 2019; Hoang et al., 2018; Khayatkhoei et al., 2018) with information-theoretic regularization as done in (Chen et al., 2016). These are unsupervised approaches, which we do not study in this paper. Models such as SGAN (Chavdarova & Fleuret, 2018) and standard GAN ensembles (Wang et al., 2016) use several GANs in part to increase the capacity or expressiveness of GANs.
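Both value functions can be estimated by Monte Carlo for a fixed toy generator and discriminator. The 1-D Gaussians and the logistic D below are illustrative assumptions of ours (not the paper's experimental setup), and the 1-Lipschitz constraint of the Wasserstein variant is not enforced in this toy evaluation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D setup: data x ~ N(2, 1); the "generator" just passes the
# latent through, G(z) = z with z ~ N(0, 1).
def G(z):
    return z

# A fixed logistic discriminator D(x) = sigmoid(x - 1), with outputs
# in (0, 1) as required by the cross-entropy objective.
def D(x):
    return 1.0 / (1.0 + np.exp(-(x - 1.0)))

x = rng.normal(2.0, 1.0, 10000)  # samples from P_X
z = rng.normal(0.0, 1.0, 10000)  # samples from P_z

# Monte-Carlo estimate of the saturating GAN value in Eq. (1).
V_gan = np.mean(np.log(D(x))) + np.mean(np.log(1.0 - D(G(z))))

# Wasserstein-style value with D playing the critic.
V_w = np.mean(D(x)) - np.mean(D(G(z)))
```

Since D assigns higher scores to real samples here, V_gan is finite and negative (both log terms are negative by construction) and V_w is positive, reflecting the gap between the data and generated distributions.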
Other ensembles, such as Dropout-GAN (Mordido et al., 2018), help increase the robustness of the generative network. Conditional GANs (cGANs). Conditional GANs (Mirza & Osindero, 2014) attempt to solve the optimization problem in (1) by conditioning on the class y, a one-hot vector. The generator and discriminator both take y as an additional input. This conditioning can be implemented by making y part of the input, e.g., the input to the generator will be [z^T y^T]^T instead of just z. Typically, conventional cGANs make the following architecture modification: the first layer has an additive bias that depends on the class vector y, and the rest is the same. For example, consider a multilayer perceptron with matrix W in the first layer. Converting this network to be conditional results in the following modification to the first layer: W_conditional [x; y] = [W B][x; y] = Wx + By = Wx + B_{·,k}. Hence, we can think of B as a matrix whose columns B_{·,k}, k ∈ {1, ..., K}, are bias vectors, with W the same as before. We pick a bias vector B_{·,k} based on the class we are conditioning on, but the other parameters of the network are held the same, independent of k. This is done to both the generator and the discriminator. Some cGANs condition on multiple layers, such as BigGAN (Brock et al., 2018), or on different types of layers, such as convolutional layers, but our formulation extends clearly to those architectures. Gaussian Mixture GANs (GM-GANs). The latent distribution P_z is typically chosen to be uniform, isotropic Gaussian, or truncated isotropic Gaussian (Goodfellow et al., 2014; Radford et al., 2015; Brock et al., 2018). We are not restricted to these distributions; research has been conducted on extending and studying the effect of using different latent distributions, such as a mixture of Gaussians (Ben-Yosef & Weinshall, 2018; Gurumurthy et al.
, 2017). 3 CONTINUOUS GENERATIVE NETWORKS CANNOT MODEL DISTRIBUTIONS DRAWN FROM DISCONNECTED DATA. 3.1 DISCONNECTED DATA MODEL. We begin by introducing a new data model that accounts for disconnected data. Typical datasets with class labels satisfy this model; we provide additional examples below. Definition 1 (Disconnected data model). We assume that the data lies on K disjoint, compact sets X_k ⊂ R^p, k ∈ {1, ..., K}, so that the whole data lies on the disjoint union of the components: ⊔_{k=1}^{K} X_k = X. Moreover, we assume that each component X_k is connected (Rudin, 1964). We then draw data points from these sets in order to construct our finite datasets. In Definition 1, we let each X_k be compact in order to rule out the degenerate case of two components X_k and X_j that are arbitrarily close to one another, which is possible if we only assume that X is closed and disjoint. In that case, there are trivial counter-examples (see the appendix) to the theorems proved below. Lemma 1. X is a disconnected set, and X_j is disconnected from X_k for j ≠ k. Disconnected datasets are ubiquitous in machine learning (Khayatkhoei et al., 2018; Hoang et al., 2018; Pandeva & Schubert, 2019). For example, datasets with discrete labels (typical in classification problems) will often be disconnected. We study this disconnected data property because generative networks are unable to learn a distribution supported on such a dataset, as we show below.
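The impossibility argument has a simple constructive flavor: a continuous map sends a connected latent set to a connected image, so any generator that reaches both components must also produce samples in the gap between them. A toy 1-D sketch of this intermediate-value argument, in which the sigmoid "generator" and the two component intervals are hypothetical choices of ours:

```python
import math

# Two disconnected 1-D components: [-0.1, 0.1] and [4.9, 5.1];
# anything strictly between them lies in the "gap".
def in_gap(x):
    return 0.1 < x < 4.9

# A continuous toy generator that maps most of the latent line close
# to the two components (a smooth step from 0 to 5).
def G(z):
    return 5.0 / (1.0 + math.exp(-8.0 * z))

z1, z2 = -3.0, 3.0  # G(z1) lands in component 1, G(z2) in component 2
# By the intermediate value theorem, some z between z1 and z2 must map
# into the gap; bisection locates such a latent point explicitly.
lo, hi = z1, z2
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if G(mid) < 2.5:
        lo = mid
    else:
        hi = mid
z_gap = 0.5 * (lo + hi)  # G(z_gap) sits far from both components
```

The compactness assumption in Definition 1 is what makes this gap have positive width, so such off-support samples are genuinely "bad" rather than vanishingly close to the data.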
This work proposes that most relevant datasets to the machine learning community today have support on a mixture of disconnected components. They argue that popular GAN models cannot fit distributions of this kind and provide a number of proofs to convince the reader of this claim. The authors discuss a number of simple modifications on top of standard GANs which can alleviate this issue. These include replacing the standard unimodal latent distribution of the generator with a mixture model and replacing the standard generator with an ensemble of generators. The authors demonstrate that by replacing the standard GAN with an ensemble (or a pseudo-ensemble) they can achieve improved performance over a standard GAN given a fixed parameter budget with respect to a number of different evaluation metrics.
This paper addresses the problem that models like GANs, which learn continuous mappings from a connected latent space, are incapable of producing a mapping whose image has disconnected components. The authors argue that real-world classes like "badger" and "zebra" are indeed disconnected, and thus the inability of GANs to learn a disconnected mapping poses a problem. The authors formalize this statement and argue that one should instead learn an ensemble of GANs. These aren't really ensembles as in classification but components of a mixture model: each GAN represents one component of a mixture distribution. They also argue that conditional GANs and GANs with mixture distributions over the latents are similar. The paper closes with interesting but insufficient experiments showing the benefits of ensembles of GANs on one dataset (CIFAR-10).
Generalizing Graph Convolutional Networks via Heat Kernel
Graph convolutional networks (GCNs) have emerged as a powerful framework for mining and learning with graphs. A recent study shows that GCNs can be simplified into a linear model by removing nonlinearities and weight matrices across all consecutive layers, resulting in the simple graph convolution (SGC) model. In this paper, we aim to understand GCNs and generalize SGC into a linear model via the heat kernel (HKGCN), which acts as a low-pass filter on graphs and enables the aggregation of information from extremely large receptive fields. We theoretically show that HKGCN is by nature a continuous propagation model and that GCNs without nonlinearities (i.e., SGC) are discrete versions of it. Its low-pass-filter and continuity properties facilitate the fast and smooth convergence of feature propagation. Experiments on million-scale networks show that the linear HKGCN model not only achieves consistently better results than SGC but can also match or even beat advanced GCN models, while maintaining SGC's superiority in efficiency. 1 INTRODUCTION. Graph neural networks (GNNs) have emerged as a powerful framework for modeling structured and relational data (Gori et al., 2005; Scarselli et al., 2008; Gilmer et al., 2017; Kipf & Welling, 2017). A wide range of graph mining tasks and applications have benefited from their recent emergence, such as node classification (Kipf & Welling, 2017; Veličković et al., 2018), link inference (Zhang & Chen, 2018; Ying et al., 2018), and graph classification (Xu et al., 2019b). The core procedure of GNNs is the (discrete) feature propagation operation, which propagates information between nodes layer by layer based on rules derived from the graph structure. Take the graph convolutional network (GCN) (Kipf & Welling, 2017) for example: its propagation is performed through the normalized Laplacian of the input graph.
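As a rough illustration of the SGC-style linear propagation the paper builds on, here is a minimal numpy sketch: add self-loops, symmetrically normalize the adjacency, and apply the propagation matrix K times. The toy graph, features, and K are our own made-up values, not the authors' code.

```python
import numpy as np

def sgc_features(A, X, K=2):
    """SGC-style preprocessing: with self-loops added and the adjacency
    symmetrically normalized (S = D^{-1/2} A~ D^{-1/2}), propagate the
    features K times. The result S^K X is what a linear classifier
    would then consume."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)                     # degrees (with loops)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_tilde @ D_inv_sqrt
    return np.linalg.matrix_power(S, K) @ X

# Toy path graph v1 - v2 - v3 with scalar features; propagation pulls
# the features toward their neighborhood averages (smoothing).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1.], [0.], [1.]])
out = sgc_features(A, X, K=2)
```

After propagation, the spread of the features shrinks, which is the smoothing behavior that both SGC and the heat-kernel generalization exploit.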
Such a procedure usually involves 1) a non-linear feature transformation, commonly implemented by an activation function such as ReLU, and 2) discrete propagation, layer by layer. Over the course of its development, various efforts have been devoted to advancing this propagation-based architecture, such as incorporating self-attention in GAT (Veličković et al., 2018), mixing high-order neighborhoods in MixHop (Abu-El-Haija et al., 2019), and leveraging graphical models in GMNN (Qu et al., 2019). Recently, Wu et al. (2019) observed that the non-linear part of GCNs' feature propagation is actually associated with excess complexity and redundant operations. To that end, they simplify GCNs into the linear model SGC by removing all non-linearities between consecutive GCN layers. Surprisingly, SGC offers comparable or even better performance than advanced GCN models, based on which they argue that the repeated graph propagation, rather than the non-linear feature transformation, may contribute the most to the expressive power of GCNs. Despite these interesting results, SGC still inherits the discrete nature of GCN propagation, which can lead to strong oscillations during the procedure. Take, for example, a simple graph of two nodes v1 and v2 with one-dimensional input features x1 = 1 and x2 = 2 and one weighted edge between them: the feature updates of x1 and x2 during GCN propagation are shown in Figure 1(a), from which we can clearly observe the oscillations of x1 and x2 step by step. This indicates that although features from multiple hops away may seem to be taken into account during GCN propagation, the model is still far from learning useful patterns from them. In this work, we aim to generalize GCNs into a continuous and linear propagation model, which we refer to as HKGCN. We derive inspiration from Newton's law of cooling by assuming that graph feature propagation follows a similar process.
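The two-node oscillation can be reproduced with a toy explicit finite-difference propagation x ← (I − hL)x, which is the discrete counterpart of heat diffusion. The edge weight w = 1.5 and the unit step size h = 1 are hypothetical choices of ours (not read off the paper's Figure 1): whenever h·w exceeds the stability threshold, each step overshoots the features' common average.

```python
import numpy as np

# Two-node graph with one edge of weight w; L is its graph Laplacian.
w = 1.5                                  # hypothetical edge weight
L = np.array([[w, -w], [-w, w]])
x0 = np.array([1.0, 2.0])                # the paper's initial features

# Discrete propagation = explicit finite-difference step x <- (I - hL)x.
# With h = 1 and w = 1.5 the non-trivial eigenvalue of (I - hL) is
# 1 - 2hw = -2, so the features flip around their mean 1.5 every step.
h = 1.0
traj = [x0]
for _ in range(5):
    traj.append((np.eye(2) - h * L) @ traj[-1])
traj = np.array(traj)
```

Note that each row of (I − hL) sums to 1, so the mean feature value 1.5 is preserved exactly while the two features oscillate ever more wildly around it, which is the step-by-step overshoot described above.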
Straightforwardly, this leads us to leverage the heat kernel for feature propagation in HKGCN. Theoretically, we show that the propagation matrix of GCNs is equivalent to a finite-difference version of the heat kernel. In other words, using the heat kernel as the propagation matrix leads to smooth feature convergence. On the same example as above, we show that the heat kernel based propagation in HKGCN prevents oscillations, as illustrated in Figure 1(b). Finally, from the graph spectral perspective, the heat kernel acts as a low-pass filter whose cutoff frequency can be adjusted by changing the propagation time. Empirically, we demonstrate the performance of HKGCN on both transductive and inductive semi-supervised node classification tasks. The experiments are conducted on both traditional GNN datasets, such as Cora, CiteSeer, Pubmed, and Reddit, and the latest graph benchmarks indexed by the Open Graph Benchmark (Hu et al., 2020). The results suggest that the simple and linear HKGCN model consistently outperforms SGC on all six datasets and matches or even beats the performance of advanced graph neural networks on both tasks, while at the same time maintaining the order-of-magnitude efficiency advantage inherited from SGC.

2 RELATED WORK

Graph Neural Networks. Graph neural networks (GNNs) have emerged as a new paradigm for graph mining and learning, with significant progress made in recent years. Notably, the spectral graph convolutional network (Bruna et al., 2013) is among the first to directly use back-propagation to learn the kernel filter, but it suffers from high time complexity. Another work shows how to use Chebyshev polynomial approximation to compute the filter kernel quickly (Hammond et al., 2011). Attempts to further this direction leverage Chebyshev expansion to achieve the same linear computational complexity as classical CNNs (Defferrard et al., 2016).
Later, the graph convolutional network (GCN) (Kipf & Welling, 2017) simplified the filter kernel to the second order of the Chebyshev expansion, inspiring various advancements in GNNs. GAT brings attention mechanisms into graph neural networks (Veličković et al., 2018). GMNN combines the benefits of statistical relational learning and GNNs into a unified framework (Qu et al., 2019). To enable fast and scalable GNN training, FastGCN interprets graph convolutions as integral transforms of features and thus uses a Monte Carlo method to simulate the feature propagation step (Chen et al., 2018). GraphSage treats feature propagation as aggregation from (sampled) neighborhoods (Hamilton et al., 2017). LADIES (Zou et al., 2019) further introduces a layer-dependent importance sampling technique for efficient training. Recently, there have also been research efforts devoted to the theoretical or deeper understanding of GCNs (Xu et al., 2019b; Battaglia et al., 2018). For example, the feature propagation in GNNs can also be explained as neural message passing (Gilmer et al., 2017). In addition, studies find that the performance of GNNs decreases as more layers are added, known as the over-smoothing issue (Li et al., 2018; Zhao & Akoglu, 2020). To reduce GCNs' complexity, SGC turns the GCN model into a linear model by removing the non-linear activation operations between consecutive GCN layers (Wu et al., 2019), producing promising results in terms of both efficacy and efficiency.

Heat Kernel. The properties of the heat kernel for graphs are reviewed in detail by Chung & Graham (1997). Recently, the heat kernel has frequently been used as a feature propagation modulator. Kondor & Lafferty (2002) show that the heat kernel can be regarded as the discretization of the familiar Gaussian kernel of Euclidean space.
Additionally, the heat kernel is often used as the window function for the windowed graph Fourier transform (Shuman et al., 2016). In (Zhang et al., 2019), the second-order heat kernel is used as a band-pass filter kernel to amplify local and global structural information for network representation learning.

Concurrent work. Several recent works have developed similar ideas. Poli et al. (2020) and Zhuang et al. (2020) use the Neural ODE framework and parametrize the derivative function directly with a 2- or 3-layer GNN. Xhonneux et al. (2020) improve the ODE formulation by developing a continuous message-passing layer. All of these ODE models make features converge to a stable point by adding residual connections. In contrast, our model outputs an intermediate feature state that balances local and global features. Some recent works (Xu et al., 2019a; Klicpera et al., 2019) propose to leverage the heat kernel to enhance low-frequency filters and enforce smooth feature propagation. However, they do not identify the relationship between the feature propagation of GCNs and the heat kernel.

3 GENERALIZING (SIMPLE) GRAPH CONVOLUTION VIA HEAT KERNEL

3.1 PROBLEM AND BACKGROUND

We focus on the problem of semi-supervised node classification on graphs, the same setting as GCN (Kipf & Welling, 2017). Without loss of generality, the input to this problem is an undirected network $G = (V, E)$, where $V$ denotes the node set of $n$ nodes $\{v_1, \ldots, v_n\}$ and $E$ represents the edge set. The symmetric adjacency matrix of $G$ is denoted $A$ and its diagonal degree matrix $D$, with $D_{ii} = \sum_j A_{ij}$. Each node $v_i \in V$ is associated with a feature vector $x_i$ (the $i$th row of $X \in \mathbb{R}^{n \times d}$) and a one-hot label vector $y_i$ (the $i$th row of $Y \in \{0, 1\}^{n \times C}$), where $C$ is the number of classes. The problem setting of semi-supervised graph learning is: given the labels $Y_L$ of a subset of nodes $V_L$, infer the labels $Y_U$ of the remaining nodes $V_U = V \setminus V_L$.

Graph Convolutional Networks.
Given the input graph $G = (V, E)$ with $A$, $D$, $X$, and $Y_L$, GCN can be understood as feature propagation over the graph structure. Specifically, it follows the propagation rule
$$H^{(l+1)} = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right), \qquad (1)$$
where $\tilde{A} = A + I_N$ is the adjacency matrix with added self-connections ($I_N$ is the identity matrix), $W^{(l)}$ is the trainable weight matrix of the $l$th layer, $\sigma(\cdot)$ is a nonlinear function such as ReLU, and $H^{(l)}$ denotes the hidden node representations in the $l$th layer, with $H^{(0)} = X$. The essence of GCN is that each GCN layer is equivalent to the first-order Chebyshev expansion of the spectral convolution (Kipf & Welling, 2017). It also assumes that the first-order coefficient $a_1$ equals the 0th-order coefficient $a_0$ multiplied by $-1$, i.e., $a_1 = -a_0$. We will later prove that this is just a discrete solution of the heat equation.

Simple Graph Convolution. Since its inception, GCN has drawn tremendous attention from researchers (Chen et al., 2018; Veličković et al., 2018; Qu et al., 2019). A recent study shows that GCNs can be simplified into the Simple Graph Convolution (SGC) model by removing the nonlinearities between GCN layers (Wu et al., 2019). Specifically, SGC is a linear model formalized by the propagation rule
$$Y = \mathrm{softmax}\left(\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}}\right)^{K} X W\right). \qquad (2)$$
Surprisingly, the linear SGC model yields prediction accuracy comparable to sophisticated GCN models on various downstream tasks, with significant advantages in efficiency and scalability due to its simplicity.

Heat Equation and Heat Kernel. The heat equation, a special case of the diffusion equation, describes how heat distributes and flows over time (Widder & Vernon, 1976).
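As a concrete reference for the SGC rule in Eq. (2), here is a minimal numpy sketch of the forward pass (the graph, features, weights, and dimensions are random placeholders of our own choosing, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random undirected toy graph: n nodes, d input features, C classes, K hops.
n, d, C, K = 8, 5, 3, 2
A = rng.integers(0, 2, size=(n, n)).astype(float)
A = np.triu(A, 1); A = A + A.T                       # symmetric, no self-loops
X = rng.normal(size=(n, d))
W = rng.normal(size=(d, C))                          # the single trainable matrix

def sgc_forward(A, X, W, K):
    """Y = softmax((D~^{-1/2} A~ D~^{-1/2})^K X W), as in Eq. (2)."""
    A_tilde = A + np.eye(len(A))                     # add self-connections
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    S = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    H = X
    for _ in range(K):                               # K-hop linear propagation,
        H = S @ H                                    # no nonlinearity in between
    logits = H @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

Y = sgc_forward(A, X, W, K)
```

Because the propagation is linear, $S^K X$ can be precomputed once, after which training reduces to logistic regression on the smoothed features; this is the source of SGC's efficiency advantage.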
Imagine a graph in which each node has a temperature, heat energy can only transfer along the edges between connected nodes, and heat propagation follows Newton's law of cooling. The heat flow between nodes $v_i$ and $v_j$ is then proportional to 1) the edge weight and 2) the temperature difference between $v_i$ and $v_j$. Letting $x_i^{(t)}$ denote the temperature of $v_i$ at time $t$, the heat diffusion on graph $G$ can be described by the heat equation
$$\frac{d x_i^{(t)}}{dt} = -k \sum_j A_{ij}\left(x_i^{(t)} - x_j^{(t)}\right) = -k \left[ D_{ii} x_i^{(t)} - \sum_j A_{ij} x_j^{(t)} \right]. \qquad (3)$$
In matrix form, the equation reads $\frac{dX^{(t)}}{dt} = -k L X^{(t)}$, where $L = D - A$ is the graph Laplacian matrix. By reparameterizing $t$ and $k$ into a single term $t' = kt$, the equation can be rewritten as
$$\frac{dX^{(t')}}{dt'} = -L X^{(t')}. \qquad (4)$$
A heat kernel is the fundamental solution of the heat equation (Chung & Graham, 1997). The heat kernel $H_t$ is defined to be the $n \times n$ matrix
$$H_t = e^{-Lt}. \qquad (5)$$
Given the initial state $X^{(0)} = X$, the solution to the heat equation in Eq. (4) can be written as
$$X^{(t)} = H_t X. \qquad (6)$$
Naturally, the heat kernel can be used as the feature propagation matrix in GCNs.
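The defining properties of the heat kernel in Eqs. (5)-(6) can be checked numerically. The sketch below (the graph, feature matrix, and time values are illustrative choices of ours) verifies the semigroup property $H_{t_1+t_2} = H_{t_2} H_{t_1}$ and that $X^{(t)} = H_t X$ indeed satisfies $\frac{dX^{(t)}}{dt} = -L X^{(t)}$ via a finite difference:

```python
import numpy as np

rng = np.random.default_rng(1)

# Small random undirected graph and its Laplacian L = D - A.
n = 6
A = rng.integers(0, 2, size=(n, n)).astype(float)
A = np.triu(A, 1); A = A + A.T
L = np.diag(A.sum(axis=1)) - A

evals, evecs = np.linalg.eigh(L)     # L is symmetric: exact spectral exponential

def heat_kernel(t):
    """H_t = e^{-Lt}, Eq. (5), via the eigendecomposition of L."""
    return evecs @ np.diag(np.exp(-evals * t)) @ evecs.T

X = rng.normal(size=(n, 3))
t1, t2, eps = 0.4, 0.7, 1e-6

# Semigroup property: propagating for t1 then t2 equals propagating for t1 + t2.
lhs = heat_kernel(t2) @ (heat_kernel(t1) @ X)
rhs = heat_kernel(t1 + t2) @ X

# X(t) = H_t X solves the heat equation: finite difference of X(t) matches -L X(t).
Xt = heat_kernel(t1) @ X
dXdt = (heat_kernel(t1 + eps) @ X - heat_kernel(t1 - eps) @ X) / (2 * eps)
```

The eigendecomposition route also makes the low-pass interpretation explicit: each graph-frequency component of $X$ is attenuated by the factor $e^{-\lambda t}$, so increasing $t$ lowers the effective cutoff frequency.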
This submission introduces a new graph convolutional operator based on heat diffusion, named heat kernel GCN (HKGCN). First, continuous-time heat diffusion on graphs is reviewed, whose solution is given by the heat equation in Eq. (6). Then, the authors show that the classical GCN can be approximated in the same formulation through discretization.
SP:e01809728bbe427d7b505f3f36d51b55a8e49d49
Generalizing Graph Convolutional Networks via Heat Kernel
This paper studies semi-supervised node classification on graph data. One powerful approach to the task is graph convolutional networks, which use discrete layers to perform information propagation. The paper generalizes GCNs into a continuous model via the heat kernel, where the proposed model uses continuous layers for information propagation. The authors conduct both theoretical and empirical analyses of the proposed model. Experiments on several standard datasets show promising results. Overall, the paper studies an important problem in graph machine learning and proposes a principled approach that combines graph neural networks with heat kernels, giving a new way of analyzing existing graph neural networks.
A Unified Paths Perspective for Pruning at Initialization
1 INTRODUCTION

A wealth of recent work has been dedicated to characterizing the training dynamics and generalization bounds of neural networks under a linearized approximation of the network that depends on its parameters at initialization (Jacot et al., 2018; Arora et al., 2019; Lee et al., 2019a; Woodworth et al., 2020). This approach makes use of the Neural Tangent Kernel (NTK), and under infinite-width assumptions, the training dynamics of gradient descent over the network become analytically tractable. In this paper, we make use of Neural Tangent Kernel theory with the goal of approximating the effects of various initialization pruning methods on the resulting training dynamics and performance of the network. Focusing on networks with homogeneous activation functions (ReLU, Leaky-ReLU, linear), we introduce a novel decomposition of the Neural Tangent Kernel which separates the effects of the network architecture from the effects of the data on the training dynamics of the network. We find the data-independent factor of the Neural Tangent Kernel to have a particularly nice structure: a symmetric matrix representing the covariance of path values in the network, which we term the Path Kernel. We subsequently show that the Path Kernel offers a data-independent approximation of the network's convergence dynamics during training. To validate the empirical benefits of this theoretical approach, we turn to the problem of pruning at initialization. While the problem of optimally pruning deep networks is nearly as old as deep networks themselves (Reed, 1993), interest in this problem has experienced a revival in recent years.
This revival is likely a product of a number of underlying factors, but much of the recent interest can be ascribed to the Lottery Ticket Hypothesis (Frankle & Carbin, 2018), which states that sparse, trainable networks, achieving task performance that matches or exceeds that of their dense counterparts, can be found at initialization. The Lottery Ticket Hypothesis implies that the over-parameterization of neural networks is incidental to finding a trainable solution, the topology of which often exists at initialization. However, finding these lottery ticket networks currently requires some amount of iterative re-training of the network at increasing levels of sparsity, which is inefficient and difficult to analyze theoretically. The resurgence of interest in optimal pruning has spurred the development of a number of recent approaches for pruning deep neural networks at initialization (Lee et al., 2019b; Liu & Zenke, 2020; Wang et al., 2020; Tanaka et al., 2020) in supervised, semi-supervised, and unsupervised settings, borrowing theoretical motivation from linearized training dynamics (Jacot et al., 2018), mean-field isometry (Saxe et al., 2013), and saliency (Dhamdhere et al., 2019). While each of these methods has its own theoretical motivation, little work has been dedicated to formally describing the effect of these pruning methods on the expected performance of the pruned network. Moreover, the diversity of theoretical motivations that give rise to these pruning methods makes it difficult to observe their similarities. In this paper, we observe that a number of initialization pruning approaches are implicitly dependent on the path covariance structure captured by the Path Kernel, which, in turn, affects the network's training dynamics.
We show that we can approximate these training dynamics in general, and our approximation results for a number of initialization pruning approaches suggest that it is possible to estimate, prior to training, the efficacy of a particular initialization pruning approach on a given architecture by investigating the eigenstructure of its Path Kernel. Motivated by our theoretical results and the unification of a number of initialization pruning methods in this Path Kernel framework, we investigate the close relationship between the SynFlow pruning approach (Tanaka et al., 2020) and our path decomposition. This leads us to suggest two new initialization pruning approaches which we predict to perform well under various assumptions on the stability of the Path Kernel and the input distribution of the data. We then validate these predictions empirically by comparing the performance of these pruning approaches across a number of network architectures. The insights on initialization pruning provided by the Path Kernel decomposition are only one of a number of potential applications that could benefit from this path-centric framework. Importantly, the covariance structure over paths encoded by the Path Kernel is general and may be computed at any point in time, not just at initialization. We anticipate that this representation will provide insight into other application areas such as model interpretation, model comparison, and transfer learning across domains. The sections of the paper proceed as follows. We start with a brief introduction to the Neural Tangent Kernel in Section 2 before introducing the Path Kernel decomposition in Section 3 and its relationship to approximations of network convergence properties. In Section 4, we reformulate three popular initialization pruning approaches in this path framework and introduce two additional initialization pruning approaches inspired by the path decomposition.
We validate these convergence approximations and the behavior of these pruning approaches in Section 5 and conclude with a discussion of the results and opportunities for future work.

2 THE NEURAL TANGENT KERNEL

Recent work by Jacot et al. (2018) has shown that the exact dynamics of infinite-width network outputs through gradient descent training correspond to kernel gradient descent in function space with respect to the Neural Tangent Kernel. More formally, for a neural network $f$ parameterized by $\theta$ and loss function $\ell : \mathbb{R}^K \times \mathbb{R}^K \to \mathbb{R}$, let $L = \sum_{(x \in X,\, y \in Y)} \ell(f(x, \theta_t), y)$ denote the empirical loss, where $X$ is the training set and $Y$ is the associated set of class labels. For multiple inputs, denote by $f(X, \theta) \in \mathbb{R}^{NK}$ the outputs of the network, where $K$ is the output dimension and $N$ is the number of training examples. In continuous-time gradient descent, the evolution of the parameters and outputs can be expressed as
$$\dot{\theta}_t = -\eta \, \nabla_\theta f(X, \theta_t)^\top \nabla_{f(X, \theta_t)} L \qquad (1)$$
$$\dot{f}(X, \theta_t) = \nabla_\theta f(X, \theta_t) \, \dot{\theta}_t = -\eta \, \Theta_t(X, X) \, \nabla_{f(X, \theta_t)} L \qquad (2)$$
where the matrix $\Theta_t(X, X) \in \mathbb{R}^{NK \times NK}$ is the Neural Tangent Kernel at time step $t$, defined as the covariance structure of the Jacobian of the parameters over all training samples:
$$\Theta_t(X, X) = \nabla_\theta f(X, \theta_t) \, \nabla_\theta f(X, \theta_t)^\top. \qquad (3)$$
For infinitely wide networks, the NTK exactly captures the output-space dynamics through training, and $\Theta_t(X, X)$ remains constant throughout. Lee et al. (2019a) have shown that neural networks of any depth tend to follow the linearized training dynamics predicted by the NTK. Moreover, we can approximate the outputs of the network linearly through a one-step Taylor expansion given input $x$ as
$$f^{\mathrm{lin}}(x, \theta_t) = f(x, \theta_0) + \nabla_\theta f(x, \theta_0) \, \omega_t \qquad (4)$$
where $\omega_t = \theta_t - \theta_0$ is the change in parameters from their initial values. The first term of Eq. (4) is constant, while the second term captures the dynamics of the initial outputs during training.
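As a sanity check on the definition in Eq. (3), the sketch below computes the empirical NTK of a tiny one-hidden-layer ReLU network with finite-difference Jacobians (the architecture, widths, and data are arbitrary toy choices of ours) and verifies the properties the definition implies, symmetry and positive semidefiniteness:

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny one-hidden-layer ReLU net with parameters flattened into a vector theta.
d, h, K = 3, 4, 2                        # input dim, hidden width, output dim
theta0 = rng.normal(size=d * h + h * K)  # [W1 (h x d), W2 (K x h)] flattened

def f(X, theta):
    W1 = theta[:d * h].reshape(h, d)
    W2 = theta[d * h:].reshape(K, h)
    return (W2 @ np.maximum(W1 @ X.T, 0.0)).T.ravel()   # outputs in R^{NK}

X = rng.normal(size=(5, d))              # N = 5 training inputs

# Finite-difference Jacobian of f with respect to theta, shape NK x m.
eps = 1e-6
m = theta0.size
J = np.zeros((X.shape[0] * K, m))
for j in range(m):
    e = np.zeros(m); e[j] = eps
    J[:, j] = (f(X, theta0 + e) - f(X, theta0 - e)) / (2 * eps)

Theta0 = J @ J.T                         # empirical NTK at t = 0, Eq. (3)
```

Being a Gram matrix of parameter gradients, the empirical NTK is symmetric and positive semidefinite by construction, which the assertions below confirm up to finite-difference error.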
Substituting the linearized outputs $f^{\mathrm{lin}}$ for $f$ in Eqs. (1) and (2), the dynamics of the linearized gradient flow become
$$\dot{\omega}_t = -\eta \, \nabla_\theta f(X, \theta_0)^\top \nabla_{f^{\mathrm{lin}}(X, \theta_t)} L \qquad (5)$$
$$\dot{f}^{\mathrm{lin}}(x, \theta_t) = -\eta \, \Theta_0(x, X) \, \nabla_{f^{\mathrm{lin}}(X, \theta_t)} L \qquad (6)$$
Under MSE loss, the above ODEs have the closed-form solutions
$$\omega_t = -\nabla_\theta f(X, \theta_0)^\top \Theta_0^{-1} \left(I - e^{-\eta \Theta_0 t}\right) \left(f(X, \theta_0) - Y\right) \qquad (7)$$
$$f^{\mathrm{lin}}(X, \theta_t) = \left(I - e^{-\eta \Theta_0 t}\right) Y + e^{-\eta \Theta_0 t} f(X, \theta_0). \qquad (8)$$
In other words, through the tangent kernel and the initial outputs of the network, we can compute the training convergence of a linearized neural network before running any gradient descent steps. We will show in Section 3 that, through the Path Kernel, we can reliably approximate the linearized convergence rate of the network in the absence of data and without computing the full NTK.

3 THE PATH KERNEL

We now provide a reformulation of the output behavior of networks in terms of activated paths and their values. This reformulation yields a unique decomposition of the NTK that separates the data-dependent output dynamics of the network from those dependent on the architecture and initialization. The reformulation of network behavior in terms of active paths is motivated by Meng et al. (2019), wherein the authors show that gradient descent on networks with homogeneous activations may be computed completely in path space. The decomposition provided in this section makes explicit the relationships between the pruning-at-initialization approaches described in Section 4 and allows for estimation of the convergence behavior of networks at initialization and of how pruning affects this behavior. Let $\theta \in \mathbb{R}^m$ denote the network parameters in vector form and let $x \in \mathbb{R}^d$ denote an input. Assume the network has $K$ output nodes. We define a path from input to output as a binary vector $p$ such that $p_j = 1$ when $\theta_j$ is an edge along path $p$, and $p_j = 0$ otherwise.
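The closed-form trajectory in Eq. (8) can be checked end-to-end on a model that is exactly linear in its parameters, $f(X, \theta) = X\theta$, for which the linearization is exact and $\Theta_0 = XX^\top$ is constant. The sketch below (data, learning rate, and horizon are arbitrary choices of ours; we use the $\frac{1}{2}\lVert f - Y \rVert^2$ loss convention so that $\nabla_f L = f - Y$) compares Eq. (8) against explicitly integrated gradient flow:

```python
import numpy as np

rng = np.random.default_rng(3)

N, d = 6, 10
X = rng.normal(size=(N, d))
Y = rng.normal(size=N)
theta = rng.normal(size=d)
eta, T, steps = 0.05, 2.0, 20000

f0 = X @ theta
Theta0 = X @ X.T                          # the NTK of a linear model is constant

# Closed-form linearized outputs at time T, Eq. (8), via a spectral expm.
evals, evecs = np.linalg.eigh(Theta0)
expm = evecs @ np.diag(np.exp(-eta * evals * T)) @ evecs.T
f_closed = (np.eye(N) - expm) @ Y + expm @ f0

# Explicit Euler integration of gradient flow on L = 0.5 * ||X theta - Y||^2.
dt = T / steps
for _ in range(steps):
    theta = theta - dt * eta * X.T @ (X @ theta - Y)
f_flow = X @ theta
```

With a small enough step size, the integrated outputs coincide with the closed-form prediction, and the residual to the labels shrinks monotonically along every eigendirection of $\Theta_0$.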
Denote by P the enumerable set of all paths and by P_{s→k} the subset of paths that go from input node s to output node k. Let P = |P| be the number of paths. Given the enumeration in P, we will abuse notation slightly and refer to paths p by their index p in P. The value of a path p can be calculated as a product of parameters with binary exponents, v_p(θ) = ∏_{j=1}^m θ_j^{p_j}, so that v_p is the product of the weights θ_j that lie on path p. The activation status of a path p is a_p(x, θ) = ∏_{j : p_j=1} I(o_{p_j}(x, θ) > 0), where o_{p_j} is the output of the hidden node which path p passes through immediately after parameter j. We can then define the output of the network at node k as

    f_k(x, θ) = Σ_{s=1}^d Σ_{p∈P_{s→k}} v_p(θ) a_p(x, θ) x_s.    (9)

The derivative of a_p(x, θ) with respect to θ goes to zero in expectation. Using the chain rule, we can break the derivative of the output of the network with respect to θ into two parts: the partial derivative of the output with respect to the path values, and the partial derivative of the path values with respect to the parameters:

    ∇_θ f(x, θ) = ∂f(x,θ)/∂θ = [∂f(x,θ)/∂v(θ)] [∂v(θ)/∂θ] = J^f_v(x) J^v_θ.    (10)

Note the inner-product structure formed over paths, with ∂f(x,θ)/∂v(θ) = J^f_v(x) ∈ R^{K×P} and ∂v(θ)/∂θ = J^v_θ ∈ R^{P×m}. The change in output with respect to path values depends only on the activation status of each path leading to the output and the input to that path. Each entry of this matrix has value

    (J^f_v(x))_{k,p} = Σ_{s=1}^d I(p ∈ P_{s→k}) a_p(x, θ) x_s.

Similarly, the change in path values with respect to parameters is a function only of the parameters and their relational structure through paths:

    (J^v_θ)_{p,j} = v_p / θ_j if p_j = 1, and 0 otherwise.
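Equation 9 can be verified directly on a tiny one-hidden-layer ReLU network, where each path is an (input node, hidden node) pair (an illustrative sketch with made-up sizes, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)
d, h = 3, 4
W1 = rng.normal(size=(h, d))
W2 = rng.normal(size=(1, h))
x = rng.normal(size=d)

z = W1 @ x                                  # hidden pre-activations
forward = float(W2 @ np.maximum(z, 0.0))    # ordinary forward pass

# Equation 9: sum over paths p = (s, j) with value v_p = W1[j,s] * W2[0,j],
# activation a_p = 1[z_j > 0], and source input x_s.
path_sum = 0.0
for s in range(d):
    for j in range(h):
        v_p = W1[j, s] * W2[0, j]
        a_p = float(z[j] > 0)
        path_sum += v_p * a_p * x[s]

assert np.isclose(forward, path_sum)
```

The agreement follows because relu(z_j) = z_j · 1[z_j > 0], so factoring the forward pass over hidden units recovers exactly the path sum.
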
Given this reparameterization of the output of the network in terms of activated paths and their values, we see that the NTK decomposes along the same structure:

    Θ(x, x) = ∇_θ f(x, θ) ∇_θ f(x, θ)^⊤ = J^f_v(x) J^v_θ (J^v_θ)^⊤ J^f_v(x)^⊤ = J^f_v(x) Π_θ J^f_v(x)^⊤,

where Π_θ = J^v_θ (J^v_θ)^⊤ is the Path Kernel. The Path Kernel Π_θ is positive semidefinite with maximal values on the diagonal, Π_θ(p, p) = Σ_{j : p_j=1} (v_p(θ)/θ_j)^2, and off-diagonal elements Π_θ(p, p′) = Σ_{j : p_j = p′_j = 1} (v_p(θ)/θ_j)(v_{p′}(θ)/θ_j). We can view Π_θ as a covariance matrix on the weighted paths defined by the network architecture and parameter initialization. Note that J^f_v(x) entirely captures the dependence of f on the input by choosing which paths are active and re-weighting by the input, while Π_θ is completely determined by the architecture and initialization. We can therefore expand the one-sample NTK to the entire training set through the appropriate expansion of dimensions, so that J^f_v(X) ∈ R^{NK×P}. In the following section, we will show that the Path Kernel decomposition of the NTK allows us to approximate, at initialization, the convergence behavior of the network during training. Additionally, we find that we can compute the trace of the Path Kernel efficiently through an implicit computation over the parameter gradients of a particular loss function. This trace computation serves as an approximation to the full eigenstructure of the NTK.
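On a network small enough to enumerate every path, both factors of the decomposition can be built explicitly and checked against the direct computation (our own sketch for a one-hidden-layer ReLU network with one output; all sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
d, h = 2, 3                       # tiny sizes so paths can be enumerated
W1 = rng.normal(size=(h, d))
W2 = rng.normal(size=(1, h))
x = rng.normal(size=d)
z = W1 @ x

# Enumerate paths p = (s, j): input node s -> hidden node j -> output.
paths = [(s, j) for s in range(d) for j in range(h)]
P, m = len(paths), h * d + h      # parameter vector: W1 flattened, then W2

# J^f_v: (K x P), entry a_p(x) * x_s for the path's source node s.
Jfv = np.array([[float(z[j] > 0) * x[s] for (s, j) in paths]])

# J^v_theta: (P x m), entry v_p / theta_j on the path's edges, else 0.
Jvth = np.zeros((P, m))
for p, (s, j) in enumerate(paths):
    v_p = W1[j, s] * W2[0, j]
    Jvth[p, j * d + s] = v_p / W1[j, s]     # edge in W1
    Jvth[p, h * d + j] = v_p / W2[0, j]     # edge in W2

# Equation 10: the parameter gradient factors through path space.
grad_direct = np.concatenate([np.outer(W2[0] * (z > 0), x).ravel(),
                              np.maximum(z, 0.0)])
assert np.allclose(Jfv @ Jvth, grad_direct)

# NTK decomposition: Theta(x, x) = J^f_v(x) Pi_theta J^f_v(x)^T.
Pi = Jvth @ Jvth.T                # the Path Kernel
assert np.allclose(Jfv @ Pi @ Jfv.T, grad_direct @ grad_direct)
```

Note that Π_θ here is built purely from the weights, while J^f_v depends only on the input and the activation pattern, matching the data/architecture split described above.
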
The paper studies the problem of neural network pruning at initialization through the lens of neural tangent kernels (NTK). As a result, the paper delivers a unified perspective on SNIP, GRASP, and SynFlow. Based on the framework, the paper provides a method to approximate the convergence dynamics of pruned models. The paper also motivates a SynFlow-variant from the theoretical framework.
SP:c1d77b1cc1c26d9d596eea8d19b4b3607eb218b9
A Unified Paths Perspective for Pruning at Initialization
1 INTRODUCTION . A wealth of recent work has been dedicated to characterizing the training dynamics and generalization bounds of neural networks under a linearized approximation of the network depending on its parameters at initialization ( Jacot et al. , 2018 ; Arora et al. , 2019 ; Lee et al. , 2019a ; Woodworth et al. , 2020 ) . This approach makes use of the Neural Tangent Kernel , and under infinite width assumptions , the training dynamics of gradient descent over the network become analytically tractable . In this paper , we make use of the Neural Tangent Kernel theory with the goal of approximating the effects of various initialization pruning methods on the resulting training dynamics and performance of the network . Focusing on networks with homogeneous activation functions ( ReLU , Leaky-ReLU , Linear ) , we introduce a novel decomposition of the Neural Tangent Kernel which separates the effects of network architecture from effects due to the data on the training dynamics of the network . We find the data-independent factor of the Neural Tangent Kernel to have a particularly nice structure as a symmetric matrix representing the covariance of path values in the network which we term the Path Kernel . We subsequently show that the Path Kernel offers a data-independent approximation of the network ’ s convergence dynamics during training . To validate the empirical benefits of this theoretical approach , we turn to the problem of pruning at initialization . While the problem of optimally pruning deep networks is nearly as old as deep networks themselves ( Reed , 1993 ) , interest in this problem has experienced a revival in recent years . 
This revival is likely a product of a number of underlying factors, but much of the recent interest can be ascribed to the Lottery Ticket Hypothesis (Frankle & Carbin, 2018), which states that sparse, trainable networks, achieving task performance that matches or exceeds that of their dense counterparts, can be found at initialization. The Lottery Ticket Hypothesis implies that the over-parameterization of neural networks is incidental in finding a trainable solution, the topology of which often exists at initialization. However, finding these lottery ticket networks currently requires some amount of iterative re-training of the network at increasing levels of sparsity, which is inefficient and difficult to analyze theoretically. The resurgence of interest in optimal pruning has spurred the development of a number of recent approaches for pruning deep neural networks at initialization (Lee et al., 2019b; Liu & Zenke, 2020; Wang et al., 2020; Tanaka et al., 2020) in supervised, semi-supervised, and unsupervised settings, borrowing theoretical motivation from linearized training dynamics (Jacot et al., 2018), mean-field isometry (Saxe et al., 2013), and saliency (Dhamdhere et al., 2019). While each of these methods has its own theoretical motivation, little work has been dedicated to formally describing the effect of these pruning methods on the expected performance of the pruned network. Moreover, the diversity in the theoretical motivations that give rise to these pruning methods makes it difficult to observe their similarities. In this paper, we observe that a number of initialization pruning approaches are implicitly dependent on the path covariance structure captured by the Path Kernel, which, in turn, affects the network's training dynamics.
We show that we can approximate these training dynamics in general, and our approximation results for a number of initialization pruning approaches suggest that it is possible to estimate, prior to training, the efficacy of a particular initialization pruning approach on a given architecture by investigating the eigenstructure of its Path Kernel. Motivated by our theoretical results and the unification of a number of initialization pruning methods in this Path Kernel framework, we investigate the close relationship between the SynFlow (Tanaka et al., 2020) pruning approach and our path decomposition. This leads to our suggestion of two new initialization pruning approaches, which we predict to perform well under various assumptions on the stability of the Path Kernel and the input distribution of the data. We then validate these predictions empirically by comparing the performance of these pruning approaches across a number of network architectures. The insights on initialization pruning provided by the Path Kernel decomposition are only one of a number of potential applications that could benefit from this path-centric framework. Importantly, the covariance structure over paths encoded by the Path Kernel is general and may be computed at any point in time, not just at initialization. We anticipate that this representation will provide insight into other application areas such as model interpretation, model comparison, and transfer learning across domains. The sections of the paper proceed as follows. We start with a brief introduction to the Neural Tangent Kernel in Section 2 before introducing the Path Kernel decomposition in Section 3, along with its relationship to approximations of the network's convergence properties. In Section 4, we reformulate three popular initialization pruning approaches in this path framework and introduce two additional initialization pruning approaches inspired by the path decomposition.
In this paper, the authors propose a new kernel, named the Path Kernel, to understand deep neural network training. The key idea is to reparameterize the network with respect to the active paths in the network. In this way, they can decompose the Neural Tangent Kernel into data-dependent and architecture-dependent pieces. The authors rewrite the formulations of existing pruning-at-initialization methods with respect to their Path Kernel and provide some new understanding.
GINN: Fast GPU-TEE Based Integrity for Neural Network Training
1 INTRODUCTION

Every day, Deep Learning (DL) is incorporated into new aspects of society. As a result, numerous industries increasingly rely on DL models to make decisions, in domains ranging from computer vision to natural language processing. The training process for these DL models requires a substantial quantity of computational resources (often in a distributed fashion), which traditional CPUs are unable to fulfill. Hence, special hardware with massive parallel computing capabilities, such as GPUs, is often utilized (Shi et al., 2016). At the same time, the DL model building process is increasingly outsourced to the cloud. This is natural, as applying cloud services (e.g., Amazon EC2, Microsoft Azure, or Google Cloud) for DL training can be more fiscally palatable for companies by enabling them to focus on the software aspect of their products. Nevertheless, such outsourcing raises numerous concerns with respect to the privacy and integrity of the learned models. In recognition of the privacy and integrity concerns around DL (and Machine Learning (ML) in general), a considerable amount of research has been dedicated to applied cryptography, in three general areas: 1) Multi-Party Computation (MPC) (e.g., Mohassel & Zhang (2017)), 2) Homomorphic Encryption (HE) (e.g., Gilad-Bachrach et al. (2016)), and 3) Trusted Execution Environments (TEE) (e.g., Hunt et al. (2018); Hynes et al. (2018)). However, the majority of these investigations are limited in that: 1) they are only applicable to simple, shallow network models, 2) they are evaluated with datasets that have a small number of records (such as MNIST (LeCun & Cortes, 2010) and CIFAR10 (Krizhevsky et al.)), and 3) they incur a substantial amount of overhead that is unacceptable for real-life DL training workloads.
In an effort to mitigate some of these problems and securely move from CPUs to GPUs, Slalom (Tramèr & Boneh, 2019) mainly focuses on computational integrity in the test phase, depending on the application context. It can also support enhanced data privacy, however, at a much greater performance cost. To address these limitations, we introduce GINN (see Figure 1), a framework for integrity-preserving learning as a service that provides integrity guarantees for outsourced DL model training in TEEs. We assume that only the TEE running in the cloud is trusted, and that all other resources, such as GPUs, can be controlled by an attacker to launch an attack (e.g., insert a trojan). In this context, our goal is to support realistic deep learning training workloads while ensuring data and model integrity. To achieve this goal, we focus on settings where maintaining the integrity of the learning process is critical, while the training data may not contain privacy-sensitive information. For example, we may want to build a traffic sign detection model on public traffic sign images and still wish to prevent attacks that insert a trojan during the training phase. Furthermore, we want to provide assurances that the model is trained on the specified dataset with known parameters, so that the performance of the model can be replicated and audited for accountability and integrity. The trivial approach of executing the entire learning process inside a TEE is not scalable, since TEEs are much slower than GPUs. Furthermore, even the existing performance improvement techniques (e.g., the random matrix verification provided in Tramèr & Boneh (2019)) are not enough to scale up to large DL model learning settings. To alleviate the TEE bottleneck, we propose incorporating random verification of the computation steps. This strategy is based on the observation that it is unnecessary to verify all of the GPU's computation steps.
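This observation can be made quantitative with a back-of-the-envelope calculation (our own sketch, not the paper's exact analysis): if the TEE independently verifies each SGD step with probability q, and a successful attack requires tampering with at least k steps, the attacker escapes detection only if none of the k tampered steps is sampled for verification.

```python
import random

def detection_probability(q, k):
    """P(at least one of k tampered steps is verified) = 1 - (1 - q)^k."""
    return 1.0 - (1.0 - q) ** k

# Verifying only 10% of steps already catches an attacker who must
# tamper with ~44 steps with probability greater than 0.99.
assert detection_probability(0.10, 44) > 0.99

# Monte Carlo check of the closed form.
random.seed(0)
trials = 20000
caught = sum(any(random.random() < 0.10 for _ in range(44)) for _ in range(trials))
assert abs(caught / trials - detection_probability(0.10, 44)) < 0.01
```

This is why forcing the attacker to spread the attack over many steps (e.g., via gradient clipping, as discussed below) is the key to making sparse verification effective.
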
Rather, we only need to verify occasionally to catch any deviation with very high likelihood. Given that random verification may itself be insufficient (theoretically, an attacker can launch a successful attack by modifying only a single unconstrained gradient update), we further show how parts of the DL hyperparameter setting process, such as the clipping rate, should be modified to prevent single-step attacks and to require a larger number of malicious updates by an attacker that controls the GPU. Simply put, GINN limits the amount of change an adversary can inflict on a model through a single SGD step. As a consequence, the adversary is forced to keep attacking while being randomly verified by the TEE. Using state-of-the-art backdoor attacks, we illustrate that the random verification technique can detect attacks with high probability (e.g., 0.99) while enabling 2x-20x performance gains compared to pure TEE-based solutions. The specific contributions of this paper are as follows:

• We introduce the first approach to support integrity-preserving DL training by random verification of stochastic gradient descent (SGD) steps inside a TEE, ensuring the integrity of the training pipeline (data, parameters, computation function, etc.) with high probability.
• We illustrate how gradient clipping can be used as a defensive measure against single (or infrequent) step attacks in combination with random verification.
• We show the effectiveness of our TEE random verification and gradient clipping through extensive experimentation on DNN backdoor attacks.

2 BACKGROUND AND RELATED WORKS

Our system combines deep learning training on specialized fast hardware, such as Graphical Processing Units (GPUs), with an Intel Software Guard Extensions (SGX) based TEE to ensure the produced model's integrity. Details on SGD training and gradient clipping are provided in Appendices B and C.

2.1 ATTACKS ON DNN MODELS IN TRAINING PHASE
Attacks on DNN models can be realized during both the training and test phases. However, GINN is concerned with integrity/accountability issues during the training phase of DNN models; attacks related to testing are out of the scope of this paper, since test-time attacks have been addressed before (e.g., Slalom (Tramèr & Boneh, 2019)). In the literature, particularly in the computer vision domain, targeted trojan attacks on DNN classification models have become a real concern as deep learning has grown in adoption. These attacks alter the prediction of models if a specific condition in the input is met. These conditions may be feature-based (Gu et al., 2017; Liu et al., 2017; Turner et al., 2018) or instance-based (Chen et al., 2017; Shafahi et al., 2018). Recently, trojan attacks have been extended to Reinforcement Learning (RL) and text classification models (Panagiota Kiourti, 2019; Sun, 2020). In practice, these attacks are implemented by manipulating samples during training through data poisoning, for instance, stamping images with a pattern and modifying their labels. Interestingly, these models provide competitive classification test accuracy compared to clean models (i.e., models that have not been attacked). As a consequence, it is non-trivial to distinguish trojaned models from non-trojaned ones based on model accuracy alone. To make matters worse, even if the model owner were aware of examples of the trojan trigger pattern, the owner would need to patch the model through re-training to dampen the efficacy of the trigger pattern, and retraining does not always guarantee complete removal of the trojan behavior from the model. To date, various techniques have been proposed to diagnose and mitigate trojaned models. However, all approaches are either based on unrealistic assumptions or are excessively costly. For instance, Neural Cleanse Wang et al.
(2019) requires access to a sizable sample of clean inputs to reverse-engineer the backdoor and has been shown to be successful only for trigger patterns of relatively small size. ABS (Liu et al., 2019) improves upon Neural Cleanse in that it requires a significantly smaller number of samples; however, it assumes that the responsible trojan neurons can activate trojan behavior independently of each other, which is unlikely to be true in practice. Attacking the training pipeline to inject a trojan into the final model is the cheapest, and thus likely the most desirable, form of attack for real-world adversaries to launch. As such, throughout this work we mainly focus on showing our methods' effectiveness in preventing this type of attack. It should be noted that our method is orthogonal to the attack type and is sufficiently generic to catch any continuous attack during the training of a DNN model. GINN relies upon proactive measures during training, as opposed to post-training or deployment-time methods, to assess the health of a DNN model. As we explain later in Section 3, we assume that the initial training dataset is provided by an honest user and is free of manipulation. With this as a basis, GINN limits the amount of change an adversary can inflict on a model through a single SGD step. As a consequence, the adversary is forced to keep attacking while being randomly verified by the TEE.

2.2 INTEGRITY FOR DNN TRAINING

GINN's main goal is to enable a high-integrity training pipeline, so that end users are assured that the model is built on the specified dataset using the specified parameters, without modification. Thus, the final model users know who built the model, what dataset was used for training, and what algorithms were put in place for building the model. If, at any point during training, GINN detects a deviation from the specified execution, it will not sign the final model to ascertain its validity.
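The defensive role of clipping can be sketched numerically (an illustration with invented numbers, not GINN's implementation): clipping each gradient to norm at most c bounds the parameter change from one SGD step by η·c, so a single malicious update cannot move the model far and the attacker is forced into many tamperable, and therefore verifiable, steps.

```python
import numpy as np

def clip_by_norm(g, c):
    """Rescale gradient g so its L2 norm is at most c."""
    n = np.linalg.norm(g)
    return g if n <= c else g * (c / n)

eta, c = 0.1, 1.0
adversarial_grad = np.full(1000, 50.0)   # a huge, malicious gradient
update = eta * clip_by_norm(adversarial_grad, c)

# A single tampered step can move the parameters by at most eta * c.
assert np.linalg.norm(update) <= eta * c + 1e-12
```
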
Tramèr & Boneh (2019) took a first step towards achieving both fast and reliable execution in the test phase but neglected the training phase. The training phase is far more computationally demanding than the test phase, so verification of all steps in training requires a substantially longer time. First, since the parameters keep changing, we cannot benefit from pre-computation. Second, the backward pass involves computing gradients for both the inputs and the parameters and takes longer than the forward pass. Despite these hurdles, as our investigation shows, it may not be necessary to verify every step to achieve integrity guarantees with high probability.

2.3 INTEL SGX

SGX (Costan & Devadas, 2016) is an example of a common TEE that is available in many modern-day computers. As outlined in Table 2, it provides a secluded, hardware-reserved memory area, namely the processor reserved memory (PRM), that is kept private (i.e., not readable in plaintext) from the host or any privileged process, and is free from direct, undetected tampering. It also supports remote attestation, so that users can attest the platform and the code running within the enclave before provisioning their secrets to a remote server. Calls from routines that transition to/from the enclave are handled through predefined entry points, called Ecalls/Ocalls, which must be defined in advance, before building the enclave image. While SGX provides security and privacy for numerous applications (e.g., Priebe et al. (2018); Shaon et al. (2017); Kunkel et al. (2019)), due to its limited memory and computational capacity, directly running unmodified applications inside SGX can induce a significant performance hit. This is especially the case for applications that require large amounts of memory, such as training DNNs.
The paper presents new techniques to enable secure neural network training in the TEE+GPU paradigm. This is a natural extension of previous work like Slalom, which only handles the inference case. The authors propose a two-step approach. First, they clip the gradients during training to force the attacker to insert multiple deviations in order to influence the model. They then randomly verify the integrity of a subset of the gradient updates to check for tampering. Combining these two ideas, the authors demonstrate a 2-20x improvement over a TEE-only benchmark.
SP:e3455519fcc3ce8644fa55c52771b5414a571026
GINN: Fast GPU-TEE Based Integrity for Neural Network Training
1 INTRODUCTION . Every day , Deep Learning ( DL ) is incorporated into some new aspects of the society . As a result , numerous industries increasingly rely on DL models to make decisions , ranging from computer vision to natural language processing . The training process for these DL models requires a substantial quantity of computational resources ( often in a distributed fashion ) for training , which traditional CPUs are unable to fulfill . Hence , special hardware , with massive parallel computing capabilities such as GPUs , is often utilized Shi et al . ( 2016 ) . At the same time , the DL model building process is increasingly outsourced to the cloud . This is natural , as applying cloud services ( e.g. , Amazon EC2 , Microsoft Azure or Google Cloud ) for DL training can be more fiscally palatable for companies by enabling them to focus on the software aspect of their products . Nevertheless , such outsourcing raises numerous concerns with respect to the privacy and integrity of the learned models . In recognition of the privacy and integrity concerns around DL ( and Machine Learning ( ML ) in general ) , a considerable amount of research has been dedicated to applied cryptography , in three general areas : 1 ) Multi-Party Computation ( MPC ) ( e.g. , Mohassel & Zhang ( 2017 ) ) , 2 ) Homomorphic Encryption ( HE ) ( e.g. , Gilad-Bachrach et al . ( 2016 ) ) , and 3 ) Trusted Execution Environment ( TEE ) ( e.g. , Hunt et al . ( 2018 ) ; Hynes et al . ( 2018 ) ) . However , the majority of these investigations are limited in that : 1 ) they are only applicable to simple shallow network models , 2 ) they are evaluated with datasets that have a small number of records ( such as MNIST LeCun & Cortes ( 2010 ) and CIFAR10 Krizhevsky et al . ) , and 3 ) they incur a substantial amount of overhead that is unacceptable for real-life DL training workloads . 
In their effort to mitigate some of these problems , and securely move from CPUs to GPUs , Slalom Tramèr & Boneh ( 2019 ) mainly focus on the computational integrity at the test phase while depending on the application context . It can also support enhanced data privacy , however , at a much greater performance cost . To address these limitations , we introduce GINN ( See Figure 1 ) ; a framework for integritypreserving learning as a service that provides integrity guarantees in outsourced DL model training in TEEs . We assume that only the TEE running in the cloud is trusted , and all the other resources such as GPUs can be controlled by an attacker to launch an attack ( e.g. , insert a trojan ) . In this context , our goal is to support the realistic deep learning training workloads while ensuring data and model integrity . To achieve this goal , we focus on the settings where maintaining the learning process ’ s integrity is critical , while the training data may not contain privacy sensitive information . For example , we may want to build a traffic sign detection model on public traffic sign images and may still like to prevent attacks that can insert trojan during the training phase . Furthermore , we want to provide assurances that the model is trained on the specified dataset , with known parameters so that the performance of the model can be replicated and audited for accountability and integrity . The trivial approach of executing the entire learning process inside a TEE is not scalable since TEEs are much slower compared to GPUs . Furthermore , even the existing performance improvement techniques ( e.g. , random matrix verification provided in Tramèr & Boneh ( 2019 ) ) are not enough to scale up to large DL model learning settings . To alleviate the TEE bottleneck , we propose incorporating random verification of the computation steps . This strategy is based on the observation that it is unnecessary to verify all of the GPU ’ s computation steps . 
Rather, we only need to verify occasionally to catch any deviation with very high likelihood. Since random verification alone may be insufficient (theoretically, an attacker can launch a successful attack by modifying only a single unconstrained gradient update), we further show how parts of the DL hyperparameter setting process, such as the clipping rate, should be modified to prevent single-step attacks and to force an attacker that controls the GPU to make a larger number of malicious updates. Simply put, GINN limits the amount of change an adversary can inflict on a model through a single SGD step. As a consequence, the adversary is forced to keep attacking while being randomly verified by the TEE. Using state-of-the-art backdoor attacks, we illustrate that the random verification technique can detect attacks with high probability (e.g., 0.99) while enabling 2x-20x performance gains compared to pure TEE-based solutions. The specific contributions of this paper are as follows:
• We introduce the first approach to support integrity-preserving DL training by random verification of stochastic gradient descent (SGD) steps inside a TEE, ensuring the integrity of the training pipeline (data, parameters, computation function, etc.) with high probability.
• We illustrate how gradient clipping can be used as a defensive measure against single (or infrequent) step attacks in combination with random verification.
• We show the effectiveness of our TEE random verification and gradient clipping through extensive experimentation with DNN backdoor attacks.
2 BACKGROUND AND RELATED WORKS. Our system combines deep learning training on specialized fast hardware, such as Graphical Processing Units (GPUs), with an Intel Software Guard Extensions (SGX) based TEE to ensure the produced model's integrity. Details on SGD training and gradient clipping are provided in Appendices B and C.
2.1 ATTACKS ON DNN MODELS IN TRAINING PHASE.
Attacks on DNN models can be carried out during both the training and test phases. However, GINN is concerned with integrity/accountability issues during the training phase of DNN models; test-time attacks are out of the scope of this paper, since they have been addressed before (e.g., Slalom, Tramèr & Boneh (2019)). In the literature, particularly in the computer vision domain, targeted trojan attacks on DNN classification models have become a real concern as deep learning has grown in adoption. These attacks alter the prediction of a model when a specific condition in the input is met. These conditions may be feature-based (Gu et al., 2017; Liu et al., 2017; Turner et al., 2018) or instance-based (Chen et al., 2017; Shafahi et al., 2018). Recently, trojan attacks have been extended to Reinforcement Learning (RL) and text classification models (Panagiota Kiourti, 2019; Sun, 2020). In practice, these attacks are implemented by manipulating samples during training through data poisoning; for instance, stamping images with a pattern and modifying their labels. Interestingly, the resulting models provide classification test accuracy competitive with clean models (i.e., models that have not been attacked). As a consequence, it is non-trivial to distinguish trojaned models from non-trojaned ones based on model accuracy alone. To make matters worse, even if the model owner were aware of examples of the trojan trigger pattern, the owner would need to patch the model through re-training to dampen the efficacy of the trigger, and retraining does not always guarantee complete removal of the trojan behavior from the model. To date, various techniques have been proposed to diagnose and mitigate trojaned models. However, all approaches are either based on unrealistic assumptions or are excessively costly. For instance, Neural Cleanse, Wang et al.
(2019), requires access to a sizable sample of clean inputs to reverse-engineer the backdoor, and has been shown to be successful only for trigger patterns of relatively small size. ABS (Liu et al., 2019) improves upon Neural Cleanse in that it requires a significantly smaller number of samples; however, it assumes that the responsible trojan neurons can activate trojan behavior independently of each other, which is unlikely to be true in practice. Attacking the training pipeline to inject trojans into the final model is the cheapest, and thus likely the most desirable, form of attack for real-world adversaries to launch. As such, throughout this work we mainly focus on showing our methods' effectiveness in preventing this type of attack. It should be noted that our method is orthogonal to the attack type and is sufficiently generic to catch any continuous attack during the training of a DNN model. GINN relies on proactive training-time checks, as opposed to post-training or deployment-time methods, to assess the health of a DNN model. As we explain later in Section 3, we assume that the initial training dataset is provided by an honest user and is free of manipulation. With this as a basis, GINN limits the amount of change an adversary can inflict on a model through a single SGD step. As a consequence, the adversary is forced to keep attacking while being randomly verified by the TEE.
2.2 INTEGRITY FOR DNN TRAINING. GINN's main goal is to enable a high-integrity training pipeline, so that end users are assured that the model is built on the specified dataset, using the specified parameters, without modification. Thus, the final model users know who built the model, what dataset was used for training, and what algorithms were used to build the model. If, at any point during training, GINN detects a deviation from the specified execution, it will not sign the final model to ascertain its validity.
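As a rough illustration of why bounding each step and verifying at random work together (this is our own back-of-the-envelope sketch, not code from the paper), the two ingredients can be written in a few lines. `clip_update` and `detection_probability` are hypothetical names; the probability formula assumes each step is verified independently with probability `p_verify`.

```python
import math

def clip_update(grads, max_norm):
    """Rescale a gradient so its L2 norm never exceeds max_norm,
    bounding how far any single SGD step can move the model."""
    norm = math.sqrt(sum(g * g for g in grads))
    if norm > max_norm:
        return [g * (max_norm / norm) for g in grads]
    return list(grads)

def detection_probability(p_verify, k):
    """Chance that at least one of k malicious steps is re-executed
    by the TEE when each step is verified with probability p_verify."""
    return 1.0 - (1.0 - p_verify) ** k
```

Under these assumptions, with a 5% verification rate an attacker who is forced by clipping to spread a trojan over 90 updates is caught with probability about 0.99, while the TEE re-executes only one step in twenty.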
Tramèr & Boneh (2019) took a first step towards achieving both fast and reliable execution in the test phase, but neglected the training phase. The training phase is far more computationally demanding than the test phase, so verifying all steps in training requires substantially more time. First, since the parameters keep changing, we cannot benefit from pre-computation. Second, the backward pass involves computing gradients for both the inputs and the parameters, and takes longer than the forward pass. Despite these hurdles, as our investigation shows, it may not be necessary to verify every step to achieve integrity guarantees with high probability.
2.3 INTEL SGX. SGX (Costan & Devadas, 2016) is an example of a common TEE that is available in many modern-day computers. As outlined in Table 2, it provides a secluded hardware-reserved area, namely the processor reserved memory (PRM), that is kept private from the host or any privileged process (i.e., it is not readable in plaintext) and is free from direct undetected tampering. It also supports remote attestation, so that users can attest the platform and the code running within the enclave before provisioning their secrets to a remote server. Calls that transition to/from the enclave are handled through predefined entry points, called Ecalls/Ocalls, which must be defined in advance, before building the enclave image. While SGX provides security and privacy for numerous applications (e.g., Priebe et al. (2018); Shaon et al. (2017); Kunkel et al. (2019)), its limited memory and computational capacity mean that directly running unmodified applications inside SGX can induce a significant performance hit. This is especially the case for applications that require large amounts of memory, such as training DNNs.
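The random-verification strategy sketched in the introduction can be simulated in plain Python. This is our own minimal sketch, not the paper's implementation: `sgd_step`, `digest`, and `train_with_random_verification` are hypothetical names, and in a real deployment the re-execution and hashing would happen inside the SGX enclave while the main update runs on the GPU.

```python
import hashlib
import random

def sgd_step(params, grads, lr=0.1):
    """One plain SGD update (stands in for the untrusted GPU's computation)."""
    return [p - lr * g for p, g in zip(params, grads)]

def digest(params):
    """Hash the parameter vector so the TEE can compare results cheaply."""
    return hashlib.sha256(repr([round(p, 12) for p in params]).encode()).hexdigest()

def train_with_random_verification(params, grad_stream, p_verify=0.2, seed=0):
    """Re-execute a random fraction of steps inside the (simulated) TEE
    and flag any step whose result disagrees with the GPU's."""
    rng = random.Random(seed)
    for step, grads in enumerate(grad_stream):
        gpu_result = sgd_step(params, grads)      # untrusted worker
        if rng.random() < p_verify:
            tee_result = sgd_step(params, grads)  # trusted re-execution
            if digest(gpu_result) != digest(tee_result):
                raise RuntimeError(f"integrity violation at step {step}")
        params = gpu_result
    return params
```

Because only a fraction `p_verify` of steps incur the slow trusted re-execution, the expected TEE workload drops proportionally, which is the source of the 2x-20x speedups the paper reports over running everything in the enclave.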
The paper targets security challenges in training deep neural networks. Since existing solutions can hardly scale up to support realistic DNN model training workloads, the authors propose GINN, which supports integrity-preserving DL training by randomly verifying stochastic gradient descent steps inside a trusted execution environment (TEE). GINN combines random verification with gradient clipping to achieve good performance. The experimental results show that GINN achieves a 2x-20x performance improvement over a pure TEE-based solution while guaranteeing integrity with high probability.
Stego Networks: Information Hiding on Deep Neural Networks
1 INTRODUCTION. Just as it goes without saying that knowledge is power, inventing methods for keeping and selectively conveying secret messages has been a crucial mission throughout the history of humanity. Among the various methods to protect secrets, an effective approach called steganography makes it difficult to detect the very existence of the secrets in an innocuous-looking object. The object containing the secrets is called a stego medium in the context of steganography. Starting with the case of hiding a secret message in the form of tattoos engraved on hidden parts of a human body in ancient Greece, numerous methods (e.g., using invisible inks, writing tiny letters) were employed to transmit information without leaving detectable footprints (Kahn, 1996). Most recently, digital steganography, which embeds secret messages in digital images or audio files, has been actively developed. Traditional steganography is typically used for communication between two individuals, but steganography in digital media enables a brand-new usage: conveying secrets to a multitude of devices and unknowingly influencing their behavior when accompanied by a small decoding code. The secrets in this scenario are often called stegomalware (Nagaraja et al., 2011; Suarez-Tangil et al., 2014). Meanwhile, deep neural networks (DNNs) have shown remarkable success in various areas over the years, and are now beginning to be applied in industry and the consumer sector as well as in academia. DNNs have been deployed in a variety of computing systems, ranging from large-scale cloud computing systems to millions of mobile devices (Howard et al., 2017). More and more mobile devices are running application programs that include deep learning models, with numerous camera filter and speech recognition applications being good examples. Furthermore, building upon existing large pre-trained models, such as ResNet (He et al.
, 2016), BERT (Devlin et al., 2019), and GPT-3 (Brown et al., 2020), rather than training complete neural networks from scratch, has become a trend in deep learning research. Accordingly, various files containing neural network parameters are uploaded to source code repositories and FTP servers, and they are frequently exchanged among individuals and organizations. The fraction bits of a floating-point number, sometimes called the significand or the mantissa, follow a distribution close to uniform, as shown in Figure 1. A secret sender can thus easily embed encrypted messages, which also typically follow a uniform distribution, without raising much suspicion from static analysis tools. Also, as pre-trained neural network parameters are often large, frequently exceeding hundreds of megabytes in size, they are suitable media for exchanging a nontrivial amount of secrets. In this paper, we analyze the distribution of the fraction bits of typical neural network parameters. Fraction bits, which carry the least significant information of a floating-point number, can conveniently be used to embed secret messages. We experimented with a special kind of weight perturbation that simulates general cases of hiding secrets in network parameters, and explored several methods to inject arbitrary data into neural network parameters without noticeable performance degradation. We empirically show that steganography in the least significant fraction bits is readily applicable, and that even steganography in the most significant fraction bits is possible. Our main contributions are as follows:
• We demonstrate the suitability of neural network parameters as a novel steganographic medium.
• We propose novel approaches to effectively embed secrets in neural network parameters.
• We give a comprehensive analysis of stego networks in terms of security and capacity, and discuss the advantages of stego networks over conventional stego media.
2 BACKGROUND. This section covers material that facilitates understanding of this paper, including the fundamentals of steganography and related work.
2.1 STEGANOGRAPHY. Steganography is a technique to conceal information from unauthorized persons or eavesdroppers. It is a sub-field of information hiding. The goal of steganography is to hide the presence of a secret by embedding it in a camouflage medium, called a cover medium. Three aspects of steganography determine its effectiveness: security, capacity, and robustness (Provos & Honeyman, 2003; Cox et al., 2007). Among them, security is often considered the most important factor: the presence of the secret must not be revealed under any situation. The next one, capacity, is the amount of information that the steganographic system can carry; usually the risk of being detected increases as capacity increases. Robustness is the durability against external noise or interruptions that attempt to eliminate the embedded information. In the context of stego networks, robustness can be degraded if re-training or fine-tuning is applied. Among the various cover media, we mainly compare stego networks with images, because images are one of the most actively studied media for secret information and image files are widely used among deep learning researchers and practitioners. Usually, each pixel of a three-channel RGB image is represented by a total of 24 bits, eight bits per channel. A trivial approach to hiding a secret image in a cover image is to replace the least significant bits of the pixel values of the cover image with the most significant bits of the secret image.
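The trivial LSB replacement described above fits in a few lines. This is our own sketch for lists of 8-bit pixel intensities; the function names are ours, not from any library.

```python
def embed_lsb(cover_pixels, secret_bits):
    """Replace the least significant bit of each 8-bit cover pixel
    with one secret bit; returns the stego pixel list."""
    assert len(secret_bits) <= len(cover_pixels)
    stego = list(cover_pixels)
    for i, bit in enumerate(secret_bits):
        stego[i] = (stego[i] & ~1) | bit  # clear LSB, then set it to the secret bit
    return stego

def extract_lsb(stego_pixels, n_bits):
    """Recover the first n_bits embedded secret bits."""
    return [p & 1 for p in stego_pixels[:n_bits]]
```

Each stego pixel differs from its cover pixel by at most 1 intensity level, which is why plain LSB embedding is invisible to the eye yet, as the text notes, still detectable by statistical steganalysis such as SPAM.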
However, this approach is vulnerable because the secret image can be seen in the stego image due to humans' highly sensitive visual perception. The process of revealing the existence of steganography through visual characteristics is called a visual attack. Also, digital images follow a certain distribution of pixel values that can be easily distinguished. Steganalysis refers to the process of detecting steganography by capturing statistical anomalies in the data. Several techniques have been proposed to neutralize steganographic systems, such as the subtractive pixel adjacency matrix (SPAM) (Pevny et al., 2010). Currently, well-known steganography algorithms include HUGO, WOW, and S-UNIWARD (Pevnỳ et al., 2010; Holub & Fridrich, 2012; Holub et al., 2014).
2.2 RELATED WORK. There have been numerous approaches in the literature that apply deep neural networks to steganalysis. The objective of some of these studies was to construct neural networks that can determine whether hidden content is embedded in an input image (Xu et al., 2016; Yedroudj et al., 2018). On the other hand, Baluja (2017) and Zhu et al. (2018) proposed methods to generate stego images. Our work is clearly different from these studies in that we consider the neural network itself as a cover medium for steganography. To the best of our knowledge, there has been no principled study of neural networks as cover media for steganography so far. One work relevant to ours is the LSB embedding attack proposed by Song et al. (2017). In that work, a scenario is suggested where an adversary intercepts training data using a compromised training algorithm that hides the data in the LSFBs of the trained neural network parameters. The data embedding method can be seen as a special form of LSFB steganography, which we explain in Section 3. However, Song et al.
(2017) does not discuss neural network parameters in terms of steganographic security, capacity, and robustness, which are the three main aspects of steganography. Therefore, it can hardly be seen as a study of the properties of neural networks as steganographic cover media. Our work is closely related to research on the sensitivity and precision of neural network parameters. Cheney et al. (2017) conducted a sensitivity analysis on convolutional neural networks (CNNs) and reported that the AlexNet (Krizhevsky et al., 2012) layers close to the input showed relatively higher sensitivity to weight perturbation than the layers close to the output. Weight quantization (McKinstry et al., 2018; Wang et al., 2018; Zafrir et al., 2019; Jung et al., 2019) is also an actively studied research area for energy and storage efficiency. Studies in this field showed that neural networks can preserve their performance even when half or fewer bits are used to represent the parameters. In terms of DNN security, weight poisoning (Gu et al., 2017) and adversarial attacks (Goodfellow et al., 2015) have also been frequently studied. Both methods aim to manipulate the output of neural networks; the difference is that weight poisoning perturbs the network parameters while an adversarial attack perturbs the input of the network. For adversarial attacks, counter-measures called adversarial defenses (Metzen et al., 2017) and their counter-counter-measures have also been continuously studied (Ghiasi et al., 2020). Another field closely related to our work is watermarking. Both watermarking and steganography belong to the area of information hiding. While the top priority of steganography is security, watermarking focuses on robustness, in order to resist attempts to remove the information stored in the cover media.
Watermarking on neural networks can be used for integrity checking and copyright protection of the network. Adi et al. (2018) prepared a trigger dataset, completely unrelated to the original task, and trained or fine-tuned a model to yield 100% accuracy on the trigger set. The auditor can then determine whether the model is watermarked by checking the accuracy on the trigger set. Similarly, Le Merrer et al. (2020) adjusted the decision boundary of the model for specific inputs. Instead of examining the output, there exists a method that probes the activation values of the network for a private input (Darvish Rouhani et al., 2019). Lastly, Uchida et al. (2017) added a regularization loss on the network parameters to embed a watermark. The watermark takes the form of a binary vector, which can be recovered by multiplying an embedding matrix with the flattened parameters. Watermarking usually leaves a noticeable trace on the network parameters, and thus cannot transmit information confidentially.
3 STEGO NETWORKS. This section presents an analysis of the sample distributions of the fraction bits of typical neural network parameters. Afterwards, we propose our two novel approaches to embedding secret messages in neural networks.
3.1 FRACTION BITS OF NEURAL NETWORK PARAMETERS. Neural network parameters are usually stored in floating-point types. A floating-point number consists of three parts: sign, exponent, and fraction bits (details of the floating-point format are described in the IEEE Standard for Binary Floating-Point Arithmetic). For example, the single-precision (FP32) format has one sign bit, eight exponent bits, and 23 fraction bits. Since FP32 is one of the most widely used formats in deep learning research, we use this format throughout the paper. The least significant fraction bit has the lowest index, i.e., zero. Since embedding messages in the sign or exponent bits
can induce a relatively significant perturbation of the original value, we only consider embedding bits in the fraction bits. The fraction bit distributions of the parameters of a few models commonly used in computer vision are provided in Figure 1. To ease visualization, we divide the 23 fraction bits into three parts, indexed 0-7, 8-15, and 16-22. We represent the bit sequence of each part simply by its corresponding decimal value. For example, the bit sequence 10000010, with the right-most bit denoting the bit of the lowest index, is represented as 130 in Figure 1. Regarding the randomness of fraction bits, we particularly consider two issues: the uniformity of a random variable, and the independent and identically distributed (i.i.d.) property of a sequence of random variables. We can denote the part of the FP32 fraction bits indexed 0-7 as Xi with |Xi| = 256, where i is the index of the corresponding FP32 number within the entire set of parameters. One may think Xi is a uniform random variable if each outcome is observed with almost the same frequency. On the other hand, if the same outcomes are often observed consecutively, one may conclude that the sequence of Xi is not i.i.d. If Xi is uniform and the sequence of Xi is i.i.d., then the sequence has maximum entropy. Note that a message embedder usually encrypts secret messages to provide additional protection. Encrypting messages also makes it difficult for a steganalysis system to detect the existence of the message, since much of the trace disappears. Encryption algorithms such as AES and RSA are typically applied before message embedding. In the rest of this section, we assume message embedders always encrypt their secret messages. Uniformity of a random variable is directly related to the capacity of stego networks, and a secret embedder may need to sacrifice bit rate if the bit distribution of the target medium is not uniform.
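The FP32 bit layout used throughout this section can be inspected and modified with Python's standard `struct` module. The sketch below is our own illustration, not the authors' embedding code; `fp32_fields` and `set_low_fraction_bits` are hypothetical names.

```python
import struct

def fp32_fields(x):
    """Decompose a float into the sign bit, 8 exponent bits, and
    23 fraction bits of its IEEE 754 single-precision encoding."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    return sign, exponent, fraction

def set_low_fraction_bits(x, payload, n=8):
    """Overwrite the n least significant fraction bits of x with `payload`,
    perturbing the value by at most 2**n / 2**23 of its unit in scale."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    mask = (1 << n) - 1
    bits = (bits & ~mask) | (payload & mask)
    return struct.unpack(">f", struct.pack(">I", bits))[0]
```

For example, stuffing the decimal value 130 (the bit pattern 10000010 from the text) into the low byte of the weight 1.0 changes it by only about 1.5e-5, which illustrates why LSFB embedding leaves model behavior essentially intact.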
Figure 1 shows the distributions of the three fraction parts for five popular models in computer vision. The two least significant parts show nearly uniform distributions across all the models, while the most significant seven bits seemingly follow non-uniform distributions. In the latter case, a secret sender may want to add dummy bits to the original secret message to avoid a discrepancy between the fraction bit distribution of the original neural network and that of the resulting stego network, thereby ensuring security. Another important property relevant to steganographic security is the i.i.d. property, which various steganalysis algorithms rely on to detect anomalies in stego media. For example, the SPAM algorithm (Pevny et al., 2010) exploits the fact that pixels of natural images have high spatial correlation, such that neighboring pixels have similar colors. Therefore, if the usual bits of a cover medium are purely i.i.d., this automatically wards off a large number of sophisticated steganalysis systems. Note that establishing theoretical independence between the significands of parameters is out of the scope of our work. Instead, we mainly provide empirical results with practical implications in the experiment section. From the experimental results, we suspect that the significands of stego networks exhibit considerably lower dependency between them than the storage bits of other popular steganographic media.
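Uniformity of the low fraction byte can be checked empirically via the Shannon entropy of its sample distribution, which attains the maximum of 8 bits exactly when the 256 outcomes are uniform. A minimal sketch of such a check (our own; `empirical_entropy` is a hypothetical name):

```python
import math
from collections import Counter

def empirical_entropy(symbols):
    """Shannon entropy (in bits) of the empirical distribution of `symbols`;
    for 8-bit symbols the maximum is 8 bits, attained in the uniform case."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

Applied to the decimal values of the bits indexed 0-7 across a model's parameters, an entropy close to 8 bits indicates that an encrypted (and hence near-uniform) payload can be embedded there without shifting the observable distribution.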
This paper proposes a method to hide information in the parameters of neural network models. To avoid significant perturbation, the paper only considers embedding the information in the fraction bits of the parameters, hiding it in either the least significant or the most significant fraction bits. Hiding in the least significant bits is harder to detect, but the message can also be easily removed without much degradation of model performance. Conversely, information hidden in the most significant fraction bits is very hard to remove without degrading model performance, but for the same reason is also harder to embed. Two remedies are a sensitivity analysis to select the least sensitive parameters and fine-tuning after embedding to recover model performance.
Stego Networks: Information Hiding on Deep Neural Networks
1 INTRODUCTION . As much as it goes without saying knowledge is power , inventing methods for keeping and selectively conveying secret messages has been a crucial mission throughout the history of humanity . Among various methods to protect secrets , an effective approach called steganography makes it difficult to detect the very existence of the secrets in an object looking innocuous . The object containing the secrets is called a stego medium in the context of steganography . Starting with the case of hiding a secret message in the form of engraved tattoos on hidden parts of a human body in an ancient greek period , numerous methods ( e.g. , using invisible inks , writing tiny-sized letters ) were employed to transmit information without leaving detectable footprints ( Kahn , 1996 ) . Most recently , digital steganography , which embeds secret messages in digital images or audio files , has been actively developed . Traditional steganography is typically used in communication between two individuals , but steganography in digital media enables its brand-new usage by conveying secrets in a multitude of devices and unknowingly influencing their behavior when accompanied with a small decoding code . The secrets in this scenario are often called stegomalware ( Nagaraja et al. , 2011 ; Suarez-Tangil et al. , 2014 ) . Meanwhile , deep neural networks ( DNNs ) have shown remarkable success in various areas over the years , and are now beginning to be applied to industry and the consumer sector as well as to the academic . DNNs have been deployed in a variety of computing systems ranging from largescale cloud computing systems to millions of mobile devices ( Howard et al. , 2017 ) . More and more mobile devices are running application programs that include deep learning models with numerous camera filter and speech recognition applications being good examples . Furthermore , building upon existing large pre-trained models , such as ResNet ( He et al. 
, 2016 ) , BERT ( Devlin et al. , 2019 ) , and GPT-3 ( Brown et al. , 2020 ) , rather than training complete neural networks from scratch has become a trend in deep learning research . Accordingly , various files containing neural network parameters are uploaded on the source code repositories , and FTP servers , and they are frequently exchanged among individuals and organizations . The fraction bits , which are sometimes called the significand or the mantissa , of a floating-point number follows a distribution close to uniform distribution as shown in Figure 1 . A secret sender can easily embed encrypted messages , which also typically follows a uniform distribution , without causing much suspicion from static analysis tools . Also , as the sizes of pre-trained neural network parameters are often large being more than hundreds of megabytes in size , neural network parameters are suitable media to exchange a nontrivial amount of secrets . In this paper , we analyze the distribution of fraction bits of typical neural network parameters . Fraction bits , which contain the least significant information of a floating-point number , can conveniently be used to embed secret messages . We experimented with a special kind of weight perturbation , which simulate general cases of hiding secrets in network parameters and explored several methods to inject arbitrary data into neural network parameters without noticeable performance degradation . We empirically showed that steganography in the least significant fraction bits is readily applicable and even steganography in the most significant fraction bits is also possible . Our main contributions are as follows : • We demonstrate suitability of neural network parameters as a novel steganographic medium . • We propose novel approaches to effectively embed secrets in neural network parameters . 
• We give comprehensive analysis of stego networks in terms of security and capacity , and discuss the advantages of stego networks over the conventional stego media . 2 BACKGROUND . This section covers materials to facilitate understanding of this paper , which includes the fundamentals of steganography and related work . 2.1 STEGANOGRAPHY . Steganography is a technique to conceal information from unauthorized persons or eavesdroppers . It is a sub-field of information hiding . The goal of steganography is to hide the presence of the secret by embedding it in a camouflage medium , which is called a cover medium . There exist three aspects in steganography that decide its effectiveness : security , capacity , and robustness ( Provos & Honeyman , 2003 ; Cox et al. , 2007 ) . Among them , security is often considered the most important factor . Security means that the presence of the secret must not be revealed under any situation . The next one , capacity , means the amount of information that the steganographic system can carry . Usually the risk of being detected increases as capacity increases . Robustness means the durability from external noise or interruption to eliminate the embedded information . In the context of stego networks , robustness can be degraded if re-training or fine-tuning is applied . Among various cover media , we mainly compare stego networks with images because it is one of the most actively studied media of secret information and image files are widely-used file formats among deep learning researchers and practitioners . Usually , each pixel of a three-channel RGB image is represented by a total of 24 bits , eight bits per channel . A trivial approach to hide a secret image in a cover image is to replace the least significant bits in the pixel values of the cover image with the most significant bits of the secret image . 
However , this approach is vulnerable because the secret image can be seen in the stego image due to humans ’ highly sensitive visual perception . The process that reveals the existence of steganography by visual characteristic is called visual attack . Also , digital images follow a certain distribution of pixel values that can be easily distinguished . Steganalysis refers to the process of detecting the steganography by capturing the statistical anomaly of data . Several techniques have been proposed such as a subtractive pixel adjacency matrix ( SPAM ) ( Pevny et al. , 2010 ) to neutralize the steganographic systems . Currently , well-known steganography algorithms include HUGO , WOW , and S-UNIWARD ( Pevnỳ et al. , 2010 ; Holub & Fridrich , 2012 ; Holub et al. , 2014 ) . 2.2 RELATED WORK . There have been numerous approaches in the literature that applied deep neural network to steganalysis . The objective of some of these studies was to construct neural networks , which can determine whether hidden contents is embedded in an input image ( Xu et al. , 2016 ; Yedroudj et al. , 2018 ) . On the other hand , Baluja ( 2017 ) and Zhu et al . ( 2018 ) proposed methods to generate stego images . Our work is clearly different from these studies in that our work considers the neural network itself as a cover medium of steganography . To the best of our knowledge , there has not been a principled study of neural networks as cover media of steganography so far . One work relevant to of ours is LSB embedding attack proposed by Song et al . ( 2017 ) . It the work , a scenario is suggested where an adversary intercepts training data with compromised training algorithm which hides the data in the LSFBs of the trained neural network parameters . The data embedding method can be seen as a special form of LSFB steganography which we explain in Section 3 . However , Song et al . 
( 2017 ) does not include discussion of neural network parameters in terms of steganographic security , capacity , and robustness , which are the three main aspects of steganography . Therefore , it can hardly be seen as study of properteis of neural networks as steganographic cover media . Our work is closely related with research on sensitivity and precision of neural network parameters . Cheney et al . ( 2017 ) conducted a sensitivity analysis on the convolutional neural networks ( CNNs ) and reported that AlexNet ( Krizhevsky et al. , 2012 ) layers close to the input showed relatively high sensitivity to weight perturbation than those layers close to the output . Weight quantization ( McKinstry et al. , 2018 ; Wang et al. , 2018 ; Zafrir et al. , 2019 ; Jung et al. , 2019 ) is also an actively studied research area for energy and storage efficiencies . Studies in this field showed that neural networks can preserve their performance although a half or less number of bits are used to represent parameters . In terms of DNN security , weight poisoning ( Gu et al. , 2017 ) and adversarial attack ( Goodfellow et al. , 2015 ) also have been frequently studied . Both methods aim to manipulate the output of neural networks , but the difference between the two methods is that weight poisoning perturbs neural network parameters while adversarial attack perturbs the input of the network . In the case of adversarial attack , its counter-measures called adversarial defense ( Metzen et al. , 2017 ) and their counter-counter-measures have also been continuously studied ( Ghiasi et al. , 2020 ) . Another closely related field to our work is watermarking . Both watermarking and steganography belong to the area of information hiding . While the top priority of steganography is security , watermarking focuses on robustness , in order to prevent attempts to removing information stored in cover media . 
Watermarking of neural networks can be used for integrity checking and copyright protection of the network. Adi et al. (2018) prepared a trigger dataset, which is completely unrelated to the original task, and trained or fine-tuned a model to yield 100% accuracy on the trigger set. The auditor can then determine whether the model is watermarked by checking its accuracy on the trigger set. Similarly, Le Merrer et al. (2020) adjusted the decision boundary of the model for specific inputs. Instead of examining the output, there also exists a method that probes the activation values of the network for a private input (Darvish Rouhani et al., 2019). Lastly, Uchida et al. (2017) added a regularization loss on the network parameters to embed a watermark. The watermark is a binary vector, which can be recovered by multiplying an embedding matrix with the flattened parameters. Watermarking, however, usually leaves a noticeable trace on the network parameters, so it cannot transmit information confidentially. 3 STEGO NETWORKS. This section presents an analysis of the sample distributions of the fraction bits of typical neural network parameters. Afterwards, we propose two novel approaches to embed secret messages in neural networks. 3.1 FRACTION BITS OF NEURAL NETWORK PARAMETERS. Neural network parameters are usually stored in floating-point types. A floating-point number consists of three parts: sign, exponent, and fraction bits1. For example, the single-precision (FP32) format has one sign bit, eight exponent bits, and 23 fraction bits. Since FP32 is one of the most widely used formats in deep learning research, we use this format throughout the paper. The least significant fraction bit has the lowest index, i.e., zero. Since embedding messages in sign or exponent bits 1Details of floating-point numbers are described in the IEEE Standard for Binary Floating-Point Arithmetic.
can induce a relatively significant perturbation of the original value, we only consider embedding bits in the fraction bits. The fraction bit distributions of the parameters of a few models commonly used in computer vision are provided in Figure 1. To ease the visualization, we divide the 23 fraction bits into three parts indexed as 0-7, 8-15, and 16-22. We represent the bit sequence of each part simply by its corresponding decimal value. For example, a bit sequence 10000010, with the right-most bit denoting the bit of the lowest index, is represented as 130 in Figure 1. When it comes to the randomness of fraction bits, we particularly consider two issues: the uniformity of a random variable, and the independent and identically distributed (i.i.d.) property of a sequence of random variables. We denote the part of the FP32 fraction bits indexed 0-7 as Xi with |Xi| = 256, where i is the index of the corresponding FP32 number among the entire parameters. One may then regard Xi as a uniform random variable if each outcome is observed with almost the same frequency. On the other hand, if the same outcomes are often observed consecutively, the sequence of Xi is likely not i.i.d. If Xi is uniform and the sequence of Xi is i.i.d., then the sequence has the maximum entropy. Note that a message embedder usually encrypts secret messages to provide additional protection. Encrypting messages also makes it difficult for a steganalysis system to detect the existence of the message, since most statistical traces disappear. Encryption algorithms such as AES and RSA are often applied before message embedding. In the rest of this section, we assume message embedders always encrypt secret messages. The uniformity of a random variable is directly related to the capacity of stego networks, and a secret embedder may need to sacrifice bit rate if the bit distribution of the target medium is not uniform.
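The bit manipulations described in this subsection can be sketched in NumPy. The function names and the synthetic Gaussian "weights" are ours, used only for illustration of the fraction-bit layout, the Figure 1 part values, and LSFB embedding:

```python
import numpy as np

def fraction_bits(params):
    """23-bit fraction of each float32 value (bit 0 = least significant)."""
    u = np.asarray(params, dtype=np.float32).view(np.uint32)
    return u & 0x7FFFFF  # mask off the 1 sign bit and 8 exponent bits

def part_values(params, lo, hi):
    """Decimal value of fraction bits lo..hi (inclusive), as in Figure 1."""
    return (fraction_bits(params) >> lo) & ((1 << (hi - lo + 1)) - 1)

def embed_lsb(params, bits):
    """Overwrite the least significant fraction bit of each parameter."""
    u = np.asarray(params, dtype=np.float32).view(np.uint32).copy()
    return ((u & 0xFFFFFFFE) | np.asarray(bits, dtype=np.uint32)).view(np.float32)

# Synthetic stand-in for trained weights; bits 0-7 come out nearly uniform.
rng = np.random.default_rng(0)
w = (0.05 * rng.standard_normal(100_000)).astype(np.float32)
p = np.bincount(part_values(w, 0, 7), minlength=256) / w.size
entropy = -(p[p > 0] * np.log2(p[p > 0])).sum()  # close to 8 bits when uniform
msg = rng.integers(0, 2, w.size)  # an (already encrypted) bit stream to hide
stego = embed_lsb(w, msg)
```

The embedded message changes each parameter by at most one unit in the last place of the fraction, which is why the perturbation to the network is negligible.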
Figure 1 shows the distributions of the three fraction parts for five popular models in computer vision. The two least significant parts show nearly uniform distributions across all the models, while the most significant seven bits follow visibly non-uniform distributions. In the latter case, a secret sender may want to add dummy bits to the original secret message to avoid a discrepancy between the fraction bit distribution of the original neural network and that of the resulting stego network, thereby ensuring security. Another property important to steganographic security is the i.i.d. property; dependencies that violate it are what various steganalysis algorithms exploit to detect anomalies in stego media. For example, the SPAM algorithm (Pevny et al., 2010) utilizes the fact that the pixels of natural images have high spatial correlation, such that neighboring pixels have similar color values. Therefore, if the bits of a cover medium are purely i.i.d., a large number of sophisticated steganalysis systems are automatically warded off. Note that establishing theoretical independence between the significands of parameters is out of the scope of our work. Instead, we mainly provide empirical results with practical implications in the experiment section. From the experimental results, we suspect that the significands of stego networks exhibit considerably lower dependency than the storage bits of other popular steganographic media.
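The spatial-correlation contrast that SPAM exploits can be illustrated empirically. The random-walk "pixel row" below is our own toy stand-in for a natural image scanline, contrasted with the low fraction bits of synthetic weights:

```python
import numpy as np

def lag1_corr(x):
    """Correlation between consecutive samples; near 0 for an i.i.d. sequence."""
    x = np.asarray(x, dtype=np.float64)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

rng = np.random.default_rng(1)
weights = (0.05 * rng.standard_normal(50_000)).astype(np.float32)
frac_low = weights.view(np.uint32) & 0xFF        # low fraction bits: near i.i.d.
pixels = np.cumsum(rng.standard_normal(50_000))  # smooth, image-like signal
```

Adjacency-based steganalysis such as SPAM relies on the strong lag-1 correlation of the image-like signal; the fraction bits show no such structure to exploit.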
This paper highlights and studies the interesting possibility of hiding information within neural network weights, which is a form of steganography. The sensitivity of different neural network layers to perturbations is evaluated, and based on this a technique for hiding information is proposed and demonstrated. It is shown that it is possible to hide information in the weights of a number of standard baseline neural networks without being easily detectable.
SP:8dd0ec8f16a72dad739f6e31b9ced0a8e8989b9e
Stabilized Medical Image Attacks
1 INTRODUCTION. Computer-Aided Diagnosis (CADx) has been widely applied in the medical screening process. Automatic diagnosis helps doctors efficiently assess health status and avoid disease exacerbation. Recently, Convolutional Neural Networks (CNNs) have been utilized in CADx to improve diagnosis accuracy. Their discriminative representations improve the performance of medical image analysis, including lesion localization, segmentation, and disease classification. However, recent advances in adversarial examples have revealed that deployed CADx systems are usually fragile to adversarial attacks (Finlayson et al., 2019): small perturbations applied to the input images can deceive CNNs into reaching opposite conclusions. As mentioned in Ma et al. (2020), the vast amount of money in the healthcare economy may attract attackers to commit insurance fraud or make false claims for medical reimbursement by manipulating medical reports. Moreover, image noise is a common issue during the data collection process, and such noise perturbations can sometimes implicitly form adversarial attacks. For example, particle contamination of optical lenses in dermoscopy and endoscopy, and metal/respiratory artifacts in CT scans, frequently deteriorate the quality of collected images. Therefore, there is growing interest in investigating how medical diagnosis systems respond to adversarial attacks and what can be done to improve the robustness of deployed systems. While recent studies of adversarial attacks mainly focus on natural images, research on adversarial attacks in the medical image domain is needed, as there are significant differences between ∗L.Gong and Y. Song are corresponding authors. The code is available at https://github.com/imogenqi/SMA the two domains. Beyond regular RGB cameras, there are various types of medical imaging equipment (e.g.
, Computed Tomography (CT) scanners, ultrasound transducers, and fundus cameras) that generate dramatically different images. Fig. 1 shows three examples: an image captured by a fundus camera in (a), an image captured by a CT scanner in (e), and an endoscopic video frame in (i). As can be seen in the figure, these three images have little in common. The huge data variance across different modalities of medical images makes it challenging to develop a technique that works for all modalities. In addition, existing investigations of medical adversarial attacks are limited. In Finlayson et al. (2019), adversarial examples are shown to deteriorate the diagnosis accuracy of deep-learning-based medical systems. These medical attack methods are mainly based on those for natural images (e.g., the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2014) and Projected Gradient Descent (PGD) (Madry et al., 2017)), which are not tailored to the different types of medical data. As shown in Fig. 1, the adversarial examples generated by FGSM and PGD do not consistently decrease the network's performance in (b), (c), (f), (g), (j), and (k). The data variance between (a) and (e) leads to the inconsistent attack results of existing methods. In this paper, we propose a medical image attack method that consistently produces adversarial perturbations capable of fooling deep medical diagnosis systems across different medical data modalities. The perturbations are iteratively generated by taking partial derivatives, with respect to the input, of an objective function composed of a deviation loss term and a stabilization loss term. By maximizing the deviation loss term, the attack enlarges the divergence between CNN predictions and the ground truth to obtain effective attack samples.
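For reference, the FGSM baseline mentioned above can be sketched on a toy differentiable model. The logistic "network" below is our stand-in for a CNN, not the paper's model:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic model f(x) = sigmoid(w.x + b):
    x_adv = x + eps * sign(d loss / d x), with binary cross-entropy loss."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model prediction in (0, 1)
    grad_x = (p - y) * w                    # input gradient of the cross-entropy
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.standard_normal(16), 0.0
x, y = rng.standard_normal(16), 1.0        # input with ground-truth label 1
x_adv = fgsm(x, y, w, b, eps=0.1)
```

The single signed step keeps the perturbation within an L-infinity ball of radius eps while pushing the prediction for the true class down.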
To handle the aforementioned ubiquitous data noise issue in medical images, we propose a novel stabilization loss term as an extra regularization, which ensures a consistent deviation trajectory for the crafted attack samples. Meanwhile, the stabilization term avoids local optima in the optimization process caused by image noise. The proposed stabilization loss term measures the difference between two CNN predictions, where the first prediction is from the crafted adversarial sample and the second is from the same sample processed with Gaussian smoothing. Given an adversarial example A and its Gaussian-smoothed result Ã, the stabilization term constrains the corresponding CNN predictions (i.e., f(A) and f(Ã)) to be similar via a minimization process. The intuition from scale-space optimization (Lindeberg, 1992) indicates that minimizing the difference between f(A) and f(Ã) exhaustively searches the perturbation space to smooth out isolated spots, helping the optimization escape local optima. We further analyze this stabilization loss term via KL-divergence and find that the CNN predictions are steered towards a fixed objective spot over the iterations. This stabilization improves the attack effectiveness on different types of medical data, including CT, fundus, and endoscopic images. We evaluate the proposed Stabilized Medical Image Attack (SMIA) on several medical datasets (APT, 2019; EAD, 2019; Kag, 2015), including the recent COVID-19 (COV, 2019) lung CT dataset. Thorough evaluations demonstrate that the proposed method effectively produces perturbations that decrease the prediction accuracy of different medical diagnosis systems. Our investigation provides guidance for strengthening the robustness of these medical systems against adversarial attacks. 2 RELATED WORK. In this section, we review existing adversarial attack methods on both natural and medical images.
Meanwhile, we survey the relevant medical image analysis tasks on which SMIA is deployed. 2.1 ADVERSARIAL ATTACK. There are extensive investigations of adversarial attacks on natural image classification. In Goodfellow et al. (2014), FGSM was proposed to generate adversarial examples based on the CNN gradients. The DeepFool method was proposed in Moosavi-Dezfooli et al. (2016) to compute minimal perturbations based on a linearization of the classifier. In Moosavi-Dezfooli et al. (2017), an iterative algorithm was proposed to generate perturbations, demonstrating the existence of universal (image-agnostic) adversarial perturbations. In Baluja & Fischer (2017), an Adversarial Transformation Network (ATN) was trained to generate adversarial examples without gradient involvement. Adversarial training and provable defense were combined in Balunovic & Vechev (2020) to achieve both attack robustness and high accuracy. A capsule-based reconstructive attack was proposed in Qin et al. (2020) to cause both misclassifications and reconstruction errors. Besides image classification, several attack methods were proposed for semantic segmentation, object detection, and object tracking (Jia et al., 2020). In Fischer et al. (2017) and Dong et al. (2019), classification-based attacks were shown to transfer to deep image segmentation. Moreover, a Dense Adversary Generation (DAG) method was proposed in Xie et al. (2017) for both semantic segmentation and object detection attacks. The general idea of natural image attacks is to iteratively generate perturbations based on the CNN gradients so as to maximize the divergence between the network predictions on adversarial examples and the ground-truth labels. This idea was also adopted in Finlayson et al. (2019) to demonstrate medical attacks.
Different from existing methods, we propose a stabilizing regularization term to ensure the consistent generation of adversarial perturbations that are effective for different types of medical image datasets. 2.2 DEEP MEDICAL IMAGE ANALYSIS. Deep Convolutional Neural Networks (CNNs) have been shown to be effective for automatically analyzing medical images (Litjens et al., 2017; Razzak et al., 2018). Common CADx applications include classifying the stage of a disease and detecting and segmenting organs and lesions. Disease classification. Most medical systems formulate disease diagnosis as an image classification task. The types of diseases are predefined and each type corresponds to one category. During the classification process, single or multiple images serve as input for disease diagnosis. In Shen et al. (2015), a multi-scale CNN was proposed to capture feature representations of lung nodule patches for accurate classification. A multi-instance layer and a multi-scale layer were proposed in Li et al. (2019) to diagnose diabetic macular edema and myopic macular degeneration. Besides diagnosing the type of disease, existing medical systems can also predict disease status (Gulshan et al., 2016) through empirical status categorization. Organ and lesion detection. The detection of organs and lesions is inspired by object detection frameworks for natural images (e.g., Faster-RCNN (Ren et al., 2015), FPN (Lin et al., 2017), and Yolo (Redmon et al., 2016)). In addition, the 3D spatial information of medical data is exploited in 3D detection frameworks. In Ding et al. (2017), a 3D-CNN classification approach was proposed to classify lung nodule candidates previously detected by a Faster-RCNN detector. The RPN (Ren et al., 2015) was extended in Liao et al. (2019) into a 3D-RPN for 3D proposal generation to detect lung nodules.
Different from the 3D detection frameworks, a multi-scale booster was proposed in Shao et al. (2019), with channel and spatial attentions integrated into FPN, for suspicious lesion detection in 2D CT slices. Detection methods based on 2D image input reduce the heavy computational cost incurred by the 3D inputs of 3D detection methods. Organ and lesion segmentation. Medical segmentation has been significantly advanced by deep encoder-decoder structures (e.g., U-Net (Ronneberger et al., 2015)). This architecture contains a contracting path (i.e., encoder) to capture global context and a symmetric expanding path (i.e., decoder) to obtain precise localization. Several works have built upon U-Net. In Brosch et al. (2016), skip connections were utilized in the first and last convolutional layers to segment lesions in the brain. A V-Net was proposed in Milletari et al. (2016) to segment the brain's anatomical structures; it follows a 3D U-Net structure consisting of 3D convolutional layers. A dual-pathway, multi-scale 3D-CNN architecture was proposed in Kamnitsas et al. (2017) to generate both global and local lesion representations for brain lesion segmentation. Existing medical methods mainly utilize deep encoder-decoder architectures for the end-to-end segmentation of organs and lesions. As illustrated above, deep medical diagnosis systems differ substantially from the CNN architectures developed for natural images. Moreover, the variance across different modalities of medical images is significantly larger than that of natural images. Therefore, adversarial attacks designed for natural images are often ineffective in the medical domain. Nevertheless, the limitations brought by this huge network and data variance are effectively addressed by our stabilized medical attack. 3 PROPOSED METHOD. In this section, we illustrate the details of our medical image attack.
We first present the objective function of SMIA, which consists of a loss deviation term and a loss stabilization term. The loss deviation term produces the perturbation that decreases the image analysis performance, while the loss stabilization term, updated consistently over iterations, constrains these perturbations to low variance. We then analyze, from the perspective of KL-divergence, how SMIA affects the generated perturbations during the iterative optimization. The analysis is followed by a visualization showing the variance and cosine distance of perturbations when the loss stabilization term is utilized.
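A minimal sketch of an objective with these two terms is given below. This is our own simplification with a toy classifier and an ad-hoc separable Gaussian blur; the paper's exact formulation, weighting, and smoothing parameters may differ:

```python
import numpy as np

def gaussian_smooth(img, sigma=1.0):
    """Separable Gaussian blur, a minimal stand-in for the smoothing step."""
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    blur = lambda v: np.convolve(v, k, mode="same")
    return np.apply_along_axis(blur, 1, np.apply_along_axis(blur, 0, img))

def smia_objective(f, a, y, lam=0.5):
    """Deviation term (to be maximized) minus stabilization term (to be minimized):
    cross-entropy of f(A) against label y, regularized so that f(A) stays close
    to f(A_smoothed) for the Gaussian-smoothed sample."""
    p, p_smooth = f(a), f(gaussian_smooth(a))
    deviation = -np.log(p[y] + 1e-12)
    stabilization = np.sum((p - p_smooth) ** 2)
    return deviation - lam * stabilization

def toy_classifier(img):
    """Hypothetical 2-class model mapping image statistics to probabilities."""
    z = np.array([img.mean(), img.std()])
    e = np.exp(z - z.max())
    return e / e.sum()

img = np.linspace(0.0, 1.0, 64).reshape(8, 8)  # toy "medical image"
val = smia_objective(toy_classifier, img, y=0)
```

Ascending this objective in the input enlarges the prediction-vs-label divergence while penalizing perturbations whose effect disappears under smoothing, which is the stabilizing behavior described above.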
The authors present a universal medical attack method that can consistently produce adversarial examples across several medical imaging domains. The authors achieve this by developing a novel objective function that includes two terms, which they refer to as stabilized medical attack (SMA). The first term is the loss deviation term, inspired by the conventional fast gradient sign method, which enlarges the difference between CNN predictions and ground truth labels. The second term, a regularizer, is the loss stabilization term that enforces consistent predictions between the adversarial image and the Gaussian smoothed version of the adversarial image. The authors then provide an insightful interpretation of their SMA loss via KL divergence. The derivation demonstrates that perturbations consistently move towards a fixed location in the SMA objective landscape during successive iterations of gradient ascent. This method increases perturbation robustness by overcoming huge variations that result from different types of medical imaging data. The authors provide an illustrative figure (Fig. 2) to demonstrate that both the variance and direction of the adversarial perturbation remain stable and consistent across multiple iterations, compared to using the deviation loss alone. The authors then perform an ablation study to demonstrate that the DEV + STA loss results in a significantly greater reduction in model performance across medical imaging datasets compared to the DEV loss alone. Finally, compared to state-of-the-art adversarial methods, SMA results in the greatest reduction in performance for all datasets.
SP:a7056ed3154309910542a66d52b98ea7a7e1ba4f
Stabilized Medical Image Attacks
The authors propose combining a loss deviation term and a loss stabilization term to generate more consistent adversarial perturbations on medical images. The loss deviation term increases the divergence between the CNN prediction of an adversarial example and its ground-truth label, while the loss stabilization term ensures similar CNN predictions for this example and its smoothed input. The authors evaluate the method on three medical image datasets acquired with different modalities. The proposed strategy is straightforward, and its benefits are clearly demonstrated on these three datasets.
Revisiting Graph Neural Networks for Link Prediction
1 INTRODUCTION. Link prediction aims to predict potential or missing links between pairwise nodes in a network. It has wide applications in various fields, such as friend recommendation in social networks (Adamic & Adar, 2003), movie recommendation in Netflix (Bennett et al., 2007), protein-protein interaction prediction (Qi et al., 2006), and knowledge graph completion (Nickel et al., 2015). Traditional link prediction approaches include heuristic methods, embedding methods, and feature-based methods. Heuristic methods compute heuristic node similarity scores as the likelihood of links (Liben-Nowell & Kleinberg, 2007), such as common neighbors, preferential attachment (Barabási & Albert, 1999), and the Katz index (Katz, 1953), which can be regarded as predefined graph structure features. Embedding methods, including matrix factorization (MF) and Node2vec (Grover & Leskovec, 2016), learn free-parameter node embeddings from the observed network transductively, and thus do not generalize to unseen nodes and networks. Feature-based methods use only explicit node features and do not consider the graph structure. Recently, graph neural networks (GNNs) have emerged as powerful tools for learning over graph-structured data (Scarselli et al., 2009; Bruna et al., 2013; Duvenaud et al., 2015; Li et al., 2015; Kipf & Welling, 2016a; Niepert et al., 2016; Dai et al., 2016), and have been successfully used in link prediction as well (Kipf & Welling, 2016b; Zhang & Chen, 2018; You et al., 2019; Chami et al., 2019; Li et al., 2020). There are two main types of GNN-based link prediction methods. One is the Graph Autoencoder (Kipf & Welling, 2016b), where a GNN is first applied to the entire network to learn an embedding vector for each node; the embeddings of the source and target nodes are then aggregated to predict the target link. The second type is SEAL (Zhang & Chen, 2018; Li et al.
, 2020), where an enclosing subgraph is extracted around each target link. The nodes in each enclosing subgraph are labeled differently according to their distances to the source and target nodes, and a GNN is then applied to each enclosing subgraph to learn a link representation for link prediction. At first glance, both methods seem to learn graph structure features associated with the target link and leverage these features for link prediction. However, as we will see, the two methods have fundamentally different power in terms of learning structural representations of links. Figure 1: The structural roles of link (v1, v2) and link (v1, v3) are different, but GAE will assign equal probabilities to them. We first show that by individually learning source and target node embeddings, GAE methods cannot differentiate links with different structural roles. To understand this intuitively, we give an example in Figure 1. In this graph, nodes v2 and v3 have the same structural roles (they are symmetric/isomorphic to each other). A GAE will learn the same node embeddings for v2 and v3, thus giving the same predicted probabilities for link (v1, v2) and link (v1, v3). However, the structural roles of link (v1, v2) and link (v1, v3) are apparently different: v1 intuitively should have unequal probabilities of connecting to v2 and v3. Next, we propose a labeling trick, which gives a label to each node as an additional feature, where the source and target nodes are labeled differently from the rest. We show that, combined with the labeling trick, a sufficiently expressive GNN can learn the same representations for two links if and only if their structural roles are the same within the graph. This way, (v1, v2) and (v1, v3) will be predicted differently in Figure 1. We further show that SEAL is such an example.
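The classical heuristics mentioned in the introduction (common neighbors, preferential attachment, Katz index) can each be computed directly from the adjacency matrix; a minimal NumPy sketch (the toy graph and parameter values are our own illustration):

```python
import numpy as np

def common_neighbors(A, i, j):
    """Number of neighbors shared by nodes i and j."""
    return int(A[i] @ A[j])

def preferential_attachment(A, i, j):
    """Product of the two node degrees (Barabási & Albert, 1999)."""
    return int(A[i].sum() * A[j].sum())

def katz_index(A, beta=0.05):
    """Katz (1953): sum over all paths weighted by beta^length.
    Closed form (I - beta*A)^-1 - I; needs beta < 1 / largest eigenvalue of A."""
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)

# A toy 4-node graph: triangle (0,1,2) plus a pendant node 3 attached to 1.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

print(common_neighbors(A, 0, 3))         # 1 (nodes 0 and 3 share neighbor 1)
print(preferential_attachment(A, 0, 3))  # 2 (deg(0)=2, deg(3)=1)
print(np.round(katz_index(A)[0, 3], 4))
```

Each score is a fixed function of the graph structure, which is exactly why the paper calls them predefined graph structure features.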
Finally, we give a more practical definition of isomorphism, called local isomorphism, which defines two nodes/links as isomorphic if their local neighborhood subgraphs are isomorphic. We argue that GNNs for link prediction should target local-isomorphism discrimination. We conduct a thorough comparison among different link prediction methods, including SEAL and various GAE and embedding methods, on the recent large-scale Open Graph Benchmark (OGB) datasets (Hu et al., 2020). We show that SEAL with the labeling trick achieves up to 195% higher Hits@100 than GAE methods, setting new state-of-the-art results on 3 out of 4 datasets. 2 PRELIMINARIES. In this section, we formally define the notions of graph, permutation, isomorphism, and GNN. Definition 1. (Graph). We consider an undirected graph G = (V, E, A), where V = {1, 2, ..., n} is the set of n vertices, E ⊆ V × V is the set of edges, and A ∈ R^{n×n×k} contains the node and edge features, with its diagonal components A_{i,i,:} denoting node attributes and off-diagonal components A_{i,j,:} denoting edge attributes. We further use A ∈ {0, 1}^{n×n} to denote the adjacency matrix of G, with A_{i,j} = 1 iff (i, j) ∈ E. If there are no node/edge features, we let A = A; otherwise, A can be regarded as the first slice of A, i.e., A = A_{:,:,1}. Definition 2. (Permutation). A node permutation π is a bijective mapping from {1, 2, ..., n} to {1, 2, ..., n}. All n! possible π's constitute the permutation group Π_n. We define π(S) = {π(i) | i ∈ S} when S is a subset of {1, 2, ..., n}. We further define the permutation of A as π(A), where π(A)_{π(i),π(j),:} = A_{i,j,:}; in other words, π(A)_{i,j,:} = A_{π^{-1}(i),π^{-1}(j),:}. Definition 3.
(Set isomorphism). Given two n-node graphs G = (V, E, A), G′ = (V′, E′, A′), and two node sets S ⊆ V, S′ ⊆ V′, we say (S, A) and (S′, A′) are isomorphic (denoted by (S, A) ≃ (S′, A′)) if ∃π ∈ Π_n such that S = π(S′) and A = π(A′). When (V, A) ≃ (V′, A′), we say the two graphs G and G′ are isomorphic (abbreviated as A ≃ A′, because V = π(V′) for any π). Note that set isomorphism is stricter than graph isomorphism: it not only requires graph isomorphism, but also requires that the permutation map a specific subset S to another subset S′. When S ⊂ V and S′ ⊂ V′, we are often more concerned with the case A = A′, where we are to find isomorphic node sets in the same graph (automorphism). For example, when S = {i}, S′ = {j} (the single-node case) and (i, A), (j, A) are isomorphic, it means i and j are on the same orbit of graph A (i.e., they have symmetric positions/the same structural roles within the graph). An example is v2 and v3 in Figure 1. Definition 4. (Invariant function). A function f defined over the space of (S, A) is invariant if ∀π ∈ Π_n, f(S, A) = f(π(S), π(A)). Definition 5. (GNN). A GNN is an invariant function mapping from the space of (S, A) to R^d. More specifically, a GNN first performs multiple invariant message passing operations to compute a node embedding z_i = GNN(i, A) for all i ∈ S, and then performs a set aggregation (pooling) over {z_i | i ∈ S}, written as AGG({z_i | i ∈ S}), as the set S's representation GNN(S, A). Note that when |S| = 1, the set aggregation is often an identity mapping. In graph classification (S = V), we use a graph pooling layer over node embeddings to compute the graph representation. 3 GAE AND STRUCTURAL LINK REPRESENTATION.
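The relabeling rule of Definition 2, π(A)_{π(i),π(j)} = A_{i,j}, and the invariance of Definition 4 can both be checked mechanically; a small NumPy sketch (our own illustration, on a 3-node path graph):

```python
import numpy as np

def permute_adj(A, pi):
    """Return pi(A) with pi(A)[pi[i], pi[j]] = A[i, j] (Definition 2)."""
    n = A.shape[0]
    PA = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            PA[pi[i], pi[j]] = A[i, j]
    return PA

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])       # path graph 0-1-2
pi = np.array([2, 0, 1])        # node i is relabeled to pi[i]

PA = permute_adj(A, pi)

# Equivalent closed form: pi(A)[i, j] = A[pi^-1(i), pi^-1(j)]
inv = np.argsort(pi)            # argsort of a permutation is its inverse
assert np.array_equal(PA, A[np.ix_(inv, inv)])

# An invariant function (Definition 4), e.g. the sorted degree sequence,
# is unchanged by relabeling:
assert np.array_equal(np.sort(A.sum(0)), np.sort(PA.sum(0)))
print(PA)
```

The two asserts confirm that the explicit loop and the inverse-permutation indexing agree, and that a permutation leaves any structural summary of the graph unchanged.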
In this section, we review how GAE methods predict links, and show that simply aggregating node embeddings learned by a GNN cannot yield effective link representations. We use A to denote the incomplete network on which link prediction is performed. 3.1 GAE FOR LINK PREDICTION. Graph Autoencoder (GAE) methods (Kipf & Welling, 2016b) first use a GNN to compute a node embedding z_i for each node i, and then use f(z_i, z_j) to predict the link (i, j): Â_{i,j} = f(z_i, z_j), where z_i = GNN(i, A), z_j = GNN(j, A), (1) and Â_{i,j} is the predicted score for link (i, j). The model is trained to maximize the likelihood of reconstructing the true adjacency matrix. The original GAE uses a two-layer GCN (Kipf & Welling, 2016a) as the GNN and lets f(z_i, z_j) := σ(z_i^⊤ z_j). In principle, we can replace the GCN with any message passing neural network (Gilmer et al., 2017), and use an MLP over the aggregation of z_i and z_j as f(z_i, z_j). Popular aggregation functions include concatenation, mean, Hadamard product, etc. In the following, we use GAE to denote a general class of GNN-based link prediction methods, without differentiating the specific choices of GNN and f. 3.2 GAE CAN LEARN STRUCTURAL NODE REPRESENTATIONS. Following Srinivasan & Ribeiro (2020) and Li et al. (2020), we first define most expressive structural representations for nodes and links. Then we relate them to GAE-learned node embeddings and show that GAE is not capable of learning structural link representations. Definition 6. Given an invariant function Γ(·), Γ(S, A) is a most expressive structural representation for (S, A) if ∀(S, A, S′, A′), Γ(S, A) = Γ(S′, A′) ⇔ (S, A) ≃ (S′, A′). For simplicity, we use "structural representation" to denote a most expressive structural representation in the rest of the paper, and omit A when it is clear from context.
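The GAE scoring pipeline of Equation (1) can be sketched end to end; here is a toy version (our own construction: a one-matrix mean-aggregation message passing step stands in for the two-layer GCN, with untrained random weights):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def gnn_embed(A, X, W, hops=2):
    """Toy message passing: mean over neighbors (with self-loop), then a shared
    linear map. A stand-in for the two-layer GCN of the original GAE."""
    Ahat = A + np.eye(A.shape[0])
    D = Ahat.sum(1, keepdims=True)
    H = X
    for _ in range(hops):
        H = (Ahat @ H) / D @ W
    return H

def gae_score(Z, i, j):
    """Equation (1) with the inner-product decoder: score = sigma(z_i^T z_j)."""
    return sigmoid(Z[i] @ Z[j])

rng = np.random.default_rng(1)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # a path graph 0-1-2-3
X = rng.normal(size=(4, 5))                 # random node features
W = rng.normal(size=(5, 5))

Z = gnn_embed(A, X, W)                      # embeddings computed once for all nodes
print(gae_score(Z, 1, 2))                   # score of an observed edge
print(gae_score(Z, 0, 3))                   # score of a candidate link
```

Note that the embeddings Z are computed once, independently of which pair (i, j) is being queried; this per-node (rather than per-link) computation is precisely what Section 3.3 identifies as the limitation.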
We call Γ(i, A) a structural node representation for i, and Γ({i, j}, A) a structural link representation for (i, j). The above definition indicates that two node sets have the same structural representations if and only if they are isomorphic to each other. In the same graph A, structural representations uniquely mark the structural roles of nodes or node sets. This is in contrast to positional node embeddings such as DeepWalk (Perozzi et al., 2014) and matrix factorization (Mnih & Salakhutdinov, 2008), where two isomorphic nodes can have different node embeddings (Ribeiro et al., 2017). So why do we need to define structural representations? From a node classification point of view, two isomorphic nodes in a network are perfectly symmetric to each other and should be indistinguishable by any labeling function on graphs (i.e., they should have the same ground truth y). Learning a structural node representation guarantees that isomorphic nodes are always classified into the same class. A natural question to ask, then, is: do GNNs learn structural node representations? The answer is no. Recall that (i, A) ≃ (j, A′) ⇒ A ≃ A′. If a GNN could learn structural node representations, we could always use it for graph isomorphism testing by checking whether there exist two nodes in the two graphs sharing the same structural node representation. In fact, existing GNNs' graph discriminating power is bounded by the Weisfeiler-Lehman (WL) test (Morris et al., 2019; Maron et al., 2019), which provably fails to distinguish certain non-isomorphic graphs (Cai et al., 1992). Despite this, GNNs/WL are still powerful enough to learn representations that can distinguish almost all non-isomorphic nodes and graphs (Babai & Kucera, 1979). For ease of analysis, we assume there exists a node-most-expressive GNN that outputs structural node representations and is thus able to distinguish all non-isomorphic nodes.
Nevertheless, the techniques we present are not limited to node-most-expressive GNNs, but also benefit practical GNNs. Definition 7. A GNN is node-most-expressive if ∀(i, A, j, A′), GNN(i, A) = GNN(j, A′) ⇔ (i, A) ≃ (j, A′). Recall that GAE first uses a GNN to compute node embeddings. Therefore, GAE with a node-most-expressive GNN is able to leverage structural node representations for link prediction. 3.3 GAE CANNOT LEARN STRUCTURAL LINK REPRESENTATIONS. The next question is whether GAE learns structural link representations. That is, does the aggregation of the structural node representations of i and j yield a structural link representation of (i, j)? The answer is no, as shown in previous works (Srinivasan & Ribeiro, 2020; Zhang & Chen, 2020) and illustrated in the introduction. In Figure 1, we have two isomorphic nodes v2 and v3, so v2 and v3 will have the same structural node representation. By aggregating structural node representations as link representations, GAE will assign (v1, v2) and (v1, v3) the same link representation and predict them to have equal probabilities of forming a link. However, (v1, v2) and (v1, v3) apparently have different structural link representations, which indicates: Proposition 1. Even with a node-most-expressive GNN, GAE cannot learn structural link representations. The root cause of this problem is that GAE learns representations for the source and target nodes individually, without considering their relative positions and associations. For example, although v2 and v3 are perfectly symmetric in the graph, when considering the source node v1 to predict links from, the positions of v2 and v3 w.r.t. v1 are no longer symmetric.
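Both the failure and the labeling-trick fix can be reproduced with a toy message-passing model. The sketch below is our own construction, not Figure 1's exact graph: we use a 6-cycle, where every node is isomorphic to every other, yet an adjacent pair (0, 1) and an opposite pair (0, 3) have different structural link representations. Identity weights are used so the numbers can be checked by hand.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def embed(A, X, W, hops=2):
    """Toy mean-aggregation message passing (self-loop included)."""
    Ahat = A + np.eye(A.shape[0])
    D = Ahat.sum(1, keepdims=True)
    H = X
    for _ in range(hops):
        H = (Ahat @ H) / D @ W
    return H

def gae_score(A, X, W, i, j):
    """GAE: node embeddings computed once, independent of the queried pair."""
    Z = embed(A, X, W)
    return sigmoid(Z[i] @ Z[j])

def labeled_score(A, X, W, i, j):
    """Labeling trick: mark the source/target pair in an extra feature column
    and recompute embeddings per queried link (as SEAL does per subgraph)."""
    lab = np.zeros((A.shape[0], 1))
    lab[[i, j], 0] = 1.0
    Z = embed(A, np.hstack([X[:, :-1], lab]), W)
    return sigmoid(Z[i] @ Z[j])

n = 6                               # 6-cycle: all nodes are isomorphic to each other
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1.0
X = np.hstack([np.ones((n, 3)), np.zeros((n, 1))])  # constant features + empty label slot
W = np.eye(4)                       # identity weights keep the arithmetic checkable

# GAE cannot tell the adjacent pair (0,1) from the opposite pair (0,3):
print(gae_score(A, X, W, 0, 1) == gae_score(A, X, W, 0, 3))   # True

# With the labeling trick the two pairs get different scores:
print(labeled_score(A, X, W, 0, 1), labeled_score(A, X, W, 0, 3))
```

With constant features, every node of the cycle gets an identical embedding, so the two pair scores are exactly equal; marking the queried pair breaks the symmetry because the label information propagates differently for the two pairs.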
The paper focuses on the link prediction task for graph neural networks. More specifically, it compares GAE and SEAL, providing theoretical evidence for why GAE is not able to learn structural link representations, which in turn leads to suboptimal performance on the link prediction task. The paper also introduces a labeling trick that helps GNNs learn structural link representations.
This paper provides a theoretical analysis of graph neural networks for link prediction, following a number of recent papers that have developed the theoretical understanding of graph neural networks. For example, the work of Xu et al. (ICLR 2019) draws on the Weisfeiler-Lehman graph isomorphism test to develop a theoretical framework in which to analyse the expressive power of graph neural networks. In particular, they show that architectural features in common graph neural network models limit expressivity with respect to problems of node and graph classification, and show how one can construct graph neural networks whose expressivity matches the Weisfeiler-Lehman test. In more recent work, Li et al. (NeurIPS 2020) show that augmenting node features with a "distance encoding" enables a graph neural network to distinguish node sets in cases where the Weisfeiler-Lehman test fails. This submission is perhaps most closely related to this recent paper of Li et al.
Symbol-Shift Equivariant Neural Networks
Neural networks have been shown to have poor compositional abilities: while they can produce sophisticated output given sufficient data, they generalize patchily and fail to generalize to new symbols (e.g., switching a name in a sentence to a less frequent one, or to one not yet seen). In this paper, we define a class of models whose outputs are equivariant to entity permutations (an analog being convolutional networks, whose outputs are invariant to translation) without requiring entities to be specified or detected in a pre-processing step. We then show how two question-answering models can be made robust to entity permutation using a novel differentiable hybrid semantic-symbolic representation. The benefits of this approach are demonstrated on a set of synthetic NLP tasks where sample complexity and generalization are significantly improved, even allowing models to generalize to words never seen in the training set. When using only 1K training examples for bAbI, we obtain a test error of 1.8% and fail only one task, while the best results reported so far obtained an error of 9.9% and failed 7 tasks. 1 INTRODUCTION. Previous work has shown that neural networks fail to generalize to new symbols (Lake & Baroni, 2018; Sinha et al., 2019; Hupkes et al., 2019). In particular, Lake & Baroni (2018) showed that seq2seq models are able to perfectly learn a set of rules given enough data, yet fail to generalize these learned rules to new symbols. Figure 1: Test error as a function of the number of names (x-axis from 4, the original task, up to 10,000; 5% error limit marked) for MN, SMN, TPR, and STPR. The performance of the tensor product RNN (TPR) (Schlag & Schmidhuber, 2018) drops dramatically as the number of names increases, in contrast to its symbolic counterparts SMN and STPR proposed in this paper. Both models reach low error, well below the 5% limit, even when the number of names and the vocabulary become considerably larger than in the original task.
The main contribution of this paper is a hybrid semantic/symbolic representation that is equivariant to entity permutation. The main advantage and novelty of our approach is that entities need not be identified in advance: we rely solely on differentiation to determine whether a word acts like an entity. We show how to extend two question-answering models to handle this hybrid representation and demonstrate in extensive experiments the benefits of such an approach: sample complexity is significantly improved, better compositionality is obtained, and symbolic models reach better accuracy on the studied tasks, in particular when trained with less training data. The paper starts by reviewing related work; we then formally introduce what it means to permute entities, define layers that are robust to such perturbations, and show how two recent question-answering models can be adapted in this context. Finally, experiments are conducted to assess the benefits of our method. 2 RELATED WORK. Improving the compositionality of neural networks has been an important ongoing effort in recent years. The SCAN dataset proposed by Lake & Baroni (2018) initially showed how standard neural network baselines can fail to generalize to new symbols when learning a set of artificially constructed rules. Several approaches were proposed to address this issue. For instance, Lake (2019) designed meta-learning episodes that led the model to solve the task, and Nye et al. (2020) showed how one could infer symbolic neural programs with a similar meta-learning procedure. Alternatively, Gordon et al. (2020) proposed designing an equivariant model (a model whose latent representations are unchanged when symbols are permuted). A common limitation of these approaches is that they require specifying in advance which words are symbols (Lake (2019); Nye et al.
(2020) also require a substantial amount of supervision and the design of meta-episodes). An exception is Russin et al. (2019), who proposed to decompose syntax and semantics for SCAN. None of these approaches can generalize to an arbitrarily large number of entities, or to entities not seen during training, as the one we propose can. The problem of compositionality becomes much easier if symbols (or entities) are detected beforehand. For instance, Li et al. (2015) showed that replacing entities with dedicated token placeholders leads to significant improvements in question answering. The same approach has also been applied in machine translation and data-to-text generation (Luong et al. (2015); Serban et al. (2016); Lebret et al. (2016)) to enable sequence-to-sequence models to generalize to unseen words at inference time. While specifying entities in advance (or detecting them in a pre-processing step with named-entity recognition (Marsh & Perzanowski, 1998)) before applying a model may give compositionality, we would clearly prefer models able to infer automatically whether a word should behave as a symbol or not. While positional encoding (Graves et al., 2014; Vaswani et al., 2017) may give some compositionality, as it allows reasoning over positions, this solution is not practical for language because inter-word distances are not fixed. For instance, the distance between a noun and its verb varies, and positional embeddings are not enough to achieve compositionality (Hupkes et al., 2019). An interesting line of research has been the study of equivariant models, whose representations are invariant (or equivariant) to symmetries present in the data (Zaheer et al., 2017; Ravanbakhsh et al., 2017). Adding invariance to data symmetries has been theoretically shown to drastically reduce sample complexity (Sannai & Imaizumi, 2019).
For instance, convolutional neural networks require significantly less training data and achieve much better performance than an MLP because they are invariant to image translation. Gordon et al. (2020) proposed the first NLP model provably capable of handling symmetries between symbols, albeit requiring such symmetries to be specified in advance. Tensor product representations (TPR) (Smolensky, 1990) allow storing complex relations between values and variables with distributed representations and offer some compositionality. Recently, Schlag & Schmidhuber (2018) proposed an architecture able to learn TPR parameters by differentiation and obtained state-of-the-art results on bAbi at the time of publication. However, the compositionality of their approach is limited (as shown in Fig. 1) by the fact that every entity needs to be seen sufficiently many times for a proper entity vector to be found; in addition, the model has been shown to learn orthogonal representations for entities, which requires as many hidden dimensions as the total number of entities. Finally, the VARS approach (Vankov & Bowers, 2020) consists in outputting a one-hot vector representing a symbolic variable that is randomly assigned to different positions during training to enforce compositionality. While this grants some compositionality, the approach is limited because one must draw symbol permutations so that each object is seen in all possible one-hot values. In addition, one must specify in advance which objects or words behave as symbols, and the method only supports symbolic output: it can neither represent symbolic inputs nor perform computation with a hybrid representation as we propose (combining semantic and symbolic representations). 3 SYMBOL-SHIFT EQUIVARIANCE. When we learn to answer “John” given a specific context, we would like to be able to answer “Sasha” if both names were permuted in the context.
In what follows, we introduce the notion of symbol-shift equivariance, i.e., a condition restricting the possible permutations, as some permutations may perturb the sentence grammar (permuting “John” with “why”) or cause ambiguity (permuting “John” with “Mary” if the question involves a gender). We assume all words are taken from a vocabulary V, a discrete set of n words. We are interested in providing an answer a ∈ V given a context consisting of a question q = [q_1, ..., q_{n_q}] ∈ V^{n_q} and a list of sentences (or stories) x = [x_1, ..., x_T] with x_i = [x_{i1}, ..., x_{in_i}] ∈ V^{n_i}. We denote X = (x, q) and Φ(X) = a the function that predicts the answer a ∈ V given the context X = (x, q) ∈ X. Let Γ : V → V be a word permutation. Given a sequence [y_1, ..., y_n], we define Γ(y) = [Γ(y_1), ..., Γ(y_n)], where the permutation is applied to each word in the sequence; similarly, Γ(X) = ([Γ(x_1), ..., Γ(x_T)], Γ(q)). Assuming each word has an associated vector parameter, we say that a permutation is a symbol-shift if it preserves vector parameters. For instance in Figure 2, a map permuting “John” with “Sasha” is a symbol-shift, as both words share the same parameters, but a map permuting “John” and “lemon” is not. Formally, assuming each word i ≤ n has an associated set of parameters v_i ∈ R^D, we say that a permutation Γ : V → V is a symbol-shift if v_i = v_{Γ(i)} for all i ≤ n. We are now ready to define symbol-shift equivariance. A critical advantage is that we do not need to specify symmetries between entities in advance, as we instead rely on vector semantics whose embeddings are learned end-to-end. Definition 1. Let Φ : X → V be a function mapping a context to an answer. We say that Φ is symbol-shift equivariant if, for any symbol-shift Γ and for any X ∈ X, Φ(Γ(X)) = Γ(Φ(X)). 4 SYMBOLIC QUESTION ANSWERING.
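As a concrete illustration of Definition 1, the following minimal sketch (hypothetical toy data, not the paper's model) implements a purely positional reader Φ: because it never inspects name identities, only positions in the context, it is symbol-shift equivariant, and we can check Φ(Γ(X)) = Γ(Φ(X)) directly.

```python
# A toy symbol-shift-equivariant reader (illustrative only): answer with the
# subject (first word) of the most recent sentence mentioning the queried
# location. The data and query format below are hypothetical.

def answer(stories, question):
    location = question[-1]  # last word of the question names the location
    for sentence in reversed(stories):
        if location in sentence:
            return sentence[0]
    return None

def apply_shift(words, mapping):
    # Apply a word permutation Gamma elementwise, leaving other words fixed.
    return [mapping.get(w, w) for w in words]

stories = [["john", "went", "to", "the", "kitchen"],
           ["sasha", "went", "to", "the", "garden"]]
question = ["who", "is", "in", "the", "kitchen"]
gamma = {"john": "sasha", "sasha": "john"}  # a symbol-shift (names share parameters)

# Definition 1: Phi(Gamma(X)) == Gamma(Phi(X))
lhs = answer([apply_shift(s, gamma) for s in stories],
             apply_shift(question, gamma))
rhs = apply_shift([answer(stories, question)], gamma)[0]
assert lhs == rhs == "sasha"
```

A model that instead memorized "john" as the answer token would fail this check, which is exactly the failure mode Figure 1 exhibits for the non-symbolic baselines.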
In this section, we show how to define symbol-shift-equivariant models. The main idea is to concatenate two representations: a standard semantic representation in R^d and a symbolic representation in R^m, where m denotes the number of distinct words in the stories and question. The symbolic representation is constructed so that, for i ≤ m, the i-th component of the symbolic representation corresponds to the i-th word appearing in the context. For instance in Figure 3, there are m = 5 words in the context and “apple” is the fourth word by order of appearance, so its symbolic embedding is the fourth one-hot vector. The model's symbolic output assigns larger probability to the fourth word of the context, which is dereferenced to “apple”. We now describe formally how the symbolic representations are constructed and how we perform linear transformations and the projection back to the original vocabulary. Finally, we derive the symbolic counterparts of Memory Networks and TPR models, which will be proved to be symbol-shift equivariant. Mapping words into and from symbolic representations. In each input example, the set of words present in the stories x and the question q is denoted C_X = {q} ∪ {{x_i}, i ≤ T}, and we let m = |C_X| be the number of distinct words in the context. To project words to their symbolic representation in R^m, we represent each unique word by a one-hot vector encoding the position of its first appearance, using the bijection ϕ_X : C_X → [1, m]. To dereference a symbolic representation in R^m back to its vocabulary id, we define the matrix B_ϕ ∈ R^{n×m} that maps one-hot vectors of symbolic representations to one-hot vectors in R^n representing the word id in the vocabulary, as shown in Fig. 3, such that B_ϕ e^m_j = e^n_{ϕ^{-1}(j)} for j ≤ m, where e^k_l ∈ R^k denotes the l-th one-hot vector in R^k.
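The bijection ϕ and the dereferencing matrix B_ϕ just defined can be sketched as follows. The vocabulary and context below are hypothetical toy data, and indices are 0-based here, whereas the paper uses [1, m].

```python
import numpy as np

# Sketch of the word <-> symbolic-representation maps (phi and B_phi).
vocab = ["john", "took", "the", "apple", "where", "is"]          # n = 6
context_words = ["where", "is", "the", "apple", "john",
                 "took", "the", "apple"]                          # question + stories

# phi: each distinct word -> position of its first appearance in the context
phi = {}
for w in context_words:
    if w not in phi:
        phi[w] = len(phi)
m, n = len(phi), len(vocab)   # m distinct context words

# B_phi in R^{n x m} maps the j-th symbolic one-hot to the vocabulary
# one-hot of the word phi^{-1}(j), as in Fig. 3
B_phi = np.zeros((n, m))
for w, j in phi.items():
    B_phi[vocab.index(w), j] = 1.0

# Dereference a symbolic one-hot: "apple" is the 4th distinct word (index 3)
p_tilde = np.eye(m)[phi["apple"]]
word_id = int(np.argmax(B_phi @ p_tilde))
print(vocab[word_id])  # -> apple
```

Note that ϕ depends only on the order of appearance, so swapping two names that share parameters leaves the symbolic side of every embedding unchanged.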
Note that, given a one-hot symbolic representation p̃ ∈ R^m, the i-th coordinate of B_ϕ p̃ ∈ R^n is given by [B_ϕ p̃]_i = p̃_{ϕ(i)} if i ∈ C, and 0 otherwise (1); hence B_ϕ allows dereferencing a symbolic representation p̃ ∈ R^m to a word vector B_ϕ p̃ ∈ R^n. Hybrid semantic-symbolic embeddings. We embed words as the concatenation of a standard semantic word embedding and a symbolic embedding, respectively parametrized by A ∈ R^{d×n} and α ∈ [0, 1]^n. The semantic embedding maps a word x ∈ [1, n] to A e_x ∈ R^d, while the symbolic embedding of x consists of the one-hot vector of the order of appearance of the word x in its context, multiplied by a learned parameter. More precisely, it is defined as α_x e_{ϕ(x)} ∈ R^m, where α_x is the output of a sigmoid unit on a learnable parameter, i.e., 0 < α_x < 1, which indicates how much each word should behave as a symbol, and e_{ϕ(x)} is the one-hot vector of the order of appearance of the word x in its context. The final embedding of a word x then consists of the concatenation of the semantic and symbolic parts: x ↦ A e_x ⊕ α_x e_{ϕ(x)} ∈ R^{d+m}, (2) where ⊕ denotes the concatenation operator. Note that all parameters are differentiable, allowing the model to learn both word semantics and how much each word should behave as a symbol. The symbolic part will be shown to make the model robust to symbol permutation while being able to represent an arbitrary number of symbols and generalize to new ones. For instance in Fig. 3, one can see that permuting “John” with “Sasha” would not change the embeddings (as long as both names share the same parameters), since both words still appear in the same order in the context. Symbolic projection. Given an internal state h = h_sem ⊕ h_sym ∈ R^{d+m}, we interpret it as a distribution over the vocabulary p ∈ R^n with p_sem = softmax(B h_sem), p_sym = B_ϕ softmax(h_sym) (3), and p = β p_sem + (1 − β) p_sym (4), where B ∈ R^{n×d} and β ∈ [0, 1] are parameters to learn.
The final distribution is a mixture of the two distributions p_sem and p_sym. The first one, p_sem, is seen as a semantic output, as it increases the answer probability of a word whose semantic embedding is closer to the semantic state h_sem. (From now on, we drop the subscript notation when there is no ambiguity and write C, ϕ; we also omit superscript dimensions, as they are always implicitly defined. Note that if α_x = 0 for every word x, the model reduces to a standard semantic-only model.) The second one, p_sym, interprets h_sym as probabilities of words from the context, using B_ϕ to dereference positions. Indeed, denoting p̃ = softmax(h_sym) ∈ R^m and using Eq. 1, the i-th coefficient of p_sym is given by [p_sym]_i = [B_ϕ p̃]_i = p̃_{ϕ(i)} if i ∈ C, and 0 otherwise (5). Symbolic transformation. We perform a linear transformation of an internal representation h = h_sem ⊕ h_sym ∈ R^{d+m} with: h_sem ⊕ h_sym ↦ W h_sem ⊕ (λI + γ 1 1^T) h_sym ∈ R^{d+m}, (6) where W ∈ R^{d×d}, λ ∈ R, and γ ∈ R are parameters to learn, I ∈ R^{m×m} is the identity matrix, and 1 = [1, ..., 1]^T ∈ R^{m×1}. The linear transformation for the symbolic part is taken from Zaheer et al. (2017), where it was shown to be the unique form of a linear parametric equivariant function. In our case, the symbolic transformation is equivariant to permutations of the symbolic coordinates, allowing the model to be independent of the choice of a particular bijection ϕ.
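Putting the pieces together, the following sketch instantiates the hybrid embedding of Eq. (2), the equivariant symbolic map of Eq. (6), and the output mixture of Eqs. (3)-(4). All shapes, the toy ϕ, and the random stand-ins for learned parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(v):
    z = np.exp(v - v.max())
    return z / z.sum()

# Illustrative dimensions: n vocabulary words, d semantic dims, m context words.
n, d, m = 6, 4, 5
A = rng.normal(size=(d, n))                     # semantic embedding matrix
alpha = 1 / (1 + np.exp(-rng.normal(size=n)))   # per-word "symbolness" in (0, 1)

def embed(word_id, phi_pos):
    # Eq. (2): x -> A e_x (+) alpha_x e_{phi(x)}   (concatenation)
    return np.concatenate([A[:, word_id], alpha[word_id] * np.eye(m)[phi_pos]])

h = embed(word_id=3, phi_pos=1)                 # a hypothetical word, 2nd to appear
h_sem, h_sym = h[:d], h[d:]

# Eq. (6): W h_sem (+) (lambda I + gamma 1 1^T) h_sym
W, lam, gam = rng.normal(size=(d, d)), 0.8, 0.1
T = lam * np.eye(m) + gam * np.ones((m, m))
h_sem, h_sym = W @ h_sem, T @ h_sym

# T commutes with any coordinate permutation P (equivariance, Zaheer et al.)
P = np.eye(m)[rng.permutation(m)]
assert np.allclose(T @ (P @ h[d:]), P @ (T @ h[d:]))

# Eqs. (3)-(4): mixture of semantic and symbolic output distributions
B = rng.normal(size=(n, d))
B_phi = np.zeros((n, m))
for j, word_id in enumerate([4, 5, 2, 3, 0]):   # a toy phi^{-1}
    B_phi[word_id, j] = 1.0
beta = 0.3
p = beta * softmax(B @ h_sem) + (1 - beta) * B_phi @ softmax(h_sym)
assert np.isclose(p.sum(), 1.0)                 # a valid distribution over the vocab
```

Because B_ϕ has one-hot columns, B_ϕ softmax(h_sym) remains a probability vector, so the mixture p is a proper distribution for any β ∈ [0, 1]; and since T = λI + γ11^T commutes with permutations, relabeling the context order permutes p_sym consistently.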
This paper proposes a new type of model that is equivariant to entity permutations, an important property for building language models that generalize easily to new entities. The authors modify a Memory Network and a third-order tensor product RNN to make them symbol-shift equivariant. The new models are evaluated and compared on the 20 bAbi tasks. Results show that the symbolic versions of the models yield better performance than the original ones.
Symbol-Shift Equivariant Neural Networks
The authors propose a network that is equivariant to entity permutations without requiring the pre-specification of the set of entities. To this end, the authors propose a hybrid semantic-symbolic embedding which they integrate into two QA models. Finally, the authors show significant gains on the bAbi tasks, with especially impressive gains in the 1K setting.
Estimation of Number of Communities in Assortative Sparse Networks
1 INTRODUCTION. Statistical analysis of network data has become an extensively studied field within statistics and machine learning (see (Goldenberg et al., 2010; Kolaczyk & Csárdi, 2014; Newman, 2018) for reviews). Network datasets show up in several disciplines. Examples include networks originating from the biosciences, such as gene regulation networks (Emmert-Streib et al. (2014)), protein-protein interaction networks (De Las Rivas & Fontanillo (2010)), structural (Rubinov & Sporns (2010)) and functional (Friston (2011)) brain networks, and epidemiological networks (Reis et al. (2007)); networks originating from social media such as Facebook, Twitter, and LinkedIn (Faloutsos et al. (2010)); citation and collaboration networks (Lehmann et al. (2003)); and information and technological networks such as internet-based networks (Adamic & Glance (2005)), power networks (Pagani & Aiello (2013)), and cell-tower networks (Isaacman et al. (2011)). There are several active areas of research in developing statistical methodologies for network data analysis and in deriving the theoretical properties of those methods. In this paper, we focus on networks with community structure and on finding the number of communities in networks at any sparsity level. The last two decades have seen a resurgence of interest in a problem popularly known as “community detection”. A common problem definition is to partition N nodes in a graph into K communities such that edge densities differ within and between communities, where K is assumed to be known a priori. Estimating the number of communities (K) has recently become an active topic in the literature. While the initial focus for estimating K was on developing algorithms supported by domain-specific intuition and empirical studies using the Stochastic Block Model (SBM), first proposed in Holland et al.
(1983) (such as Saade et al. (2014a) and Yan et al. (2018)), there has been recent progress toward a theoretical understanding of estimating the number of communities. Bickel & Sarkar (2015) and Lei et al. (2016) proposed hypothesis testing approaches based on principal eigenvalues or singular values. Some likelihood-based methods using the BIC criterion were proposed by Wang et al. (2017) and Hu et al. (2019). From a Bayesian perspective, Riolo et al. (2017) discussed priors for the number of communities under the SBM and designed a Markov chain Monte Carlo algorithm, Kemp et al. (2006) presented a nonparametric Bayesian approach for detecting concept systems, Xu et al. (2006) introduced an infinite-state latent variable as part of a Dirichlet process mixture model, and Cerqueira & Leonardi (2020) proposed an estimator based on the integrated likelihood for the SBM. Rosvall & Bergstrom (2007) introduced the minimum description length (MDL) concept to describe network modularities when partitioning networks, and Peixoto (2013) employed MDL to detect the number of communities. Chen & Lei (2018) and Li et al. (2020) proposed cross-validation based approaches with theoretical guarantees for estimating K. Yan et al. (2018) proposed a semi-definite programming approach, and Ma et al. (2018) proposed an estimator based on the loss of binary segmentation using a pseudo-likelihood ratio. All of these approaches carry theoretical guarantees. However, all the theoretical results were obtained under the assumption that the mean density of the networks is greater than log(N). Methods based on the spectra of certain classes of matrices have become increasingly popular in recent years as non-parametric alternatives that are more computationally efficient and applicable to a wider range of settings. Most notably, the non-backtracking matrices (e.g., Krzakala et al. (2013), Saade et al.
(2014b), Coste & Zhu (2019), Bordenave et al. (2015), Saade et al. (2016)) and the Bethe Hessian matrices (e.g., Saade et al. (2015b), Lelarge (2018), Dall'Amico et al. (2019), Saade et al. (2015a), Dall'Amico et al. (2020), Saade et al. (2014a), Le & Levina (2015)) have received much attention due to their non-parametric form and competitive performance in the presence of degree heterogeneity and sparsity. In particular, unlike the non-backtracking operator, the Bethe Hessian is a real symmetric operator and hence offers additional computational advantages. Through simulations, Saade et al. (2014a) demonstrated that the Bethe Hessian outperformed the non-backtracking operator, belief propagation, and the adjacency matrix in clustering, on both accuracy and efficiency. Le & Levina (2015) proved the consistency of the method based on the spectrum of the Bethe Hessian operator in semi-dense regimes, i.e., with the expected degree d̃ at least of order log(N), and with the scalar parameter chosen from the two values commonly used in the literature, based on heuristics, for assortative and disassortative networks. However, other than those two candidate values and their variations, no other values of the scalar parameter are known to ensure the consistency result in any regime. Furthermore, real-world networks are generally much sparser, and there is no theoretical result in the literature that guarantees the effectiveness of the Bethe Hessian operator in sparser regimes. Our contribution. In this paper, we contribute to the theoretical understanding of the Bethe Hessian operator for estimating K in networks generated from the SBM in any regime, regardless of sparsity. We have three main contributions.
• We show that the method of estimating K based on the spectral properties of the Bethe Hessian matrix ( `` spectral method '' ) is statistically consistent , even in regimes more sparse than those previously considered in the literature , with the expected degree 1 d̃ log ( N ) . The precise definition of d̃ is given in §2.1 . • We provide the first-of-its-kind interval of values for the scalar parameter of the Bethe Hessian operator that serves as a sufficient condition for the spectral method to correctly estimate K asymptotically in network data . • Through extensive simulations , we demonstrate that for any value chosen from the interval for the scalar parameter , the spectral method correctly estimates K in networks regardless of sparsity . We also consider the heuristics-based values commonly used in the literature for the scalar parameter in the context of the interval . The paper is arranged as follows . We present the definitions and a formal problem statement in §2 . We present our main theoretical result and a sketch of the proof in §3 , followed by empirical methods in §4 . The simulation results and concluding remarks are given in §5 and §6 , respectively . 2 PRELIMINARIES . 2.1 NOTATION . An adjacency matrix , denoted by A , is a random matrix whose rows and columns are labeled by nodes i , j ∈ [ N ] , where Aij = 1 if there is an edge between nodes i and j and 0 otherwise , and [ N ] denotes the set { 1 , . . . , N } . The mean observed degree is denoted by d̄ : = 1N 1 T NA1N and the expected degree by d̃ : = 1N 1 T NEA1N . λ ↓ ` ( A ) denotes the ` -th largest eigenvalue of A and λ ↑ ` ( A ) denotes the ` -th smallest eigenvalue of A . 2.2 THE STOCHASTIC BLOCK MODEL . The stochastic block model ( SBM ) is a simple generative model for network data that embeds a community structure in an adjacency matrix AN×N of the randomly generated network . SBM has three parameters : ( 1 ) the number of communities K ; ( 2 ) the membership vector z = ( z1 , ... 
, zN ) that assigns a community label zi ∈ [ K ] to each node i ∈ [ N ] ; and ( 3 ) the connectivity probability matrix BK×K where the elementBab represents the probability of an edge between nodes belonging to community a and b , where a , b ∈ [ K ] . Z ∈ ZN×K > 0 is defined as the community membership matrix such that Zij = 1 if node i belongs to community j and 0 otherwise . We denote the maximum expected degree by dmax : = N maxi ∑N j=1 [ ( ZBZ T ) ij − Diag ( ZBZT ) ij ] and the maximum entry in matrix B by d/N , where d : = N maxa , b∈ [ K ] Bab . λ denotes the smallest eigenvalue of the normalized B matrix , λ : = λ↓K ( N d B ) . Ā is the expectation of A and is computed as Ā = ZBZT − Diag ( ZBZT ) . D̄ is a diagonal matrix whose i-th diagonal entry is the sum of the i-th row of Ā . Let N be the vector of true community sizes and Nmin denotes the number of nodes in the community with the lowest number of nodes in it . A network generated from the SBM with parameters K , B , Z is defined to be assortative if Baa > Bab for all a , b ∈ [ K ] with a 6= b , and if B has all positive eigenvalues ( i.e. , B has full-rankK ) . The existing works in the literature on the spectral method referenced above have considered assortative networks , and we also consider assortative networks in this paper . 2.3 THE BETHE HESSIAN MATRIX . The Bethe Hessian matrix associated with an adjacency matrix A is defined as Hζ : = ( ζ2 − 1 ) IN + D− ζA ( 2.1 ) where ζ > 1 is a real scalar parameter , D : = Diag ( A1N ) is a diagonal matrix whose i-th diagonal entry corresponds to the degree of the i-th node , and IN is an identity matrix of dimension N ×N . As a real symmetric operator , Hζ is analytically tractable and computationally efficient , and has a number of useful properties . Saade et al . 
( 2014a ) demonstrated that the community structure in A can be recovered by applying a standard clustering algorithm ( such as k-means clustering ) to the eigenvectors of Hζ corresponding to negative eigenvalues . In the spectral clustering literature , those eigenvalues whose eigenvectors encode the community structure are known as the informative eigenvalues and have been observed to be well-separated from the bulk of the spectrum . In Saade et al . ( 2014a ) , ζ was set to be the square-root of the mean observed degree as a heuristic to render informative ( negative ) eigenvalues of Hζ . Le & Levina ( 2015 ) showed that the number of informative eigenvalues of Hζ directly estimateK in the semi-dense regime ( d̃ log ( N ) ) when ζ is set to be either rm : = ( d1+ · · ·+dN ) −1 ( d21+ · · ·+ d2N ) −1 or ra : = √ ( d1 + · · ·+ dN ) /N . Both rm and ra are obtained based on heuristic arguments and are commonly used in the literature to estimate the radius of the bulk of the spectra . ra was considered in Saade et al . ( 2014a ) and the choice of rm stems from the deep connection between the spectrum of Hζ and that of another matrix which is known as the non-backtracking operator B. Denoting by m the number of edges in A , B is a 2m× 2m matrix indexed by directed edges i→ j and defined Bi→j , k→l = δjk ( 1− δil ) , where δ is the Kronecker delta and m is the number of edges . As in Hζ , the informative eigenvalues of B are well-separated from the bulk of its spectrum and are real , so it also has been used to develop many popular non-parametric methods for clustering ( see e.g. , Saade et al . ( 2014b ) , Coste & Zhu ( 2019 ) , Bordenave et al . ( 2015 ) , Bruna & Li ( 2017 ) , Gulikers et al . ( 2016 ) ) . This deep connection between Hζ and B was noted in Krzakala et al . ( 2013 ) and can be summarized by the phenomenon that , given any eigenvalue ν of B , the determinant of Hν vanishes . 
However , unlike Hζ , B is non-symmetric and its dimension ( 2m × 2m ) can get quite large . These present analytical and computational challenges when using B , and in turn have popularized Hζ as a tool for clustering . Le & Levina ( 2015 ) showed that in semi-dense regimes with expected degree d̃ log ( N ) , the number of negative eigenvalues of Hζ directly estimate K for ζ ∈ { rm , ra } , where the methods were called BHm and BHa . In addition , it was noted that the number of negative eigenvalues of Hζ tend to underestimate K when networks are unbalanced . Hence , corrections for BHm and BHa were proposed , namely BHmc and BHac , which heuristically estimate K̂ = max { k : tρn−k+1 6 ρn−k } where ρ1 > · · · > ρN are sorted eigenvalues and t > 0 is the hyperparameter . In light of this , we present the following problem we focus on in this paper : Problem Definition : Suppose that we observe one network generated from the SBM , where the parameters K , Z , B satisfy ( i ) assortativity , and ( ii ) the sparsity condition d̃ = o ( log ( N ) ) . For the appropriate choices of ζ , are the negative eigenvalues of the Bethe Hessian matrix Hζ still informative for estimating K ? If so , what are the appropriate choices for ζ ? Can there be other heuristic choices for ζ ? Are the popular heuristic choices of ζ , i.e. , rm and ra as defined above ( hereinafter “ heuristic choices '' ) , appropriate in the above sense ?
The authors propose a spectral framework using the Bethe Hessian matrix to infer the number of communities in sparse networks. The method relies on the eigendecomposition of the Bethe Hessian matrix for which negative eigenvalues are preserved and the number of such eigenvalues used to define the number of communities. In particular, theoretical guarantees for settings of the scalar of the Bethe Hessian matrix is derived including an associated spectral estimation procedure.
SP:93ca1ca8da285e1dbe05f3c83a51042ad0a1b3be
Estimation of Number of Communities in Assortative Sparse Networks
1 INTRODUCTION. Statistical analysis of network data has become an extensively studied field within statistics and machine learning (see Goldenberg et al. (2010); Kolaczyk & Csárdi (2014); Newman (2018) for reviews). Network datasets show up in several disciplines. Examples include networks originating from the biosciences, such as gene regulation networks (Emmert-Streib et al. (2014)), protein-protein interaction networks (De Las Rivas & Fontanillo (2010)), structural (Rubinov & Sporns (2010)) and functional (Friston (2011)) brain networks, and epidemiological networks (Reis et al. (2007)); networks originating from social media such as Facebook, Twitter, and LinkedIn (Faloutsos et al. (2010)); citation and collaboration networks (Lehmann et al. (2003)); and information and technological networks such as internet-based networks (Adamic & Glance (2005)), power networks (Pagani & Aiello (2013)), and cell-tower networks (Isaacman et al. (2011)). There are several active areas of research in developing statistical methodologies for network data analysis and in deriving the theoretical properties of these methods. In this paper, we focus on networks with community structure and on finding the number of communities in networks of arbitrary sparsity level. The last two decades saw a resurgence of interest in a problem popularly known as "community detection". A common formulation is to partition N nodes in a graph into K communities such that edge densities differ within and between communities, where K is assumed to be known a priori. Estimating the number of communities (K) has recently become an active problem in the literature. While the initial focus for estimating K was on developing algorithms supported by domain-specific intuition and empirical studies using the Stochastic Block Model (SBM), first proposed in Holland et al.
(1983) (such as Saade et al. (2014a) and Yan et al. (2018)), there has been recent progress on attaining a theoretical understanding of estimating the number of communities. Bickel & Sarkar (2015) and Lei et al. (2016) proposed hypothesis testing approaches based on principal eigenvalues or singular values. Likelihood-based methods using the BIC criterion were proposed by Wang et al. (2017) and Hu et al. (2019). From a Bayesian perspective, Riolo et al. (2017) discussed priors for the number of communities under the SBM and designed a Markov chain Monte Carlo algorithm, Kemp et al. (2006) presented a nonparametric Bayesian approach for detecting concept systems, Xu et al. (2006) introduced an infinite-state latent variable as part of a Dirichlet process mixture model, and Cerqueira & Leonardi (2020) proposed an estimator based on the integrated likelihood for the SBM. Rosvall & Bergstrom (2007) introduced the minimum description length (MDL) principle to describe network modularity when partitioning networks, and Peixoto (2013) employed MDL to detect the number of communities. Chen & Lei (2018) and Li et al. (2020) proposed cross-validation based approaches with theoretical guarantees to estimate K. Yan et al. (2018) proposed a semi-definite programming approach, and Ma et al. (2018) proposed an estimator based on the loss of binary segmentation using a pseudo-likelihood ratio. All of these approaches have theoretical guarantees; however, the theoretical results were all obtained under the assumption that the mean degree of the networks grows at least as fast as log(N). Methods based on the spectra of certain classes of matrices have become increasingly popular in recent years as non-parametric alternatives that are more computationally efficient and applicable to a wider range of settings. Most notably, the non-backtracking matrices (e.g., Krzakala et al. (2013), Saade et al.
(2014b), Coste & Zhu (2019), Bordenave et al. (2015), Saade et al. (2016)) and the Bethe Hessian matrices (e.g., Saade et al. (2015b), Lelarge (2018), Dall'Amico et al. (2019), Saade et al. (2015a), Dall'Amico et al. (2020), Saade et al. (2014a), Le & Levina (2015)) have received much attention due to their non-parametric form and competitive performance in the presence of degree heterogeneity and sparsity. In particular, unlike the non-backtracking operator, the Bethe Hessian is a real symmetric operator and hence offers additional computational advantages. Through simulations, Saade et al. (2014a) demonstrated that the Bethe Hessian outperformed the non-backtracking operator, belief propagation, and the adjacency matrix for clustering in both accuracy and efficiency. Le & Levina (2015) proved the consistency of the method based on the spectrum of the Bethe Hessian operator in semi-dense regimes, i.e., with expected degree d̃ ≳ log(N), and with the scalar parameter chosen from the two values commonly used in the literature based on heuristics for assortative and disassortative networks. However, other than these two candidate values and their variations, no other values of the scalar parameter are known to ensure the consistency result in any regime. Furthermore, real-world networks are generally much sparser, and there is no theoretical result in the literature that guarantees the effectiveness of the Bethe Hessian operator in sparser regimes. Our contribution. In this paper, we contribute to the theoretical understanding of the Bethe Hessian operator for estimating K in networks generated from the SBM in any regime, regardless of sparsity. We make three main contributions.
• We show that the method of estimating K based on the spectral properties of the Bethe Hessian matrix ("spectral method") is statistically consistent, even in regimes sparser than those previously considered in the literature, with expected degree 1 ≪ d̃ ≪ log(N). The precise definition of d̃ is given in §2.1.
• We provide a first-of-its-kind interval of values for the scalar parameter of the Bethe Hessian operator that serves as a sufficient condition for the spectral method to correctly estimate K asymptotically in network data.
• Through extensive simulations, we demonstrate that for any value chosen from this interval for the scalar parameter, the spectral method correctly estimates K in networks regardless of sparsity. We also examine the heuristics-based values commonly used in the literature for the scalar parameter in the context of the interval.
The paper is arranged as follows. We present the definitions and a formal problem statement in §2. We present our main theoretical result and a sketch of the proof in §3, followed by empirical methods in §4. The simulation results and concluding remarks are given in §5 and §6, respectively.
2 PRELIMINARIES.
2.1 NOTATION.
An adjacency matrix, denoted by A, is a random matrix whose rows and columns are labeled by nodes i, j ∈ [N], where A_ij = 1 if there is an edge between nodes i and j and 0 otherwise, and [N] denotes the set {1, . . . , N}. The mean observed degree is denoted by d̄ := (1/N) 1_N^T A 1_N and the expected degree by d̃ := (1/N) 1_N^T E[A] 1_N. λ_ℓ^↓(A) denotes the ℓ-th largest eigenvalue of A and λ_ℓ^↑(A) the ℓ-th smallest eigenvalue of A.
2.2 THE STOCHASTIC BLOCK MODEL.
The stochastic block model (SBM) is a simple generative model for network data that embeds a community structure in the adjacency matrix A ∈ {0, 1}^{N×N} of the randomly generated network. The SBM has three parameters: (1) the number of communities K; (2) the membership vector z = (z1, ...
, zN) that assigns a community label zi ∈ [K] to each node i ∈ [N]; and (3) the connectivity probability matrix B ∈ [0, 1]^{K×K}, where the element B_ab represents the probability of an edge between nodes belonging to communities a and b, for a, b ∈ [K]. Z ∈ {0, 1}^{N×K} is defined as the community membership matrix such that Z_ij = 1 if node i belongs to community j and 0 otherwise. We denote the maximum expected degree by d_max := max_{i∈[N]} Σ_{j=1}^N [(ZBZ^T)_{ij} − Diag(ZBZ^T)_{ij}] and the maximum entry of B by d/N, where d := N max_{a,b∈[K]} B_ab. λ denotes the smallest eigenvalue of the normalized B matrix, λ := λ_K^↓((N/d)B). Ā is the expectation of A and is computed as Ā = ZBZ^T − Diag(ZBZ^T). D̄ is a diagonal matrix whose i-th diagonal entry is the sum of the i-th row of Ā. Let N be the vector of true community sizes and N_min the size of the smallest community. A network generated from the SBM with parameters K, B, Z is defined to be assortative if B_aa > B_ab for all a, b ∈ [K] with a ≠ b, and if B has all positive eigenvalues (i.e., B has full rank K). The existing works on the spectral method referenced above consider assortative networks, and we also consider assortative networks in this paper.
2.3 THE BETHE HESSIAN MATRIX.
The Bethe Hessian matrix associated with an adjacency matrix A is defined as
H_ζ := (ζ² − 1) I_N + D − ζA,    (2.1)
where ζ > 1 is a real scalar parameter, D := Diag(A 1_N) is a diagonal matrix whose i-th diagonal entry is the degree of the i-th node, and I_N is the N × N identity matrix. As a real symmetric operator, H_ζ is analytically tractable and computationally efficient, and has a number of useful properties. Saade et al.
(2014a) demonstrated that the community structure in A can be recovered by applying a standard clustering algorithm (such as k-means) to the eigenvectors of H_ζ corresponding to negative eigenvalues. In the spectral clustering literature, the eigenvalues whose eigenvectors encode the community structure are known as the informative eigenvalues, and they have been observed to be well separated from the bulk of the spectrum. In Saade et al. (2014a), ζ was set to the square root of the mean observed degree as a heuristic to render the informative eigenvalues of H_ζ negative. Le & Levina (2015) showed that the number of informative eigenvalues of H_ζ directly estimates K in the semi-dense regime (d̃ ≳ log(N)) when ζ is set to either r_m := √((d_1² + · · · + d_N²)/(d_1 + · · · + d_N) − 1) or r_a := √((d_1 + · · · + d_N)/N). Both r_m and r_a are obtained from heuristic arguments and are commonly used in the literature to estimate the radius of the bulk of the spectrum. r_a was considered in Saade et al. (2014a), and the choice of r_m stems from the deep connection between the spectrum of H_ζ and that of another matrix known as the non-backtracking operator B. Denoting by m the number of edges in A, B is a 2m × 2m matrix indexed by directed edges i → j and defined by B_{i→j, k→l} = δ_jk (1 − δ_il), where δ is the Kronecker delta. As with H_ζ, the informative eigenvalues of B are well separated from the bulk of its spectrum and are real, so B has also been used to develop many popular non-parametric clustering methods (see, e.g., Saade et al. (2014b), Coste & Zhu (2019), Bordenave et al. (2015), Bruna & Li (2017), Gulikers et al. (2016)). The deep connection between H_ζ and B was noted in Krzakala et al. (2013) and can be summarized by the phenomenon that, for any eigenvalue ν of B, the determinant of H_ν vanishes.
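To make the estimation procedure concrete, here is a minimal Python sketch (not the authors' code; the SBM parameters are made up for illustration) that samples a small balanced assortative SBM, forms H_ζ with the heuristic ζ = √d̄ (the r_a choice described above), and counts negative eigenvalues to estimate K:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample a small assortative SBM with K = 3 planted communities.
K, n_per = 3, 150
N = K * n_per
z = np.repeat(np.arange(K), n_per)            # community labels z_i
B = np.full((K, K), 0.01) + np.eye(K) * 0.09  # within = 0.10, between = 0.01
P = B[z][:, z]                                # N x N edge-probability matrix
A = (rng.random((N, N)) < P).astype(float)
A = np.triu(A, 1)
A = A + A.T                                   # symmetric, no self-loops

# Bethe Hessian H_zeta = (zeta^2 - 1) I + D - zeta A,
# with the heuristic zeta = sqrt(mean observed degree) (r_a).
deg = A.sum(axis=1)
zeta = np.sqrt(deg.mean())
H = (zeta**2 - 1.0) * np.eye(N) + np.diag(deg) - zeta * A

# Estimate K as the number of negative eigenvalues of H_zeta.
K_hat = int((np.linalg.eigvalsh(H) < 0).sum())
print(K_hat)
```

With this strong within/between separation the negative eigenvalues of H_ζ match the planted K; in sparser or unbalanced settings the count can under- or over-shoot, which is exactly the regime the paper studies.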
However, unlike H_ζ, B is non-symmetric and its dimension (2m × 2m) can be quite large. These features present analytical and computational challenges when using B, which in turn have popularized H_ζ as a tool for clustering. Le & Levina (2015) showed that in semi-dense regimes with expected degree d̃ ≳ log(N), the number of negative eigenvalues of H_ζ directly estimates K for ζ ∈ {r_m, r_a}; the corresponding methods were called BHm and BHa. In addition, it was noted that the number of negative eigenvalues of H_ζ tends to underestimate K when networks are unbalanced. Hence, corrections for BHm and BHa were proposed, namely BHmc and BHac, which heuristically estimate K̂ = max{k : t·ρ_{N−k+1} ≤ ρ_{N−k}}, where ρ_1 ≥ · · · ≥ ρ_N are the sorted eigenvalues and t > 0 is a hyperparameter. In light of this, we present the problem we focus on in this paper. Problem Definition: Suppose that we observe one network generated from the SBM, where the parameters K, Z, B satisfy (i) assortativity and (ii) the sparsity condition d̃ = o(log(N)). For appropriate choices of ζ, are the negative eigenvalues of the Bethe Hessian matrix H_ζ still informative for estimating K? If so, what are the appropriate choices for ζ? Can there be other heuristic choices for ζ? Are the popular heuristic choices of ζ, i.e., r_m and r_a as defined above (hereinafter "heuristic choices"), appropriate in the above sense?
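The two heuristic scalar choices can be computed directly from an observed degree sequence; a minimal sketch, following the forms of r_m and r_a described above (the degree sequence here is made up):

```python
import numpy as np

# A made-up observed degree sequence d_1, ..., d_N.
deg = np.array([3, 5, 4, 6, 2, 5, 4, 3, 7, 1], dtype=float)

# r_a: square root of the mean observed degree.
r_a = np.sqrt(deg.sum() / deg.size)

# r_m: bulk-radius estimate tied to the non-backtracking spectrum,
# sqrt( sum(d_i^2) / sum(d_i) - 1 ).
r_m = np.sqrt(deg @ deg / deg.sum() - 1.0)

print(r_a, r_m)
```

For this sequence the mean degree is 4, so r_a = 2 exactly, while r_m ≈ 1.94; the two heuristics coincide only when the degrees are homogeneous.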
In this paper the authors consider the problem of computing the number of communities K in an arbitrarily sparse graph generated under the Stochastic Block Model (SBM). Previous studies that consider the problem of computing K show theoretical guarantees only for graphs with average degree $\Omega(\log n)$. One of the previous studies (namely, [1]) has shown that the number of communities equals the number of negative eigenvalues of the Bethe Hessian matrix for graphs with expected average degree $\Omega(\log n)$ under the SBM. In this paper the authors show that with an appropriate scalar parameter for the Bethe Hessian matrix of graphs with average degree $o(\log n)$, the same property still holds, and thus obtain a method for computing K in graphs of sublogarithmic density. In particular, the authors give an interval for the choice of the $\zeta$ scalar that depends on several parameters of the underlying SBM distribution.
SP:93ca1ca8da285e1dbe05f3c83a51042ad0a1b3be
Policy Optimization in Zero-Sum Markov Games: Fictitious Self-Play Provably Attains Nash Equilibria
1 INTRODUCTION. Multi-agent reinforcement learning (MARL) (Bu et al., 2008; Sutton & Barto, 2018) has achieved great empirical success, e.g., in playing the game of Go (Silver et al., 2016; 2017), Dota 2 (Berner et al., 2019), and StarCraft 2 (Vinyals et al., 2019), all driven by policy optimization algorithms that iteratively update policies parameterized by deep neural networks. Empirically, the popularity of policy optimization algorithms for MARL is attributed to the observation that they usually converge faster than value-based methods, which iteratively update value functions (Mnih et al., 2016; O'Donoghue et al., 2016). Compared with their empirical success, the theoretical understanding of policy optimization algorithms in the MARL setting (Littman, 1994; Hu & Wellman, 2003; Conitzer & Sandholm, 2007; Pérolat et al., 2016; Zhang et al., 2018) remains limited. Although convergence guarantees for various policy optimization algorithms have been established in the single-agent RL setting (Sutton et al., 2000; Konda & Tsitsiklis, 2000; Kakade, 2002; Agarwal et al., 2019; Wang et al., 2019), extending those guarantees to arguably one of the simplest MARL settings, the two-player zero-sum Markov game, faces challenges in the following two aspects. First, in such a Markov game, each agent interacts with the opponent as well as the environment. From the perspective of each agent, it operates in an environment that is altered by the actions of the opponent. As a result, due to the existence of an opponent, the policy optimization problem of each agent has a time-varying objective function, in stark contrast with value-based methods such as value iteration (Shapley, 1953; Littman, 1994), where a central controller specifies the policies of both players.
When the joint policy of both players is considered, solving for the optimal value function corresponds to finding the fixed point of the Bellman operator, which is defined independently of the players' policies. Second, when viewing policy optimization in a zero-sum Markov game as a single optimization problem over both players, although we then have a fixed objective function, the problem is a minimax optimization with a non-convex non-concave objective. Even in classical optimization, this kind of problem remains poorly understood (Cherukuri et al., 2017; Rafique et al., 2018; Daskalakis & Panageas, 2018; Mertikopoulos et al., 2018), and it has been observed that first-order methods such as gradient descent may fail to converge (Balduzzi et al., 2018; Mazumdar & Ratliff, 2018). As an initial step toward studying policy optimization for MARL, we propose a novel policy optimization algorithm for any player of a multi-player Markov game, dubbed smooth fictitious self-play (FSP). Specifically, when a player adopts smooth FSP, in each iteration it first solves a policy evaluation problem that estimates the value function associated with the current joint policy of all players. It then updates its own policy via an entropy-regularized proximal policy optimization (PPO) (Schulman et al., 2017) step, where the update direction is obtained from the estimated value function. This algorithm can be viewed as an extension to Markov games of the fictitious play (FP) algorithm designed for normal-form games (Von Neumann & Morgenstern, 2007; Shapley, 1953) and extensive-form games (Heinrich et al., 2015; Perolat et al., 2018). FP is a general algorithmic framework for solving games in which an agent first infers the policies of its opponents and then adopts a policy that best responds to the inferred opponents.
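The smoothed-best-response idea underlying smooth FSP is easiest to see in a normal-form game. The following is a toy Python sketch of smoothed fictitious play in a zero-sum matrix game (rock-paper-scissors), not the paper's Markov-game algorithm: each player softmax-responds to the opponent's running-average strategy, and the averages drift toward the (here uniform) equilibrium.

```python
import numpy as np

# Payoff matrix for Player 1 in rock-paper-scissors (zero-sum:
# Player 2's payoff is -R).
R = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def softmax(x, tau):
    """Smoothed (entropy-regularized) best response with temperature tau."""
    z = np.exp((x - x.max()) / tau)
    return z / z.sum()

tau = 0.5
avg1 = np.array([0.8, 0.1, 0.1])   # running average strategies,
avg2 = np.array([0.1, 0.8, 0.1])   # started away from equilibrium
for t in range(1, 5001):
    br1 = softmax(R @ avg2, tau)        # smoothed best response to opponent
    br2 = softmax(-R.T @ avg1, tau)
    avg1 += (br1 - avg1) / t            # fictitious-play averaging
    avg2 += (br2 - avg2) / t

print(avg1, avg2)  # both drift toward the uniform equilibrium strategy
```

The temperature tau here plays the role of the entropy-regularization parameter: smaller tau means a sharper, closer-to-exact best response.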
When viewing our algorithm as an FP method, instead of estimating the policies of the opponents directly, the agent infers the opponents implicitly by estimating the value function. Moreover, the policy update corresponds to a smoothed best-response policy (Swenson & Poor, 2019) based on the inferred value function. To examine the theoretical merits of the proposed algorithm, we focus on two-player zero-sum Markov games and let both players follow smooth FSP, i.e., with self-play. Moreover, we restrict attention to a class of Lipschitz games (Radanovic et al., 2019) in which the impact of each player's policy change on the environment is Lipschitz continuous with respect to the magnitude of the policy change. For such a Markov game, we tackle the challenge of non-stationarity by imposing entropy regularization, which brings algorithmic stability. In addition, to establish convergence to a Nash equilibrium, we explicitly characterize the geometry of the policy optimization problem from a functional perspective. Specifically, we prove that the objective function, as a bivariate function of the two players' policies, despite being non-convex and non-concave, satisfies a one-point strong monotonicity condition (Facchinei & Pang, 2007) at a Nash equilibrium. Thanks to such benign geometry, we prove that smooth FSP converges to a neighborhood of a Nash equilibrium at a sublinear Õ(1/T) rate, where T is the number of policy iterations and Õ hides logarithmic factors. Moreover, as a byproduct of our analysis, if either of the two players deviates from the proposed algorithm, we show that the other player, following smooth FSP, exploits such deviation by finding the best-response policy at the same sublinear rate. This Hannan consistency property exhibited by our algorithm is related to Hennes et al. (2020), which focuses on normal-form games.
Thus, our results also serve as a first step toward connecting regret minimization in normal-form/extensive-form games and Markov games. Contribution. Our contribution is two-fold. First, we propose a novel policy optimization algorithm for Markov games, which can be viewed as a generalization of FP. Second, when applied to a class of two-player zero-sum Markov games satisfying a Lipschitz regularity condition, our algorithm provably enjoys global convergence to a neighborhood of a Nash equilibrium at a sublinear rate. To the best of our knowledge, we propose the first provable FSP-type algorithm with a finite-time convergence guarantee for zero-sum Markov games. Related Work. There is a large body of literature on value-based methods for zero-sum Markov games (Lagoudakis & Parr, 2012; Pérolat et al., 2016; Zhang et al., 2018; Zou et al., 2019). More recently, Perolat et al. (2018) proved that actor-critic fictitious play asymptotically converges to the Nash equilibrium, while our work provides a finite-time convergence guarantee to a neighborhood of a Nash equilibrium. In addition, Zhang et al. (2020) study the sample complexity of planning algorithms in the model-based MARL setting, as opposed to the model-free setting with function approximation considered in this paper. Closely related to the smooth FSP proposed in this paper is a line of work on best-response algorithms (Heinrich et al., 2015; Heinrich & Silver, 2016), which have also shown great empirical performance (Dudziak, 2006; Xiao et al., 2013; Kawamura et al., 2017). However, they are only applicable to extensive-form games and not directly applicable to stochastic games. Our smooth FSP is also related to Swenson & Poor (2019), which focuses on potential games.
That work does not enforce entropy regularization and only provides an asymptotic convergence guarantee to a neighborhood of the Nash equilibrium for smooth fictitious play in multi-player two-action potential games. Moreover, our work also falls into the realm of regularization and smoothing techniques in reinforcement learning (Dai et al., 2017; Geist et al., 2019; Shani et al., 2019; Cen et al., 2020), which focus on the single-agent setting.
2 BACKGROUND.
In this section, we briefly introduce the general setting of reinforcement learning for two-player zero-sum Markov games. Zero-Sum Markov Games. We consider the two-player zero-sum Markov game (S, A^1, A^2, P, r, γ), where S ⊂ R^d is a compact state space, A^1 and A^2 are the finite action spaces of Player 1 and Player 2, respectively, P : S × S × A^1 × A^2 → [0, 1] is the Markov transition kernel, r : S × A^1 × A^2 → [−1, 1] is the reward function of Player 1, which implies that the reward function of Player 2 is −r, and γ ∈ (0, 1) is the discount factor. Let r_1 = r and r_2 = −r be the reward functions of Player 1 and Player 2, respectively. For notational simplicity, throughout this paper we write Player −i for Player i's opponent, where i ∈ {1, 2}, and we omit i ∈ {1, 2} where it is clear from the context. Also, we denote by E_{π^i,π^{−i}}[·] the expectation over the trajectory induced by the policy pair [π^i; π^{−i}]. Given a policy π^{−i} : A^{−i} × S → [0, 1] of Player −i, the performance of a policy π^i : A^i × S → [0, 1] of Player i is evaluated by its state-value function (V_i-function) V_i^{π^i,π^{−i}} : S → R, defined as
V_i^{π^i,π^{−i}}(s) = E_{π^i,π^{−i}}[ ∑_{t=0}^∞ γ^t · r_i(s_t, a_t^i, a_t^{−i}) | s_0 = s ].
(2.1)
Correspondingly, the performance of a policy π^i : A^i × S → [0, 1] of Player i is also evaluated by its action-value function (Q_i-function) Q_i^{π^i,π^{−i}} : S × A^i × A^{−i} → R, defined by the following Bellman equation:
Q_i^{π^i,π^{−i}}(s, a^i, a^{−i}) = r_i(s, a^i, a^{−i}) + γ · E_{s′∼P(·|s, a^i, a^{−i})}[ V_i^{π^i,π^{−i}}(s′) ].
We denote by ν_{π^i,π^{−i}}(s) and σ_{π^i,π^{−i}}(s, a^i, a^{−i}) = π^i(a^i | s) · π^{−i}(a^{−i} | s) · ν_{π^i,π^{−i}}(s) the stationary state distribution and the stationary state-action distribution associated with the policy pair [π^i; π^{−i}], respectively. Correspondingly, we denote by E_{σ_{π^i,π^{−i}}}[·] and E_{ν_{π^i,π^{−i}}}[·] the expectations E_{(s,a^i,a^{−i})∼σ_{π^i,π^{−i}}}[·] and E_{s∼ν_{π^i,π^{−i}}}[·], respectively. Throughout this paper, we denote by 〈·, ·〉 the inner product between vectors. Let [π^1_*, π^2_*] be a Nash equilibrium of the two-player zero-sum Markov game (S, A^1, A^2, P, r, γ), which exists (Shapley, 1953) and satisfies J(π^1, π^2_*) ≤ J(π^1_*, π^2_*) ≤ J(π^1_*, π^2) for all policy pairs [π^1; π^2]. Here we define the performance function as
J(π^1, π^2) = E_{ν_*}[ V_1^{π^1,π^2}(s) ],    (2.2)
where ν_* is the stationary distribution ν_{π^1_*, π^2_*}.
Regularized Markov Games. Based on the definition of the two-player zero-sum Markov game (S, A^1, A^2, P, r, γ), we define its entropy-regularized counterpart (S, A^1, A^2, P, r, γ, λ_1, λ_2), where λ_1, λ_2 ≥ 0 are the regularization parameters. Specifically, (S, A^1, A^2, P, r, γ, λ_1, λ_2) is defined as the two-player general-sum Markov game with the reward function of Player i replaced by its entropy-regularized counterpart r_{(i)}^{π^i,π^{−i}} : S × A^i × A^{−i} → R, defined as
r_{(i)}^{π^i,π^{−i}}(s, a^i, a^{−i}) = r_i(s, a^i, a^{−i}) − λ_i · log π^i(a^i | s).
( 2.3 ) With a slight abuse of notation , we write rπ i , π−i i ( s ) = Eπi , π−i [ ri ( s , a i , a−i ) ] , rπ i , π−i ( i ) ( s ) = Eπi , π−i [ rπ i , π−i ( i ) ( s , a i , a−i ) ] = rπ i , π−i i ( s ) + λi ·H ( πi ( · | s ) ) as the state-reward function and the entropy-regularized state-reward function , respectively . Here H ( πi ( · | s ) ) = − ∑ ai∈Ai π i ( ai | s ) · log πi ( ai | s ) is the Shannon entropy . For Player i , the entropyregularized state-value function ( V ( i ) -function ) V πi , π−i ( i ) : S → R and the entropy-regularized action-value function ( Q ( i ) -function ) Q πi , π−i ( i ) : S ×A i ×A−i → R are defined as V π i , π−i ( i ) ( s ) = Eπi , π−i [ ∞∑ t=0 γt · rπ i , π−i ( i ) ( st , a i t , a −i t ) ∣∣∣∣ s0 = s ] , ( 2.4 ) Qπ i , π−i ( i ) ( s , a i , a−i ) = ri ( s , a i , a−i ) + γ · Es′∼P ( · | s , ai , a−i ) [ V π i , π−i ( i ) ( s ′ ) ] , ( 2.5 ) respectively . By the definition of rπ i , π−i ( i ) in ( 2.3 ) , we have that , for all policy pairs [ π i ; π−i ] and s ∈ S , ∣∣∣Eπi , π−i [ rπi , π−i ( i ) ( s , ai , a−i ) ] ∣∣∣ ≤ 1 + λi · log |Ai| , which , by ( 2.4 ) and ( 2.5 ) implies that , for all policy pairs [ πi ; π−i ] and ( s , ai , a−i ) ∈ S ×Ai ×A−i , ∣∣V πi , π−i ( i ) ( s ) ∣∣ ≤ V max ( i ) = 1 + λi · log |Ai|1− γ , ( 2.6 ) ∣∣Qπi , π−i ( i ) ( s , ai , a−i ) ∣∣ ≤ Qmax ( i ) = 1 + γ · ( 1 + λi · log |Ai| ) 1− γ . ( 2.7 )
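The boundedness claim (2.6) can be checked numerically on a small example. Below is a minimal sketch that iterates the entropy-regularized Bellman equation (2.4) for Player 1 under a fixed policy pair; the two-state, two-action game, the random rewards and transitions, and the uniform policies are all illustrative assumptions, not part of the paper.

```python
import math
import random

random.seed(0)

# Hypothetical toy zero-sum Markov game: 2 states, 2 actions per player.
n_states, n_actions = 2, 2
gamma, lam = 0.9, 0.5  # discount factor gamma and entropy weight lambda_i


def random_dist(n):
    """A random probability vector of length n."""
    w = [random.random() for _ in range(n)]
    z = sum(w)
    return [wi / z for wi in w]


# Player 1 reward r(s, a1, a2) in [-1, 1]; random transition kernel P.
r = [[[random.uniform(-1.0, 1.0) for _ in range(n_actions)]
      for _ in range(n_actions)] for _ in range(n_states)]
P = [[[random_dist(n_states) for _ in range(n_actions)]
      for _ in range(n_actions)] for _ in range(n_states)]

pi1 = [1.0 / n_actions] * n_actions  # uniform policy for Player 1
pi2 = [1.0 / n_actions] * n_actions  # uniform policy for Player 2


def evaluate_regularized_value(n_iter=2000):
    """Fixed-point iteration of the entropy-regularized Bellman equation
    (2.4) for Player 1 under the fixed policy pair [pi1; pi2]."""
    ent = -sum(p * math.log(p) for p in pi1)  # Shannon entropy H(pi1(.|s))
    V = [0.0] * n_states
    for _ in range(n_iter):
        V_new = []
        for s in range(n_states):
            v = lam * ent  # entropy bonus from the regularized reward (2.3)
            for a1 in range(n_actions):
                for a2 in range(n_actions):
                    ev_next = sum(P[s][a1][a2][s2] * V[s2]
                                  for s2 in range(n_states))
                    v += pi1[a1] * pi2[a2] * (r[s][a1][a2] + gamma * ev_next)
            V_new.append(v)
        V = V_new
    return V


V = evaluate_regularized_value()
V_max = (1 + lam * math.log(n_actions)) / (1 - gamma)  # the bound (2.6)
print(V, V_max)
```

Since the per-step regularized reward is bounded by 1 + lambda * log|A|, every entry of the converged value vector must lie within the bound (2.6), which the run confirms.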
The authors consider self-play in zero-sum discounted two-player Markov games with a compact state space and finite actions. They present a smooth fictitious self-play algorithm where each player adopts an entropy-regularized policy optimization method using the average of the past generated Q-values. Under appropriate assumptions, among them a Lipschitz regularity condition on the Markov game, the authors prove that this algorithm approximates the Nash equilibrium at a rate O(1/T), where T is the number of iterations.
SP:529fd3a7215e22cd444370bd86d7b0522cdbd526
Policy Optimization in Zero-Sum Markov Games: Fictitious Self-Play Provably Attains Nash Equilibria
1 INTRODUCTION

Multi-agent reinforcement learning (MARL) (Bu et al., 2008; Sutton & Barto, 2018) has achieved great empirical success, e.g., in playing the game of Go (Silver et al., 2016; 2017), Dota 2 (Berner et al., 2019), and StarCraft 2 (Vinyals et al., 2019), all driven by policy optimization algorithms that iteratively update policies parameterized by deep neural networks. Empirically, the popularity of policy optimization algorithms for MARL is attributed to the observation that they usually converge faster than value-based methods, which iteratively update the value functions (Mnih et al., 2016; O'Donoghue et al., 2016). Compared with their empirical success, the theoretical aspects of policy optimization algorithms in the MARL setting (Littman, 1994; Hu & Wellman, 2003; Conitzer & Sandholm, 2007; Pérolat et al., 2016; Zhang et al., 2018) remain less understood. Although convergence guarantees for various policy optimization algorithms have been established in the single-agent RL setting (Sutton et al., 2000; Konda & Tsitsiklis, 2000; Kakade, 2002; Agarwal et al., 2019; Wang et al., 2019), extending those guarantees to arguably one of the simplest MARL settings, the two-player zero-sum Markov game, faces challenges in the following two aspects. First, in such a Markov game, each agent interacts with the opponent as well as the environment. From each agent's perspective, the environment is altered by the actions of the opponent. As a result, due to the existence of an opponent, the policy optimization problem of each agent has a time-varying objective function, which is in stark contrast with value-based methods such as value iteration (Shapley, 1953; Littman, 1994), where there is a central controller that specifies the policies of both players.
When the joint policies of both players are considered, the problem of solving for the optimal value function corresponds to finding the fixed point of the Bellman operator, which is defined independently of the players' policies. Second, when viewing policy optimization in a zero-sum Markov game as a joint optimization problem for both players, although we have a fixed objective function, the problem is a minimax optimization with a non-convex non-concave objective. Even in classical optimization, this kind of problem remains less understood (Cherukuri et al., 2017; Rafique et al., 2018; Daskalakis & Panageas, 2018; Mertikopoulos et al., 2018). It has been observed that first-order methods such as gradient descent might fail to converge (Balduzzi et al., 2018; Mazumdar & Ratliff, 2018). As an initial step towards studying policy optimization for MARL, we propose a novel policy optimization algorithm for any player of a multi-player Markov game, dubbed smooth fictitious self-play (FSP). Specifically, when a player adopts smooth FSP, in each iteration it first solves a policy evaluation problem that estimates the value function associated with the current joint policy of all players. Then it updates its own policy via an entropy-regularized proximal policy optimization (PPO) (Schulman et al., 2017) step, where the update direction is obtained from the estimated value function. This algorithm can be viewed as an extension of the fictitious play (FP) algorithm, designed for normal-form games (Von Neumann & Morgenstern, 2007; Shapley, 1953) and extensive-form games (Heinrich et al., 2015; Perolat et al., 2018), to Markov games. FP is a general algorithmic framework for solving games in which an agent first infers the policies of the opponents and then adopts a policy that best responds to the inferred opponent policies.
When viewing our algorithm as an FP method, instead of estimating the policies of the opponents directly, the agent infers the opponents implicitly by estimating the value function. Moreover, the policy update corresponds to a smoothed best-response policy (Swenson & Poor, 2019) based on the inferred value function. To examine the theoretical merits of the proposed algorithm, we focus on two-player zero-sum Markov games and let both players follow smooth FSP, i.e., with self-play. Moreover, we restrict attention to a class of Lipschitz games (Radanovic et al., 2019) in which the impact of each player's policy change on the environment is Lipschitz continuous with respect to the magnitude of the policy change. For such a Markov game, we tackle the challenge of non-stationarity by imposing entropy regularization, which brings algorithmic stability. In addition, to establish convergence to a Nash equilibrium, we explicitly characterize the geometry of the policy optimization problem from a functional perspective. Specifically, we prove that the objective function, as a bivariate function of the two players' policies, despite being non-convex and non-concave, satisfies a one-point strong monotonicity condition (Facchinei & Pang, 2007) at a Nash equilibrium. Thanks to this benign geometry, we prove that smooth FSP converges to a neighborhood of a Nash equilibrium at a sublinear Õ(1/T) rate, where T is the number of policy iterations and Õ hides logarithmic factors. Moreover, as a byproduct of our analysis, if either of the two players deviates from the proposed algorithm, the other player, by following smooth FSP, exploits such a deviation by finding the best-response policy at the same sublinear rate. The Hannan consistency property exhibited by our algorithm is related to Hennes et al. (2020), which focuses on normal-form games.
Thus, our results also serve as a first step towards connecting regret minimization in normal-form/extensive-form games and Markov games.

Contribution. Our contribution is two-fold. First, we propose a novel policy optimization algorithm for Markov games, which can be viewed as a generalization of FP. Second, when applied to a class of two-player zero-sum Markov games satisfying a Lipschitz regularity condition, our algorithm provably enjoys global convergence to a neighborhood of a Nash equilibrium at a sublinear rate. To the best of our knowledge, we propose the first provable FSP-type algorithm with a finite-time convergence guarantee for zero-sum Markov games.

Related Work. There is a large body of literature on value-based methods for zero-sum Markov games (Lagoudakis & Parr, 2012; Pérolat et al., 2016; Zhang et al., 2018; Zou et al., 2019). More recently, Perolat et al. (2018) prove that actor-critic fictitious play asymptotically converges to the Nash equilibrium, while our work provides a finite-time convergence guarantee to a neighborhood of a Nash equilibrium. In addition, Zhang et al. (2020) study the sample complexity of a planning algorithm in the model-based MARL setting, as opposed to the model-free setting with function approximation considered in this paper. Closely related to the smooth FSP proposed in this paper is a line of work on best-response algorithms (Heinrich et al., 2015; Heinrich & Silver, 2016), which have also shown great empirical performance (Dudziak, 2006; Xiao et al., 2013; Kawamura et al., 2017). However, they are only applicable to extensive-form games and not directly applicable to stochastic games. Also, our smooth FSP is related to Swenson & Poor (2019), which focuses on potential games.
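As a toy illustration of the smoothed best-response-and-averaging idea behind smooth FSP, the sketch below runs smooth fictitious play on the matching-pennies matrix game. This is a hypothetical minimal example, not the paper's Markov-game algorithm (which also involves policy evaluation and an entropy-regularized PPO step); the payoff matrix, temperature, and iteration count are all illustrative choices.

```python
import math


def softmax(u, tau):
    """Smoothed (entropy-regularized) best response: softmax of payoffs."""
    m = max(u)
    e = [math.exp((x - m) / tau) for x in u]
    z = sum(e)
    return [x / z for x in e]


# Matching pennies: Player 1's payoff matrix; Player 2 receives the negative.
A = [[1.0, -1.0], [-1.0, 1.0]]
tau = 0.1  # smoothing temperature (plays the role of the entropy weight)

x_bar, y_bar = [1.0, 0.0], [1.0, 0.0]  # empirical average strategies
for t in range(1, 20001):
    # Expected payoff of each pure action against the opponent's average.
    u1 = [sum(A[i][j] * y_bar[j] for j in range(2)) for i in range(2)]
    u2 = [-sum(x_bar[i] * A[i][j] for i in range(2)) for j in range(2)]
    br1, br2 = softmax(u1, tau), softmax(u2, tau)
    # Fictitious-play averaging of the smoothed best responses.
    x_bar = [(t * xb + b) / (t + 1) for xb, b in zip(x_bar, br1)]
    y_bar = [(t * yb + b) / (t + 1) for yb, b in zip(y_bar, br2)]

# Both averages should approach the unique Nash equilibrium (1/2, 1/2).
print(x_bar, y_bar)
```

Hard best responses would cycle forever in this game; the smoothing (softmax) together with the 1/t averaging damps the oscillation, which is the intuition behind the stability brought by entropy regularization in the Markov-game setting.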
This paper studies the problem of learning to play a Nash equilibrium in two-player, zero-sum Markov games. This is a longstanding problem, with many algorithms proposed but relatively few theoretical convergence guarantees, and most of those either for quite restricted settings or under strong assumptions. This is in stark contrast to the stateless setting of normal-form games, where we have many strong theoretical convergence guarantees. The main algorithm is a version of the classic fictitious play algorithm. Like prior adaptations of fictitious play to Markov games, it operates on the Q-values, but a key novelty (at least in the stateful setting; similar ideas were recently applied in a special case of normal form games by Swenson and Poor 2019) is the use of a particular form of regularization in the best-response process. The main result is that, as long as the game satisfies Lipschitz and concentrability properties for each player when the other plays optimally, and the policy updates are sufficiently accurate, play converges to a Nash equilibrium.
SP:529fd3a7215e22cd444370bd86d7b0522cdbd526
H-divergence: A Decision-Theoretic Probability Discrepancy Measure
1 INTRODUCTION

Quantifying the difference between two probability distributions is a fundamental problem in machine learning. Modelers choose different types of discrepancies, or probability divergences, to encode their prior knowledge, i.e., which aspects should be considered when evaluating the difference, and how they should be weighted. The divergences used in machine learning typically fall into two categories: integral probability metrics (IPMs, Müller (1997)) and f-divergences (Csiszár, 1964). IPMs, such as the Wasserstein distance and the maximum mean discrepancy (MMD), are based on the idea that if two distributions are identical, any function should have the same expectation under both distributions. An IPM is defined as the maximum difference in expectation over a set of functions. IPMs are used to define training objectives for generative models (Arjovsky et al., 2017), perform independence tests (Doran et al., 2014), and carry out robust optimization (Esfahani & Kuhn, 2018), among many other applications. On the other hand, f-divergences, such as the KL divergence and the Jensen-Shannon divergence, are based on the idea that if two distributions are identical, they assign the same likelihood to every point, so the likelihood ratio always equals one. One can define a distance based on how the likelihood ratio differs from one. The KL divergence underlies some of the most commonly used training objectives for both supervised and unsupervised machine learning algorithms, such as minimizing the cross-entropy loss. We propose a third category of divergences, called H-divergences, that overlaps with but does not coincide with the set of integral probability metrics or the set of f-divergences. Our distance is based on a generalization (DeGroot et al., 1962) of Shannon entropy and the quadratic entropy (Burbea & Rao, 1982).
Instead of measuring the best average code length over encoding schemes (Shannon entropy), the generalized entropy can use any loss function (rather than code length) and any set of actions (rather than encoding schemes), and is defined as the best expected loss over the set of actions. In particular, given two distributions p and q, we compare the generalized entropy of the mixture distribution (p + q)/2 with the generalized entropies of p and q individually. Intuitively, if p and q are different, it is more difficult to minimize the expected loss under the mixture distribution (p + q)/2, and hence the mixture distribution should have higher generalized entropy; if p and q are identical, then the mixture distribution is identical to p or q, and hence should have the same generalized entropy. We define the divergence based on the difference between the entropy of the mixture distribution and the entropies of the individual distributions. Our distance strictly generalizes the maximum mean discrepancy and the Jensen-Shannon divergence. We illustrate this via the Venn diagram in Figure 1. This generalization allows us to choose special losses and action spaces to leverage inductive biases and machine learning models from different problem domains. For example, if we choose the generalized entropy as the maximum log likelihood of deep generative models, we are able to recover a distance that works well for distributions over high-dimensional images. To demonstrate the empirical utility of our proposed divergence, we use it for the task of two-sample testing, where the goal is to identify whether two sets of samples come from the same distribution or not. A test based on a probability discrepancy declares two sets of samples different if their discrepancy exceeds some threshold. We use H-divergences based on generalized entropies defined by the log likelihood of off-the-shelf generative models. Compared to state-of-the-art tests based on, e.g.,
MMD with deep kernels (Liu et al., 2020), tests based on the H-divergence achieve better test power on a large set of benchmark datasets. As another application, we use the H-divergence for sample quality evaluation, where the goal is to compare a set of samples (e.g., generated images from a GAN) with ground-truth samples (e.g., real images). We show that H-divergences generally increase monotonically with the amount of corruption added to the samples (which should lead to worse sample quality), even in certain situations where the FID score (Heusel et al., 2017) is not monotonically increasing. Finally, we show that the H-divergence can be used to understand whether a distribution change affects decision making. As an illustrative example, we study whether climate change affects decision making in agriculture and energy production. Traditional divergences (such as KL) let policy makers measure whether the climate has changed; the H-divergence can provide additional information on whether the change is relevant to decision making for different social and economic activities.

2 BACKGROUND

2.1 PROBABILITY DISTANCES

Let $\mathcal{X}$ denote a finite set or a finite-dimensional vector space, and let $\mathcal{P}(\mathcal{X})$ denote the set of probability distributions on $\mathcal{X}$ that have a density. We consider the problem of defining a probability divergence between any two distributions in $\mathcal{P}(\mathcal{X})$, where a probability divergence is any function $D \colon \mathcal{P}(\mathcal{X}) \times \mathcal{P}(\mathcal{X}) \to \mathbb{R}$ that satisfies $D(p\|q) \ge 0$ and $D(p\|p) = 0$ for all $p, q \in \mathcal{P}(\mathcal{X})$ (note that in general a divergence does not require $D(p\|q) > 0$ for all $p \ne q$).

Integral Probability Metrics. Let $\mathcal{F}$ denote some set of functions $\mathcal{X} \to \mathbb{R}$. The integral probability metric is defined as
$$\mathrm{IPM}_{\mathcal{F}}(p\|q) = \sup_{f \in \mathcal{F}} \bigl|\mathbb{E}_p[f(X)] - \mathbb{E}_q[f(X)]\bigr|.$$
Several important divergences belong to the class of integral probability metrics.
Examples include the Wasserstein distance, where $\mathcal{F}$ is the set of 1-Lipschitz functions, and the total variation distance, where $\mathcal{F}$ is the set of functions $\mathcal{X} \to [-1, 1]$. The maximum mean discrepancy (MMD) (Rao, 1982; Burbea & Rao, 1984; Gretton et al., 2012) chooses a kernel function $k \colon \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$ and is defined by
$$\mathrm{MMD}(p\|q) = \mathbb{E}_{p,p}[k(X, Y)] + \mathbb{E}_{q,q}[k(X, Y)] - 2\,\mathbb{E}_{p,q}[k(X, Y)].$$
MMD is an IPM where $\mathcal{F}$ is the set of unit-norm functions in the RKHS associated with the kernel $k$.

f-Divergences. Choose any convex continuous function $f \colon \mathbb{R}_+ \to \mathbb{R}$ such that $f(1) = 0$; the f-divergence is defined as (assuming densities exist)
$$D_f(p\|q) = \mathbb{E}_q\bigl[f\bigl(p(X)/q(X)\bigr)\bigr].$$
Examples of f-divergences include the KL divergence, where $f \colon t \mapsto t \log t$, and the Jensen-Shannon divergence, where $f \colon t \mapsto (t + 1) \log\bigl(\tfrac{2}{t+1}\bigr) + t \log t$.

Scoring Rule Distances (Grünwald et al., 2004; Gneiting & Raftery, 2007). Another large class of probability distances is defined by proper scoring rules. A function $S \colon \mathcal{P}(\mathcal{X}) \times \mathcal{P}(\mathcal{X}) \to \mathbb{R}$ is called a proper scoring rule if for all $p, q \in \mathcal{P}(\mathcal{X})$ we have $S(p, q) \ge S(p, p)$. Intuitively, it is any function that is small when two distributions are identical and large when they differ. Given a scoring rule $S$, we can define a distance by $D_S(p\|q) = S(p, q) - S(p, p)$.

2.2 H-ENTROPY

For any action space $\mathcal{A}$ and any loss function $\ell \colon \mathcal{X} \times \mathcal{A} \to \mathbb{R}$, the H-entropy (DeGroot et al., 1962; DeGroot, 2005; Grünwald et al., 2004) is defined as $H_\ell(p) = \inf_{a \in \mathcal{A}} \mathbb{E}_p[\ell(X, a)]$. In words, the H-entropy is the Bayes-optimal loss of a decision maker who must select some action $a$ not for a particular $x$, but for an expectation over $p(x)$. H-entropy generalizes several important notions of uncertainty.
Examples include: Shannon entropy, where $\mathcal{A}$ is the set of probabilities $\mathcal{P}(\mathcal{X})$ and $\ell(x, a) = -\log a(x)$; variance, where $\mathcal{A} = \mathcal{X}$ and $\ell(x, a) = \|x - a\|_2^2$; and predictive V-entropy, where $\mathcal{A} \subset \mathcal{P}(\mathcal{X})$ is some subset of distributions and $\ell(x, a) = -\log a(x)$ (Xu et al., 2020). The most important property that we will use is that the H-entropy is concave.

Lemma 1 (DeGroot et al., 1962). For any choice of $\ell \colon \mathcal{X} \times \mathcal{A} \to \mathbb{R}$, $H_\ell$ is a concave function.

This lemma can be proved by observing that an infimum of linear functions is concave, i.e., it is always better to pick an optimal action for $p$ and $q$ separately rather than a single one for both:
$$H_\ell(\alpha p + (1 - \alpha) q) = \inf_a \bigl(\alpha\,\mathbb{E}_p[\ell(X, a)] + (1 - \alpha)\,\mathbb{E}_q[\ell(X, a)]\bigr) \ge \alpha \inf_a \mathbb{E}_p[\ell(X, a)] + (1 - \alpha) \inf_a \mathbb{E}_q[\ell(X, a)] = \alpha H_\ell(p) + (1 - \alpha) H_\ell(q).$$
This lemma reflects why $H_\ell$ can be thought of as a measure of entropy or uncertainty. If the distribution is more uncertain (e.g., a mixture of $p$ and $q$ rather than $p$ or $q$ separately), then the optimal action suffers a higher loss.

3 DEFINITION AND THEORETICAL PROPERTIES

3.1 H-JENSEN-SHANNON DIVERGENCE

As a warm-up, we first present a special case of our definition.

Definition 1 (H-Jensen-Shannon divergence).
$$D^{\mathrm{JS}}_\ell(p, q) = H_\ell\Bigl(\frac{p + q}{2}\Bigr) - \frac{1}{2}\bigl(H_\ell(p) + H_\ell(q)\bigr). \quad (1)$$
The above is a divergence between $p$ and $q$ because the H-entropy is concave, so $D^{\mathrm{JS}}_\ell$ is always nonnegative. In particular, if we choose $H_\ell$ as the Shannon entropy, Definition 1 recovers the usual Jensen-Shannon divergence. Other special choices of entropy recover definitions in (Burbea & Rao, 1982). In addition, we can define a divergence for any convex combination $\alpha p + (1 - \alpha) q$ with $\alpha \in (0, 1)$, but in this paper we only consider $\alpha = 1/2$.

3.2 GENERAL H-DIVERGENCE

In addition to the H-Jensen-Shannon divergence, there are other functions based on the H-entropy that satisfy the requirements of a divergence.
For example, the following quantity
$$D^{\mathrm{Min}}_\ell(p, q) = H_\ell\Bigl(\frac{p + q}{2}\Bigr) - \min\bigl(H_\ell(p), H_\ell(q)\bigr) \quad (2)$$
is also a valid divergence (this will be proved later as a special case of Lemma 2). We can define a general set of divergences that includes the above two divergences as follows.

Definition 2 (H-divergence). For two distributions $p, q$ on $\mathcal{X}$, choose any continuous function $\phi \colon \mathbb{R}^2 \to \mathbb{R}$ such that $\phi(\theta, \lambda) > 0$ whenever $\theta + \lambda > 0$ and $\phi(\theta, \lambda) = 0$ whenever $\theta + \lambda = 0$, and define
$$D^\phi_\ell(p\|q) = \phi\Bigl(H_\ell\Bigl(\frac{p + q}{2}\Bigr) - H_\ell(p),\; H_\ell\Bigl(\frac{p + q}{2}\Bigr) - H_\ell(q)\Bigr).$$
Intuitively, $H_\ell(\frac{p+q}{2}) - H_\ell(p)$ and $H_\ell(\frac{p+q}{2}) - H_\ell(q)$ measure how much more difficult it is to minimize the loss on the mixture distribution $(p + q)/2$ than on $p$ and $q$, respectively. $\phi$ is a general class of functions that converts these differences into a divergence while satisfying the desirable properties in the next section. The H-divergence generalizes all the previous definitions, as shown by the following proposition; therefore any property of the H-divergence is inherited by, e.g., the H-Jensen-Shannon divergence.

Proposition 1. Choose $\phi(\theta, \lambda) = \frac{\theta + \lambda}{2}$; then $D^\phi_\ell(p, q)$ is the H-Jensen-Shannon divergence in Eq. (1). Choose $\phi(\theta, \lambda) = \max(\theta, \lambda)$; then $D^\phi_\ell(p, q)$ is the H-Min divergence in Eq. (2).
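These definitions are easy to exercise on discrete distributions. The sketch below (with two arbitrarily chosen distributions on a three-point space, an illustrative assumption) computes the H-entropy for the Shannon case, checks numerically that $D^{JS}_\ell$ then coincides with the classical Jensen-Shannon divergence, and compares the $\phi$ = mean and $\phi$ = max choices from Proposition 1.

```python
import math


def shannon_H(p):
    """H-entropy with A = P(X) and loss l(x, a) = -log a(x): the infimum
    over actions a is attained at a = p, giving the Shannon entropy."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)


def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


def mix(p, q):
    """The mixture distribution (p + q) / 2."""
    return [(pi + qi) / 2 for pi, qi in zip(p, q)]


def d_js(p, q, H=shannon_H):
    """H-Jensen-Shannon divergence, Eq. (1): phi(theta, lam) = (theta+lam)/2."""
    return H(mix(p, q)) - 0.5 * (H(p) + H(q))


def d_min(p, q, H=shannon_H):
    """H-Min divergence, Eq. (2): phi(theta, lam) = max(theta, lam)."""
    return H(mix(p, q)) - min(H(p), H(q))


# Two arbitrary distributions on a 3-point space (illustrative values).
p = [0.7, 0.2, 0.1]
q = [0.1, 0.3, 0.6]

m = mix(p, q)
classical_js = 0.5 * kl(p, m) + 0.5 * kl(q, m)  # the usual JS divergence
print(d_js(p, q), classical_js, d_min(p, q))
```

With the Shannon H-entropy, expanding the KL terms shows $\frac{1}{2}\mathrm{KL}(p\|m) + \frac{1}{2}\mathrm{KL}(q\|m) = H(m) - \frac{1}{2}(H(p) + H(q))$ exactly, so the two numbers agree up to floating-point error; and since $\min \le$ mean, the H-Min divergence dominates the H-Jensen-Shannon divergence.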
This paper proposes a H divergence that is a generalization of many popular f divergences and IPMs. The paper gives an empirical estimator with convergence rates for this divergence, where the rates are very fast when the two distributions are equal. The paper shows how the empirical estimator has practical use for two sample tests and measuring the corruption of a sample. The proposed H divergence is "useful" when the two distributions are close to each other, but as the authors acknowledge in the future work, it is an open question whether it could be "useful" in other cases.
SP:8d4e00c5a4fac78c1fa8a9161fd4e5c72f7ad508
H-divergence: A Decision-Theoretic Probability Discrepancy Measure
1 INTRODUCTION . Quantifying the difference between two probability distributions is a fundamental problem in machine learning . Modelers choose different types of discrepancies , or probability divergences , to encode their prior knowledge , i.e . which aspects should be considered to evaluate the difference , and how they should be weighted . The divergences used in machine learning typically fall into two categories , integral probability metrics ( IPMs , Müller ( 1997 ) ) , and f -divergences ( Csiszár , 1964 ) . IPMs , such as the Wasserstein distance , maximum mean discrepancy ( MMD ) , are based on the idea that if two distributions are identical , any function should have the same expectation under both distributions . IPM is defined as the maximum difference in expectation for a set of functions . IPMs are used to define training objectives for generative models ( Arjovsky et al. , 2017 ) , perform independence tests ( Doran et al. , 2014 ) , robust optimization ( Esfahani & Kuhn , 2018 ) among many other applications . On the other hand , f -divergences , such as the KL divergence and the Jensen Shannon divergence , and are based on the idea that if two distributions are identical , they assign the same likelihood to every point , so the ratio of the likelihood always equals one . One can define a distance based on the how the likelihood ratio differs from one . KL divergence underlies some of the most commonly used training objectives for both supervised and unsupervised machine learning algorithms , such as minimizing the cross entropy loss . We propose a third category of divergences called H-divergences that overlaps with but does not equate the set of integral probability metrics or the set f -divergences . Our distance is based on a generalization ( DeGroot et al. , 1962 ) of Shannon entropy and the quadratic entropy ( Burbea & Rao , 1982 ) . 
Instead of measuring the best average code length of any encoding scheme ( Shannon entropy ) , the generalized entropy can choose any loss function ( rather than code length ) and set of actions ( rather than encoding schemes ) , and is defined as the best expected loss among the set of actions . In particular , given two distribution p and q , we compare the generalized entropy of the mixture distribution ( p + q ) /2 and the generalized entropy of p and q individually . Intuitively , if p and q are different , it is more difficult to minimize expected loss under the mixture distribution ( p+ q ) /2 , and hence the mixture distribution should have higher generalized entropy ; if p and q are identical , then the mixture distribution is identical to p or q , and hence should have the same generalized entropy . We define the divergence based on the difference between entropy of the mixture distribution and the entropy of individual distributions . Our distance strictly generalizes the maximum mean discrepancy and the Jensen Shannon divergence . We illustrate this via the Venn diagram in Figure 1 . This generalization allows us to choose special losses and actions spaces to leverage inductive biases and machine learning models from different problem domains . For example , if we choose the generalized entropy as the maximum log likelihood of deep generative models , we are able to recover a distance that works well for distributions over high dimensional images . To demonstrate the empirical utility of our proposed divergence , we use it for the task of two sample test , where the goal is to identify whether two sets of samples come from the same distribution or not . A test based on a probability discrepancy declares two sets of samples different if their discrepancy exceed some threshold . We use H-divergences based on generalized entropy defined by the log likelihood of off-the-shelf generative models . Compared to state-of-the-art tests based on e.g . 
MMD with deep kernels ( Liu et al. , 2020 ) , tests based on the H-divergence achieve better test power on a large set of benchmark datasets . As another application , we use H-divergence for sample quality evaluation , where the goal is to compare a set of samples ( e.g . generated images from a GAN ) with ground truth samples ( e.g . real images ) . We show that H-divergences generally monotonically increase with the amount of corruption added to the samples ( which should lead to worse sample quality ) , even in certain situations where the FID score ( Heusel et al. , 2017 ) is not monotonically increasing . Finally we show that H-Divergence can be used to understand whether distribution change affect decision making . As an illustrative example , we study whether climate change affect decision making in agriculture and energy production . Traditional divergences ( such as KL ) let policy makers measure if the climate has changed ; H-Divergence can provide additional information on whether the change is relevant to decision making for different social and economic activities . 2 BACKGROUND . 2.1 PROBABILITY DISTANCES . Let X denote a finite set or a finite dimensional vector space , and P ( X ) denote the set of probability distributions on X that have a density . We consider the problem of defining a probability divergence between any two distributions in P ( X ) , where a probability divergence is any function D : P ( X ) × P ( X ) → R that satisfies D ( p‖q ) ≥ 0 , D ( p‖p ) = 0 , ∀p , q ∈ P ( X ) ( Note that in general a divergence does not require D ( p‖q ) > 0 ∀p 6= q ) . Integral Probability Metrics Let F denote some set of functions X → R. The integral probability metrics is defined as IPMF ( p‖q ) = sup f∈F |Ep [ f ( X ) ] − Eq [ f ( X ) ] | Several important divergences belong to integral probability metrics . 
Examples include the Wasserstein distance, where F is the set of 1-Lipschitz functions, and the total variation distance, where F is the set of functions X → [−1, 1]. The maximum mean discrepancy (MMD) (Rao, 1982; Burbea & Rao, 1984; Gretton et al., 2012) chooses a kernel function k : X × X → R+ and is defined by MMD(p‖q) = E_{p,p} k(X, Y) + E_{q,q} k(X, Y) − 2 E_{p,q} k(X, Y). MMD is an IPM where F is the set of unit-norm functions in the RKHS associated with the kernel k. f-Divergences. Choose any convex continuous function f : R+ → R such that f(1) = 0; the f-divergence is defined as (assuming densities exist) D_f(p‖q) = E_q[f(p(X)/q(X))]. Examples of f-divergences include the KL divergence, where f : t ↦ t log t, and the Jensen Shannon divergence, where f : t ↦ (t+1) log(2/(t+1)) + t log t. Scoring Rule Distances (Grünwald et al., 2004; Gneiting & Raftery, 2007). Another large class of probability distances is defined by proper scoring rules. A function S : P(X) × P(X) → R is called a proper scoring rule if for all p, q ∈ P(X) we have S(p, q) ≥ S(p, p). Intuitively, it is any function that is small when two distributions are identical and large when they differ. Given a scoring rule S, we can define a distance by D_S(p‖q) = S(p, q) − S(p, p). 2.2 H-ENTROPY. For any action space A and any loss function ℓ : X × A → R, the H-entropy (DeGroot et al., 1962; DeGroot, 2005; Grünwald et al., 2004) is defined as H_ℓ(p) = inf_{a∈A} E_p[ℓ(X, a)]. In words, the H-entropy is the Bayes-optimal loss of a decision maker who must select some action a not for a particular x, but for an expectation over p(x). The H-entropy generalizes several important notions of uncertainty.
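The MMD above can be estimated from samples by replacing expectations with empirical averages. A minimal sketch with a Gaussian (RBF) kernel (the kernel choice and bandwidth are illustrative assumptions, not specified by the text):

```python
import numpy as np

def rbf_kernel(x, y, bandwidth=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 * bandwidth^2))
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mmd2(xs, ys, bandwidth=1.0):
    """Biased empirical estimate of MMD^2(p, q) from samples xs ~ p, ys ~ q:
    E_{p,p} k(X, Y) + E_{q,q} k(X, Y) - 2 E_{p,q} k(X, Y)."""
    kxx = rbf_kernel(xs, xs, bandwidth).mean()
    kyy = rbf_kernel(ys, ys, bandwidth).mean()
    kxy = rbf_kernel(xs, ys, bandwidth).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (500, 2)), rng.normal(0, 1, (500, 2)))
diff = mmd2(rng.normal(0, 1, (500, 2)), rng.normal(2, 1, (500, 2)))
```

With identical distributions the estimate stays near zero; a mean shift drives it up.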
Examples include: the Shannon entropy, where A is the set of probability distributions P(X) and ℓ(x, a) = −log a(x); the variance, where A = X and ℓ(x, a) = ‖x − a‖²; and the predictive V-entropy, where A ⊂ P(X) is some subset of distributions and ℓ(x, a) = −log a(x) (Xu et al., 2020). The most important property we will use is that the H-entropy is concave. Lemma 1 (DeGroot et al., 1962). For any choice of ℓ : X × A → R, H_ℓ is a concave function. This lemma can be proved by observing that an infimum of linear functions is concave, i.e., it is always better to pick an optimal action for p and q separately than a single action for both: H_ℓ(αp + (1−α)q) = inf_a (α E_p[ℓ(X, a)] + (1−α) E_q[ℓ(X, a)]) ≥ α inf_a E_p[ℓ(X, a)] + (1−α) inf_a E_q[ℓ(X, a)] = α H_ℓ(p) + (1−α) H_ℓ(q). This lemma reflects why H_ℓ can be thought of as a measure of entropy or uncertainty: if the distribution is more uncertain (e.g., a mixture of p and q rather than p or q separately), then the optimal action suffers a higher loss. 3 DEFINITION AND THEORETICAL PROPERTIES. 3.1 H-JENSEN SHANNON DIVERGENCE. As a warm-up, we first present a special case of our definition. Definition 1 (H-Jensen Shannon divergence). D^JS_ℓ(p, q) = H_ℓ((p + q)/2) − (1/2)(H_ℓ(p) + H_ℓ(q)). (1) This is a divergence between p and q because the H-entropy is concave, so D^JS_ℓ is always nonnegative. In particular, if we choose H_ℓ as the Shannon entropy, Definition 1 recovers the usual Jensen Shannon divergence. Other special choices of entropy recover definitions in (Burbea & Rao, 1982). In addition, we can define a divergence for any convex combination αp + (1−α)q with α ∈ (0, 1), but in this paper we only consider α = 1/2. 3.2 GENERAL H-DIVERGENCE. In addition to the H-Jensen Shannon divergence, there are other functions based on the H-entropy that satisfy the requirements of a divergence.
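The two example losses and Lemma 1 can be checked numerically on a small discrete distribution. A sketch (the support values and distributions are illustrative): with the log loss the Bayes-optimal action is p itself, so H_ℓ is the Shannon entropy; with the squared loss the optimal action is the mean, so H_ℓ is the variance.

```python
import numpy as np

def shannon_H(p):
    # H_ell with A = P(X) and ell(x, a) = -log a(x): the optimal action is
    # a = p itself, so the Bayes loss is the Shannon entropy of p.
    p = np.asarray(p, dtype=float)
    return -np.sum(p[p > 0] * np.log(p[p > 0]))

def variance_H(p, support):
    # H_ell with A = X and ell(x, a) = (x - a)^2: the optimal action is the
    # mean of X under p, so the Bayes loss is the variance.
    p = np.asarray(p, dtype=float)
    mean = np.dot(p, support)
    return np.dot(p, (support - mean) ** 2)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
support = np.array([0.0, 1.0, 2.0])

# Lemma 1 (concavity): H((p+q)/2) >= (H(p) + H(q)) / 2 for any H_ell.
gap_shannon = shannon_H((p + q) / 2) - 0.5 * (shannon_H(p) + shannon_H(q))
gap_variance = variance_H((p + q) / 2, support) \
    - 0.5 * (variance_H(p, support) + variance_H(q, support))
```

Both gaps are nonnegative, as Lemma 1 guarantees for any loss.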
For example, the quantity D^Min_ℓ = H_ℓ((p + q)/2) − min(H_ℓ(p), H_ℓ(q)) (2) is also a valid divergence (this will be proved later as a special case of Lemma 2). We can define a general family of divergences that includes both of the above with the following definition. Definition 2 (H-divergence). For two distributions p, q on X, choose any continuous function φ : R² → R such that φ(θ, λ) > 0 whenever θ + λ > 0 and φ(θ, λ) = 0 whenever θ + λ = 0, and define D^φ_ℓ(p‖q) = φ( H_ℓ((p + q)/2) − H_ℓ(p), H_ℓ((p + q)/2) − H_ℓ(q) ). Intuitively, H_ℓ((p+q)/2) − H_ℓ(p) and H_ℓ((p+q)/2) − H_ℓ(q) measure how much more difficult it is to minimize loss on the mixture distribution (p+q)/2 than on p and q, respectively. φ is a general class of functions that converts these differences into a divergence while satisfying the desirable properties in the next section. The H-divergence generalizes all the previous definitions, as shown by the following proposition; therefore, any property of the H-divergence is inherited by, e.g., the H-Jensen Shannon divergence. Proposition 1. Choosing φ(θ, λ) = (θ + λ)/2, D^φ_ℓ(p, q) is the H-Jensen Shannon divergence in Eq. (1). Choosing φ(θ, λ) = max(θ, λ), D^φ_ℓ(p, q) is the H-Min divergence in Eq. (2).
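With ℓ the log loss (so H_ℓ is the Shannon entropy), Definition 1 reduces to the classical Jensen Shannon divergence, and the φ = max choice gives the H-Min divergence of Eq. (2). A quick numerical sketch on discrete distributions (natural log; the particular p and q are illustrative):

```python
import numpy as np

def shannon_H(p):
    p = np.asarray(p, dtype=float)
    return -np.sum(p[p > 0] * np.log(p[p > 0]))

def kl(p, q):
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def h_js(p, q):
    # Eq. (1): D^JS_ell(p, q) = H((p+q)/2) - (H(p) + H(q)) / 2
    return shannon_H((p + q) / 2) - 0.5 * (shannon_H(p) + shannon_H(q))

def h_min(p, q):
    # Eq. (2): D^Min_ell = H((p+q)/2) - min(H(p), H(q)), i.e. phi = max
    return shannon_H((p + q) / 2) - min(shannon_H(p), shannon_H(q))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.3, 0.6])
m = (p + q) / 2
# classical JS divergence: (KL(p||m) + KL(q||m)) / 2
classical_js = 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

For the Shannon H-entropy, `h_js` matches the classical JS divergence exactly, and `h_min` dominates it since max(θ, λ) ≥ (θ+λ)/2.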
The distance or divergence between two probability distributions is essential for machine learning. This paper introduces a new class of divergence functions based on optimal decision losses. The authors first introduce a class of entropy functionals, namely the optimal expected loss as a function of the action and state. This type of function extends the classical entropy function, including the negative Boltzmann-Shannon entropy. Using it, they further construct a divergence based on the mixture of probability densities. Several propositions and numerical experiments demonstrate the effectiveness of the proposed divergence functions.
SP:8d4e00c5a4fac78c1fa8a9161fd4e5c72f7ad508
Double Generative Adversarial Networks for Conditional Independence Testing
1 INTRODUCTION. Conditional independence (CI) is a fundamental concept in statistics and machine learning. Testing conditional independence is a key building block and plays a central role in a wide variety of statistical learning problems, for instance, causal inference (Pearl, 2009), graphical models (Koller & Friedman, 2009), and dimension reduction (Li, 2018), among others. In this article, we aim to test whether two random variables X and Y are conditionally independent given a set of confounding variables Z. That is, we test the hypotheses H0 : X ⊥ Y | Z versus H1 : X ̸⊥ Y | Z, (1) given observed data consisting of n i.i.d. copies {(Xi, Yi, Zi)}_{1≤i≤n} of (X, Y, Z). In our problem, X, Y, and Z can all be multivariate. However, the main challenge arises when the confounding set of variables Z is high-dimensional. As such, we primarily focus on the scenario with univariate X and Y and multivariate Z; meanwhile, our proposed method can be extended to the multivariate X and Y scenario as well. Another challenge is the limited sample size compared to the dimensionality of Z. As a result, many existing tests are ineffective, with either an inflated type-I error or not enough power to detect the alternatives. See Section 2 for a detailed review. We propose a double generative adversarial networks (GANs, Goodfellow et al., 2014)-based inference procedure for the CI testing problem (1). Our proposal involves two key components: a double GANs framework to learn two generators that approximate the conditional distribution of X given Z and of Y given Z, and a maximum of generalized covariance measures over multiple combinations of transformation functions of X and Y. We first establish that our test statistic is doubly robust, which offers additional protection against potential misspecification of the conditional distributions (see Theorems 1 and 2).
Second, we show that the resulting test achieves valid control of the type-I error asymptotically and, more importantly, under conditions that are much weaker and practically more feasible (see Theorem 3). Finally, we prove that the power of our test approaches one asymptotically (see Theorem 4), and demonstrate empirically that it is more powerful than competing tests. 2 RELATED WORKS. There has been a growing literature on conditional independence testing in recent years; see (Li & Fan, 2019) for a review. Broadly speaking, existing testing methods can be cast into four main categories: metric-based tests, e.g., (Su & White, 2007; 2014; Wang et al., 2015); conditional randomization-based tests (Candes et al., 2018; Bellot & van der Schaar, 2019); kernel-based tests (Fukumizu et al., 2008; Zhang et al., 2011); and regression-based tests (Hoyer et al., 2009; Zhang et al., 2018; Shah & Peters, 2018). There are other types of tests, e.g., Bergsma (2004); Doran et al. (2014); Sen et al. (2017; 2018); Berrett et al. (2019), to mention a few. The metric-based tests typically employ kernel smoothers to estimate the conditional characteristic function or the distribution function of Y given X and Z. Kernel smoothers, however, are known to suffer from the curse of dimensionality, and as such these tests are not suitable when the dimension of Z is high. The conditional randomization-based tests require knowledge of the conditional distribution of X|Z (Candes et al., 2018); if it is unknown, their type-I error rates rely critically on the quality of the approximation of this conditional distribution. The kernel-based tests are built upon the notion of maximum mean discrepancy (MMD, Gretton et al., 2012) and could have inflated type-I errors. The regression-based tests have valid type-I error control, but may suffer from inadequate power.
Next, we discuss in detail the conditional randomization-based tests, in particular the work of Bellot & van der Schaar (2019), and the regression-based and MMD-based tests, since our proposal is closely related to them. 2.1 CONDITIONAL RANDOMIZATION-BASED TESTS. The family of conditional randomization-based tests is built upon the following basis. If the conditional distribution P_{X|Z} of X given Z is known, then one can independently draw X_i^{(1)} ∼ P_{X|Z=Z_i} for i = 1, ..., n, and these samples are independent of the observed samples X_i's and Y_i's. Write X = (X_1, ..., X_n)^⊤, X^{(1)} = (X_1^{(1)}, ..., X_n^{(1)})^⊤, Y = (Y_1, ..., Y_n)^⊤, and Z = (Z_1, ..., Z_n)^⊤. Here we use boldface letters to denote data matrices that consist of n samples. The joint distributions of (X, Y, Z) and (X^{(1)}, Y, Z) are the same under H0; any large difference between the two distributions can be interpreted as evidence against H0. Therefore, one can repeat the process M times, generating X_i^{(m)} ∼ P_{X|Z=Z_i} for i = 1, ..., n and m = 1, ..., M. Write X^{(m)} = (X_1^{(m)}, ..., X_n^{(m)})^⊤. Then, for any given test statistic ρ = ρ(X, Y, Z), its associated p-value is p = [1 + Σ_{m=1}^{M} I{ρ(X^{(m)}, Y, Z) ≥ ρ(X, Y, Z)}] / (1 + M), where I(·) is the indicator function. Since the triplets (X, Y, Z), (X^{(1)}, Y, Z), ..., (X^{(M)}, Y, Z) are exchangeable under H0, the p-value is valid, and it satisfies Pr(p ≤ α | H0) ≤ α + o(1) for any 0 < α < 1. In practice, however, P_{X|Z} is rarely known, and Bellot & van der Schaar (2019) proposed to approximate it using GANs. Specifically, they learned a generator G_X(·, ·) from the observed data, then took Z_i and a noise variable v_{i,X}^{(m)} as input to obtain a sample X̃_i^{(m)}, minimizing the divergence between the distributions of (X_i, Z_i) and (X̃_i^{(m)}, Z_i).
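When P_{X|Z} is known, the exchangeability-based p-value described above can be computed directly. A minimal sketch, under the illustrative assumptions that P_{X|Z} is Gaussian with known coefficients and that ρ is the absolute sample correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 200, 200
Z = rng.normal(size=(n, 3))
beta = np.array([1.0, -1.0, 0.5])
X = Z @ beta + rng.normal(size=n)        # P_{X|Z} = N(Z beta, 1), known here
Y = Z @ beta + rng.normal(size=n)        # Y depends on Z only, so H0 holds

def stat(x, y):
    # illustrative test statistic rho: absolute sample correlation of X and Y
    return abs(np.corrcoef(x, y)[0, 1])

rho_obs = stat(X, Y)
exceed = 0
for _ in range(M):
    X_m = Z @ beta + rng.normal(size=n)  # draw X_i^{(m)} ~ P_{X|Z=Z_i}
    exceed += stat(X_m, Y) >= rho_obs
p_value = (1 + exceed) / (1 + M)         # valid under H0 by exchangeability
```

Under H0 the observed statistic is exchangeable with the resampled ones, so the p-value is (super-)uniform.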
The p-value is then computed with X^{(m)} replaced by X̃^{(m)} = (X̃_1^{(m)}, ..., X̃_n^{(m)})^⊤. They called this test GCIT, short for generative conditional independence test. By Theorem 1 of Bellot & van der Schaar (2019), the excess type-I error of this test is upper bounded by Pr(p ≤ α | H0) − α ≤ E d_TV(P̃_{X|Z}, P_{X|Z}) = E sup_A |Pr(X ∈ A | Z) − Pr(X̃^{(m)} ∈ A | Z)| ≡ D, (2) where d_TV is the total variation norm between two probability distributions, the supremum is taken over all measurable sets, and the expectations in (2) are taken with respect to Z. By definition, the quantity D on the right-hand side of (2) measures the quality of the conditional distribution approximation. Bellot & van der Schaar (2019) argued that this error term is negligible thanks to the capacity of deep neural nets for estimating conditional distributions. To the contrary, we find this approximation error is usually not negligible; consequently, it may inflate the type-I error and invalidate the test. We consider a simple example to elaborate. Example 1. Suppose X is one-dimensional and follows a simple linear regression model X = Z^⊤β_0 + ε, where the error ε is independent of Z and ε ∼ N(0, σ_0²) for some σ_0² > 0. Suppose we know a priori that the linear regression model holds. We thus estimate β_0 by ordinary least squares, and denote the resulting estimator by β̂. For simplicity, suppose σ_0² is known too. For this simple example, we have the following result regarding the approximation error term D. Proposition 1. Suppose the linear regression model holds and the derived distribution P̃_{X|Z} is N(Zβ̂, σ_0² I_n), where I_n is the n × n identity matrix. Then D is not o(1). To facilitate understanding of the convergence behavior of D, we sketch a few lines of the proof of Proposition 1; a detailed proof is given in Appendix F.1.
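Proposition 1 can be checked numerically: for two equal-variance Gaussians, d_TV(N(μ1, σ²), N(μ2, σ²)) = 2Φ(|μ1 − μ2| / (2σ)) − 1 in closed form, so we can compute the √n-scaled root-mean-square TV distance between the OLS-fitted conditional and the truth and watch it stay bounded away from zero. A sketch under the illustrative assumptions Z ∼ N(0, I) and in-sample evaluation:

```python
import numpy as np
from math import erf, sqrt

def tv_gauss(mu1, mu2, sigma):
    # d_TV(N(mu1, s^2), N(mu2, s^2)) = 2*Phi(|mu1 - mu2| / (2s)) - 1
    z = abs(mu1 - mu2) / (2.0 * sigma)
    return erf(z / sqrt(2.0))  # 2*Phi(z) - 1 = erf(z / sqrt(2))

rng = np.random.default_rng(0)
d, sigma0 = 5, 1.0
beta0 = rng.normal(size=d)

def scaled_tv_error(n):
    Z = rng.normal(size=(n, d))
    X = Z @ beta0 + sigma0 * rng.normal(size=n)
    beta_hat = np.linalg.lstsq(Z, X, rcond=None)[0]   # OLS fit of beta0
    # sqrt(n) * root-mean-square TV between N(z'b_hat, s^2) and N(z'b0, s^2)
    tv = np.array([tv_gauss(z @ beta_hat, z @ beta0, sigma0) for z in Z])
    return sqrt(n) * sqrt(np.mean(tv ** 2))

# Proposition 1: this scaled error does not vanish as n grows.
errors = {n: scaled_tv_error(n) for n in (200, 2000)}
```

Because β̂ − β_0 shrinks only at the n^{-1/2} rate, the TV distance per point is of order n^{-1/2}, and the √n-scaled error stays roughly constant across n.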
Let P̃_{X|Z=Z_i} denote the conditional distribution of X̃_i^{(m)} given Z_i, which is N(Z_i^⊤β̂, σ_0²) in this example. If D = o(1), then D̃ ≡ n^{1/2} √( E d_TV²(P̃_{X|Z=Z_i}, P_{X|Z=Z_i}) ) = o(1). (3) In other words, the validity of GCIT requires the root-mean-squared total variation distance in (3) to converge at a faster rate than n^{−1/2}. However, this rate cannot be achieved in general: in our simple Example 1, we have D̃ ≥ c for some universal constant c > 0, and consequently D in (2) is not o(1). Proposition 1 shows that, even if we know a priori that the linear model holds, D does not decay to zero as n grows to infinity. In practice we do not have such prior model information, so it would be even more difficult to estimate the conditional distribution P_{X|Z}. Therefore, using GANs to approximate P_{X|Z} guarantees neither a negligible approximation error nor the validity of the test. 2.2 REGRESSION-BASED TESTS. The family of regression-based tests is built upon a key quantity, the generalized covariance measure, GCM(X, Y) = (1/n) Σ_{i=1}^{n} {X_i − Ê(X_i|Z_i)}{Y_i − Ê(Y_i|Z_i)}, where Ê(X|Z) and Ê(Y|Z) are predictions of the conditional means E(X|Z) and E(Y|Z), respectively, by any supervised learner. When the prediction errors of Ê(X|Z) and Ê(Y|Z) satisfy certain convergence rates, Shah & Peters (2018) proved that GCM is asymptotically normal. Under H0, the asymptotic mean of GCM is zero, and its asymptotic standard deviation can be consistently estimated by a standard error estimator, denoted by ŝ(GCM). Therefore, at level α, we reject H0 if |GCM|/ŝ(GCM) exceeds the upper α/2 quantile of a standard normal distribution. Such a test is valid; however, it may not have sufficient power to detect H1. This is because the asymptotic mean of GCM equals GCM*(X, Y) = E[{X − E(X|Z)}{Y − E(Y|Z)}]. The regression-based tests require |GCM*| to be nonzero under H1 in order to have power.
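The GCM statistic is just the mean of the products of the two regression residuals, normalized by its standard error. A sketch using OLS as the (illustrative) supervised learner; the data-generating model is an assumption for the demo:

```python
import numpy as np

def gcm_statistic(X, Y, Z):
    """Generalized covariance measure with OLS standing in for the supervised
    learner of E(X|Z) and E(Y|Z); returns (GCM, |GCM| / s_hat(GCM))."""
    Zb = np.column_stack([np.ones(len(Z)), Z])           # add intercept
    rx = X - Zb @ np.linalg.lstsq(Zb, X, rcond=None)[0]  # X - E_hat(X|Z)
    ry = Y - Zb @ np.linalg.lstsq(Zb, Y, rcond=None)[0]  # Y - E_hat(Y|Z)
    prod = rx * ry
    gcm = prod.mean()                                    # (1/n) sum of products
    se = prod.std(ddof=1) / np.sqrt(len(prod))           # standard error of mean
    return gcm, abs(gcm) / se

rng = np.random.default_rng(2)
n = 500
Z = rng.normal(size=(n, 4))
X = Z[:, 0] + rng.normal(size=n)
Y_h0 = Z[:, 0] + rng.normal(size=n)     # H0: X and Y only share Z
Y_h1 = X + rng.normal(size=n)           # H1: Y depends on X directly
_, t_h0 = gcm_statistic(X, Y_h0, Z)
_, t_h1 = gcm_statistic(X, Y_h1, Z)
```

Under H0 the normalized statistic behaves like |N(0, 1)|, while a direct X-to-Y effect makes it large; comparing `t_h0` and `t_h1` to the 1.96 threshold gives the level-0.05 test.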
However, there is no guarantee that this requirement holds. We again consider a simple example to elaborate. Example 2. Suppose X*, Y, and Z are independent random variables, X* has mean zero, and X = X* g(Y) for some function g. For this example, we have E(X|Z) = E(X), since both X* and Y are independent of Z, and so is X. Moreover, E(X) = E(X*) E{g(Y)} = 0, since X* is independent of Y and E(X*) = 0. As such, GCM*(X, Y) = E[{X − E(X)}{Y − E(Y|Z)}] = 0 for any function g. On the other hand, X and Y are conditionally dependent given Z as long as g is not a constant function. Therefore, for this example, the regression-based tests fail to discriminate between H0 and H1.
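Example 2 is easy to simulate: with g(y) = y (an illustrative non-constant choice), the empirical GCM* is near zero even though X's spread clearly depends on Y, i.e., the two are (conditionally) dependent.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
Z = rng.normal(size=n)
Y = rng.normal(size=n)
X_star = rng.normal(size=n)   # mean zero, independent of Y and Z
X = X_star * Y                # X = X* g(Y) with g(y) = y, non-constant

# Here E(X|Z) = 0 and E(Y|Z) = 0, so GCM* reduces to E[X * Y] = 0.
gcm_star_hat = np.mean(X * Y)

# Yet X and Y are dependent (also given Z): the spread of X grows with |Y|.
spread_hi = X[np.abs(Y) > 1].std()
spread_lo = X[np.abs(Y) < 1].std()
```

The product moment vanishes by symmetry, so a regression-based test sees nothing, while the conditional variance of X given Y is |Y|², a strong dependence.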
The paper proposes a novel simulation-based testing procedure for conditional independence X ⊥ Y | Z. The procedure incorporates GAN techniques, which are especially useful for dealing with high-dimensional data. It first learns generative adversarial networks that can simulate the conditional distributions of X|Z and Y|Z, and then checks a particular kernel-based independence criterion presented in Eq. (4). Instead of a kernel-based method that takes the supremum over a class of RKHS functions, the proposed procedure searches for the maximum "discrepancy" over a class of neural network functions by simulation. Empirical results show well-controlled type-I error and better test power compared to existing methods, and a cancer data application is discussed.
SP:0b9ca8ea62df97ffbdae1028e156966f82edd366
This paper considers the problem of conditional independence testing, especially when the variables are high-dimensional. The authors propose a double-GAN-based algorithm: two GANs are designed to learn the conditional probability distributions P_{X|Z} and P_{Y|Z}, and are then used to generate samples to compute the test statistic. It is proved that the error of the test statistic is O_p(n^{-2k} log n) when the total variation error of the GANs is O(n^{-k}); so to ensure that the test statistic converges, only an o(log^{-1/2} n) rate is required for the total variation error of the GANs.
SP:0b9ca8ea62df97ffbdae1028e156966f82edd366
Intrinsic-Extrinsic Convolution and Pooling for Learning on 3D Protein Structures
1 INTRODUCTION Proteins perform specific biological functions essential for all living organisms and hence play a key role when investigating the most fundamental questions in the life sciences. These biomolecules are composed of one or several chains of amino acids, which fold into specific conformations to enable various biological functionalities. Proteins can be defined using a multi-level structure: the primary structure is given by the sequence of amino acids that are connected through covalent bonds and form the protein backbone. Hydrogen bonds between distant amino acids in the chain form the secondary structure, which defines substructures such as α-helices and β-sheets. The tertiary structure results from protein folding and expresses the 3D spatial arrangement of the secondary structures. Lastly, the quaternary structure is given by the interaction of multiple amino acid chains. Considering only a subset of these levels can lead to misinterpretations due to ambiguities. As shown by Alexander et al. (2009), proteins with almost identical primary structure, i.e., differing in only a few amino acids, can fold into entirely different conformations. Conversely, proteins from SH3 and OB folds have similar tertiary structures, but their primary and secondary structures differ significantly (Agrawal & Kishan, 2001) (Fig. 1). To avoid misinterpretations arising from these observations, capturing the invariances with respect to primary, secondary, and tertiary structures is of key importance when studying proteins and their functions. Previously, the SOTA was dominated by methods based on hand-crafted features, usually extracted from multi-sequence alignment tools (Altschul et al., 1990) or annotated databases (El-Gebali et al., 2019). In recent years, these have been outperformed by protein learning algorithms in different protein modeling tasks such as protein fold classification (Hou et al.
, 2018; Rao et al., 2019; Bepler & Berger, 2019; Alley et al., 2019; Min et al., 2020) or protein function prediction (Strodthoff et al., 2020; Gligorijevic et al., 2019; Kulmanov et al., 2017; Kulmanov & Hoehndorf, 2019; Amidi et al., 2017). This can be attributed to the ability of machine learning algorithms to learn meaningful representations of proteins directly from the raw data. However, most of these techniques only consider a subset of the relevant structural levels of proteins and thus can only create a representation from partial information. For instance, due to the high amount of available protein sequence data, most techniques solely rely on protein sequence data as input and apply learning algorithms from the field of natural language processing (Rao et al., 2019; Alley et al., 2019; Min et al., 2020; Strodthoff et al., 2020), 1D convolutional neural networks (Kulmanov et al., 2017; Kulmanov & Hoehndorf, 2019), or use structural information during training (Bepler & Berger, 2019). Other methods have solely used 3D atomic coordinates as input and applied 3D convolutional neural networks (3DCNN) (Amidi et al., 2017; Derevyanko et al., 2018) or graph convolutional neural networks (GCNN) (Kipf & Welling, 2017). While a few attempts have been made to consider more than one structural level of proteins in the network architecture (Gligorijevic et al., 2019), none of these hybrid methods incorporate all structural levels of proteins simultaneously. In contrast, a common approach is to process one structural level with the network architecture and the others indirectly as input features (Baldassarre et al. (2020) or Hou et al. (2018)). In this paper, we introduce a novel end-to-end protein learning algorithm that is able to explicitly incorporate the multi-level structure of proteins and capture the resulting different invariances.
We show how a multi-graph data structure can represent the primary and secondary structures effectively by considering covalent and hydrogen bonds, while the tertiary structure can be represented by the spatial 3D coordinates of the atoms (Sec. 3). By borrowing terminology from the differential geometry of surfaces, we define a new convolution operator that uses both intrinsic (primary and secondary structures) and extrinsic (tertiary and quaternary structures) distances (Sec. 4). Moreover, since protein sizes range from less than one hundred to tens of thousands of amino acids (Brocchieri & Karlin, 2005), we propose protein-specific pooling operations that allow hierarchical grouping over this wide range of sizes, enabling the detection of features at different scales (Sec. 5). Lastly, we demonstrate that by considering all mentioned protein structure levels, we can significantly outperform recent SOTA methods on protein tasks such as protein fold and enzyme classification. Code and data for our approach are available at https://github.com/phermosilla/IEConv_proteins. 2 RELATED WORK. Early works on learning protein representations (Asgari & Mofrad, 2015; Yang et al., 2018) used word embedding algorithms (Mikolov et al., 2013), as employed in Natural Language Processing (NLP). Other approaches have used 1D convolutional neural networks (CNN) to learn protein representations directly from an amino acid sequence, for tasks such as protein function prediction (Kulmanov et al., 2017; Kulmanov & Hoehndorf, 2019), protein-compound interaction (Tsubaki et al., 2018), or protein fold classification (Hou et al., 2018). Recently, researchers have applied complex NLP models trained without supervision on millions of unlabeled protein sequences and fine-tuned them for different downstream tasks (Rao et al., 2019; Alley et al., 2019; Min et al., 2020; Strodthoff et al., 2020; Bepler & Berger, 2019).
While representing proteins as amino acid sequences during learning is helpful when only sequence data is available, it does not leverage the full potential of spatial protein representations, which become more and more available with modern imaging and reconstruction techniques. To learn beyond sequences, approaches have been developed that consider the 3D structure of proteins. A range of methods has sampled protein structures onto regular volumetric 3D representations and assessed the quality of the structure (Derevyanko et al., 2018), classified proteins into enzyme classes (Amidi et al., 2017), predicted the protein-ligand binding affinity (Ragoza et al., 2017) and the binding site (Jiménez et al., 2017), as well as the contact region between two proteins (Townshend et al., 2019). While this is attractive, as 3D grids allow for unleashing the benefits of all approaches developed for 2D images, such as pooling and multi-resolution techniques, grids unfortunately do not scale well to fine structures or many atoms, and, even more importantly, they do not consider the primary and secondary structure of proteins. Another approach that makes use of a protein's 3D structure is representing proteins as graphs and applying GCNNs (Kipf & Welling, 2017; Hamilton et al., 2017). Works based on this technique represent each amino acid as a node in the graph, while edges between nodes are created if they are within a certain Euclidean distance. This approach has been successfully applied to different problems. Classification of protein graphs into enzymes, for example, has become part of the standard data sets used to compare GCNN architectures (Gao & Ji, 2019; Ying et al., 2018). Moreover, other works with similar architectures have predicted protein interfaces (Fout et al., 2017) or protein structure quality (Baldassarre et al., 2020). However, GCNN approaches suffer from over-smoothing, i.e.
, indistinguishable node representations after stacking several layers, which limits the maximum usable depth of such architectures (Cai & Wang, 2020). It is also worth noting that some of the aforementioned GCNN works have considered different levels of protein structure indirectly, by providing the secondary structure type or the distance along the sequence as initial node or edge features. However, these are not part of the network architecture and can be washed out due to the over-smoothing problem of GCNNs. On the other hand, a recent protein function prediction method proposed by Gligorijevic et al. (2019) uses Long Short-Term Memory (LSTM) cells to encode the primary structure and then applies GCNNs to capture the tertiary structure. Also, the recent work of Ingraham et al. (2019) proposes an amino acid encoder that can capture primary and tertiary structures in the context of protein generative models. Unfortunately, none of these previous methods can incorporate all structural protein levels within the network architecture. 3 MULTI-GRAPH PROTEIN REPRESENTATION. To simultaneously take into account the primary, secondary, and tertiary protein structure during learning, we propose to represent proteins as a multi-graph G = (N, F, A, B). In this graph, atoms are represented as nodes associated with their 3D coordinates, N ∈ R^{n×3}, and associated features, F ∈ R^{n×t}, with n being the number of atoms and t the number of features. Moreover, A ∈ R^{n×n} and B ∈ R^{n×n} are two different adjacency matrices representing the connectivity of the graph. Elements of matrix A are defined as A_{ij} = 1 if there is a covalent bond between atom i and atom j, and A_{ij} = 0 otherwise. Similarly, the elements of matrix B are defined as B_{ij} = 1 if there is a covalent or hydrogen bond between atom i and atom j, and B_{ij} = 0 otherwise.
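As a concrete illustration of this multi-graph, the two adjacency matrices can be built from lists of bonds as in the following minimal sketch (not the authors' code; the function name and bond-list inputs are assumptions made for illustration):

```python
import numpy as np

def build_adjacency(n_atoms, covalent_bonds, hydrogen_bonds):
    """Build the two adjacency matrices of the protein multi-graph.

    covalent_bonds / hydrogen_bonds are lists of (i, j) atom-index pairs.
    A encodes covalent bonds only; B encodes covalent plus hydrogen bonds.
    """
    A = np.zeros((n_atoms, n_atoms), dtype=np.int8)
    B = np.zeros((n_atoms, n_atoms), dtype=np.int8)
    for i, j in covalent_bonds:
        A[i, j] = A[j, i] = 1   # covalent bonds appear in both matrices
        B[i, j] = B[j, i] = 1
    for i, j in hydrogen_bonds:
        B[i, j] = B[j, i] = 1   # hydrogen bonds appear only in B
    return A, B

# Toy example: 4 atoms covalently bonded in a chain 0-1-2-3,
# plus one hydrogen bond between atoms 0 and 3.
A, B = build_adjacency(4, covalent_bonds=[(0, 1), (1, 2), (2, 3)],
                       hydrogen_bonds=[(0, 3)])
```

Note that B is, by construction, a superset of A, which is what makes the second intrinsic distance below at most as large as the first.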
3.1 INTRINSIC-EXTRINSIC DISTANCES. The differential geometry of surfaces (Pogorelov, 1973) defines intrinsic geometric properties as those that are invariant under isometric mappings, i.e., under deformations preserving the length of curves on a surface. Extrinsic geometric properties, on the other hand, depend on the embedding of the surface into Euclidean space. Analogously, in our protein multi-graph, we define intrinsic geometric properties as those that are invariant under deformations preserving the length of paths along the graph, i.e., deformations that preserve the connectivity of the protein. Additionally, we define extrinsic geometric properties as those that depend on the embedding of the protein into Euclidean space, i.e., on the 3D protein conformation. Using this terminology, we define three distances on our multi-graph, one extrinsic and two intrinsic (see Fig. 2). The extrinsic distance τe is defined by the protein conformation in Euclidean space; we therefore use the Euclidean distance between atoms, which enables us to capture the tertiary and quaternary structures of the protein. The intrinsic distances are inherent to the protein and independent of the actual 3D conformation. For the first intrinsic distance τi1 we use the shortest path between two atoms along the adjacency matrix A of the graph, capturing the primary structure. The second intrinsic distance τi2 is defined as the shortest path between two atoms along the adjacency matrix B, thus capturing the secondary structure.
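The three distances can be sketched as follows; this is a hypothetical helper (the paper does not specify an implementation), using breadth-first search to measure the intrinsic shortest paths in hops:

```python
import numpy as np

def graph_shortest_paths(adj):
    """All-pairs shortest-path lengths (in hops) via BFS from every node.

    Unreachable pairs keep distance infinity.
    """
    n = len(adj)
    dist = np.full((n, n), np.inf)
    for s in range(n):
        dist[s, s] = 0.0
        frontier, d = [s], 0
        while frontier:
            d += 1
            nxt = []
            for u in frontier:
                for v in np.nonzero(adj[u])[0]:
                    if dist[s, v] == np.inf:
                        dist[s, v] = d
                        nxt.append(v)
            frontier = nxt
    return dist

def multigraph_distances(coords, A, B):
    """Return (tau_e, tau_i1, tau_i2) for the protein multi-graph.

    tau_e  : Euclidean distance between atoms (tertiary/quaternary).
    tau_i1 : shortest path along covalent bonds A (primary).
    tau_i2 : shortest path along covalent + hydrogen bonds B (secondary).
    """
    diff = coords[:, None, :] - coords[None, :, :]
    tau_e = np.linalg.norm(diff, axis=-1)
    return tau_e, graph_shortest_paths(A), graph_shortest_paths(B)

# Toy example: 4 collinear atoms in a chain 0-1-2-3, plus a hydrogen
# bond between atoms 0 and 3 (e.g., closing a helix-like loop).
coords = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.], [3., 0., 0.]])
A = np.zeros((4, 4)); B = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1
    B[i, j] = B[j, i] = 1
B[0, 3] = B[3, 0] = 1   # hydrogen bond
tau_e, tau_i1, tau_i2 = multigraph_distances(coords, A, B)
```

In this toy case the hydrogen bond shortens τi2(0, 3) to one hop while τi1(0, 3) stays at three hops, which is exactly the distinction the two intrinsic distances are meant to capture.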
The authors describe a method to transform 3D protein structures for supervised machine learning. Their method introduces a convolution operation that considers both the intrinsic distances between atoms, as defined by their bond structure, and the extrinsic distances, as defined by 3D proximity. They also introduce interpretable pooling operations developed using known biology of the amino acids. Overall, the method is effective and, because it avoids unnecessary complexity, straightforward to follow. The figures greatly aid the reader.
Intrinsic-Extrinsic Convolution and Pooling for Learning on 3D Protein Structures
This paper describes a deep learning architecture for representing and performing classification on protein structures. The representation involves three different distances: the Euclidean distance, and the shortest path between two atoms, where edges either follow covalent bonds only or also include hydrogen bonds. Each atom has a vector of associated features, and convolution is accomplished by defining a kernel on all three distances and then summing the features of each neighboring atom, weighted by the kernel value. The paper also proposes three protein-specific pooling operations to cope with the large input size when representing all atoms in a protein.
Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval
1 INTRODUCTION. Open-domain question answering is a challenging task where the answer to a given question needs to be extracted from a large pool of documents. The prevailing approach (Chen et al., 2017) tackles the problem in two stages. Given a question, a retriever first produces a list of k candidate documents, and a reader then extracts the answer from this set. Until recently, retrieval models were dependent on traditional term-based information retrieval (IR) methods, which fail to capture the semantics of the question beyond lexical matching and remain a major performance bottleneck for the task. Recent work on dense retrieval methods instead uses pretrained encoders to cast the question and documents into dense representations in a vector space and relies on fast maximum inner-product search (MIPS) to complete the retrieval. These approaches (Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020) have demonstrated significant retrieval improvements over traditional IR baselines. However, such methods remain limited to simple questions, where the answer to the question is explicit in a single piece of text evidence. In contrast, complex questions typically involve aggregating information from multiple documents, requiring logical reasoning or sequential (multi-hop) processing in order to infer the answer (see Figure 1 for an example). Since the process for answering such questions might be sequential in nature, single-shot approaches to retrieval are insufficient. Instead, iterative methods are needed to recursively retrieve new information at each step, conditioned on the information already at hand. Beyond further expanding the scope of existing textual open-domain QA systems, answering more complex questions usually involves multi-hop reasoning, which poses unique challenges for existing neural-based AI systems.
∗Equal Contribution. 1 https://github.com/facebookresearch/multihop_dense_retrieval. With its practical and research value, multi-hop QA has been extensively studied recently (Talmor & Berant, 2018; Yang et al., 2018; Welbl et al., 2018) and remains an active research area in NLP (Qi et al., 2019; Nie et al., 2019; Min et al., 2019; Zhao et al., 2020; Asai et al., 2020; Perez et al., 2020). The main problem in answering multi-hop open-domain questions is that the search space grows exponentially with each retrieval hop. Most recent work tackles this issue by constructing a document graph utilizing either entity linking or the existing hyperlink structure in the underlying Wikipedia corpus (Nie et al., 2019; Asai et al., 2020). The problem then becomes finding the best path in this graph, where the search space is bounded by the number of hyperlinks in each passage. However, such methods may not generalize to new domains, where entity linking might perform poorly, or where hyperlinks might not be as abundant as in Wikipedia. Moreover, efficiency remains a challenge despite using these data-dependent pruning heuristics, with the best model (Asai et al., 2020) needing hundreds of calls to large pretrained models to produce a single answer. In contrast, we propose to apply dense retrieval to the multi-hop setting with a simple recursive framework. Our method iteratively encodes the question and previously retrieved documents as a query vector and retrieves the next relevant documents using efficient MIPS methods. With high-quality dense representations derived from strong pretrained encoders, our work first demonstrates that the sequence of documents that provides sufficient information to answer the multi-hop question can be accurately discovered from unstructured text, without the help of corpus-specific hyperlinks. When evaluated on two multi-hop benchmarks, HotpotQA (Yang et al.
, 2018) and a multi-evidence subset of FEVER (Thorne et al., 2018), our approach improves greatly over the traditional linking-based retrieval methods. More importantly, the better retrieval results also lead to state-of-the-art downstream results on both datasets. On HotpotQA, we demonstrate a vastly improved efficiency-accuracy trade-off achieved by our system: by limiting the amount of retrieved context fed into downstream models, our system can match the best published result while being 10x faster. 2 METHOD. 2.1 PROBLEM DEFINITION. The retrieval task considered in this work can be described as follows (see also Figure 1). Given a multi-hop question q and a large text corpus C, the retrieval module needs to retrieve a sequence of passages P_seq = {p_1, p_2, ..., p_n} that provide sufficient information for answering q. Practically, the retriever returns the k best-scoring sequence candidates, {P_seq^1, P_seq^2, ..., P_seq^k} (k ≪ |C|), with the hope that at least one of them has the desired qualities. k should be small enough for downstream modules to process in a reasonable time while maintaining adequate recall. In general, retrieval also needs to be efficient enough to handle real-world corpora containing millions of documents. 2.2 MULTI-HOP DENSE RETRIEVAL. Model. Based on the sequential nature of the multi-hop retrieval problem, our system solves it in an iterative fashion. We model the probability of selecting a certain passage sequence as follows: P(P_seq | q) = ∏_{t=1}^{n} P(p_t | q, p_1, ..., p_{t-1}), where for t = 1 we only condition on the original question for retrieval. At each retrieval step, we construct a new query representation based on previous results, and the retrieval is implemented as maximum inner-product search over the dense representations of the whole corpus: P(p_t | q, p_1, ..., p_{t-1}) = exp(⟨p_t, q_t⟩) / Σ_{p∈C} exp(⟨p, q_t⟩), where q_t = g(q, p_1, ..., p_{t-1}) and p_t = h(p_t).
Here ⟨·, ·⟩ is the inner product between the query and passage vectors, and h(·) and g(·) are the passage and query encoders that produce the dense representations. In order to reformulate the query representation to account for previous retrieval results at time step t, we simply concatenate the question and the retrieved passages as the input to g(·). Note that our formulation of each retrieval step is similar to existing single-hop dense retrieval methods (Lee et al., 2019; Guu et al., 2020; Karpukhin et al., 2020), except that we add the query reformulation process conditioned on previous retrieval results. Additionally, instead of using a bi-encoder architecture with separately parameterized encoders for queries and passages, we use a shared RoBERTa-base (Liu et al., 2019) encoder for both h(·) and g(·). In §3.1.3, we show this simple modification yields considerable improvements. Specifically, we apply layer normalization over the start token's representation from RoBERTa to get the final dense query/passage vectors. Training and Inference. The retriever model is trained as in Karpukhin et al. (2020), where each input query (which at each step consists of a question and previously retrieved passages) is paired with a positive passage and m negative passages to approximate the softmax over all passages. The positive passage is the gold annotated evidence at step t. Negative passages are a combination of passages in the current batch which correspond to other questions (in-batch negatives), and hard negatives, which are false adversarial passages. In our experiments, we obtain hard negatives from TF-IDF-retrieved passages and their linked pages in Wikipedia. We note that using hyperlinked pages as additional negatives is neither necessary nor critical for our approach; in fact, we observe only a very small degradation in performance if we remove them from training (§3.1.3).
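The training objective just described, a softmax over the gold passage, the other in-batch positives, and extra negatives such as the hard negatives, could be written as in the following simplified numpy sketch (not the authors' implementation; all names and shapes are assumptions):

```python
import numpy as np

def retrieval_loss(q_vecs, pos_vecs, neg_vecs):
    """Softmax retrieval loss with in-batch and extra negatives.

    q_vecs, pos_vecs : (batch, d) query and gold-passage vectors, row-aligned,
                       so the positive for query i is pos_vecs[i]; all other
                       rows act as in-batch negatives for query i.
    neg_vecs         : (m, d) extra negative vectors (e.g. hard negatives).
    """
    cands = np.concatenate([pos_vecs, neg_vecs], axis=0)   # (batch + m, d)
    logits = q_vecs @ cands.T                              # inner products
    logits = logits - logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(q_vecs))
    return float(-log_probs[idx, idx].mean())              # NLL of gold passages
```

When each query's vector already points at its own positive (well-separated representations), the loss approaches zero; when all candidates score equally, it equals log of the candidate count, the uniform-distribution baseline.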
In addition to in-batch negatives, we use a memory bank (M) mechanism (Wu et al., 2018) to further increase the number of negative examples for each question. The memory bank stores a large number of dense passage vectors. As we block gradient back-propagation through the memory bank, its size (|M| ≫ batch size) is less restricted by the GPU memory size. Specifically, after training to convergence with the shared encoder, we freeze a copy of the encoder as the new passage encoder and collect a bank of passage representations across multiple batches to serve as the set of negative passages. This simple extension results in further improvement in retrieval (§3.1.3). For inference, we first encode the whole corpus into an index of passage vectors. Given a question, we use beam search to obtain the top-k passage sequence candidates, where the candidates at each step of the beam search are generated by MIPS using the query encoder at step t, and the beams are scored by the sum of inner products, as suggested by the probabilistic formulation discussed above. Such inference relies only on the dense passage index and the query representations, and does not need explicit graph construction using hyperlinks or entity linking. The top-k sequences are then fed into task-specific downstream modules to produce the desired outputs. 3 EXPERIMENTS. Datasets. Our experiments focus on two datasets: HotpotQA and Multi-evidence FEVER. HotpotQA (Yang et al., 2018) includes 113k multi-hop questions. Unlike other multi-hop QA datasets (Zhang et al., 2018; Talmor & Berant, 2018; Welbl et al., 2018), where the information sources of the answers are knowledge bases, HotpotQA uses documents in Wikipedia. Thus, its questions are not restricted by a fixed KB schema and can cover more diverse topics. Each question in HotpotQA is also provided with ground-truth support passages, which enables us to evaluate the intermediate retrieval performance.
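A beam-size-1 (greedy) toy version of this inference loop might look like the following; the bag-of-words encoder and all names here are illustrative stand-ins for the shared RoBERTa encoder, not the released code:

```python
import numpy as np

def multihop_retrieve(question, passages, encode, n_hops=2):
    """Greedy (beam size 1) sketch of iterative dense retrieval.

    `encode` stands in for the shared encoder used as both g(.) and h(.);
    each hop runs an exact maximum inner-product search over the corpus and
    reformulates the query by concatenating the retrieved passage.
    """
    index = np.stack([encode(p) for p in passages])   # dense passage index
    query, picked = question, []
    for _ in range(n_hops):
        scores = index @ encode(query)                # <p, q_t> for every p
        scores[picked] = -np.inf                      # no repeats in one chain
        best = int(np.argmax(scores))
        picked.append(best)
        query = query + " " + passages[best]          # q_{t+1} = g(q, p_1..p_t)
    return [passages[i] for i in picked]

# Toy bag-of-words encoder over a tiny vocabulary (illustrative only).
VOCAB = ["alpha", "beta", "gamma", "delta"]
def bow(text):
    return np.array([float(text.split().count(w)) for w in VOCAB])

corpus = ["alpha beta", "beta gamma", "delta delta"]
chain = multihop_retrieve("alpha", corpus, bow)
```

In this toy run, the first hop matches the question term, and the reformulated query (question plus retrieved passage) pulls in a second passage that shares no terms with the original question, which is the core behavior the method relies on.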
Multi-evidence FEVER includes 20k claims from the FEVER (Thorne et al., 2018) fact verification dataset, where the claims can only be verified using multiple documents. We use this dataset to validate the general applicability of our method. Implementation Details. All experiments are conducted on a machine with 8 32GB V100 GPUs. Our code is based on Huggingface Transformers (Wolf et al., 2019). Our best retrieval results are predicted using the exact inner-product search index (IndexFlatIP) in FAISS (Johnson et al., 2017). Both datasets assume 2 hops, so we fix n = 2 for all experiments. Since HotpotQA does not provide the order of the passage sequences, as a heuristic we consider the passage that includes the answer span to be the final passage. In §3.1.3, we show that the order of the passages is important for effective retriever training. The hyperparameters can be found in Appendix B.1. 3.1 EXPERIMENTS: RETRIEVAL. We evaluate our multi-hop dense retriever (MDR) in two different use cases, direct and reranking, where the former outputs the top-k results directly using the retriever scores and the latter applies a task-specific reranking model to the initial results from MDR. 3.1.1 DIRECT. We first compare MDR with several efficient retrieval methods that can directly find the top-k passage sequences from a large corpus, including TF-IDF, TF-IDF + Linked, DrKIT, and Entity Linking. TF-IDF is the standard term-matching baseline, while TF-IDF + Linked is a straightforward extension that also extracts the hyperlinked passages from the TF-IDF passages and then reranks both TF-IDF and hyperlinked passages with BM25 scores. DrKIT (Dhingra et al., 2020) is a recently proposed dense retrieval approach, which builds an entity-level (mentions of entities) dense index for retrieval.
It relies on hyperlinks to extract entity mentions and prunes the search space with a binary mask that restricts the next hop to hyperlinked entities. On FEVER, we additionally consider an entity linking baseline (Hanselowski et al., 2018) that is commonly used in existing fact verification pipelines. This baseline first uses a constituency parser to extract potential entity mentions in the fact claim and then uses the MediaWiki API to search for documents with titles that match the mentions. Table 1 shows the performance of different retrieval methods. On HotpotQA the metric is recall at the top k paragraphs, while on FEVER the metrics are precision, recall, and F1, in order to be consistent with previous results. On both datasets, MDR substantially outperforms all baselines.
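As a small illustration, top-k recall in the sense suggested here (all gold support passages appearing in the top k retrieved results) can be computed as follows; this reading of the metric is an assumption based on the text, not the paper's evaluation script:

```python
def recall_at_k(ranked_ids, gold_ids, k):
    """Fraction of questions whose gold support passages all appear
    in the top-k retrieved results.

    ranked_ids : per-question lists of retrieved passage ids, best first.
    gold_ids   : per-question lists of gold support passage ids.
    """
    hits = sum(set(gold).issubset(ranked[:k])
               for ranked, gold in zip(ranked_ids, gold_ids))
    return hits / len(ranked_ids)
```

For example, with two questions, gold passages {3, 7} and {9}, and rankings [7, 3, 9] and [4, 5, 6], recall at k = 2 is 0.5: the first question's gold pair is fully covered, the second's is not.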
This paper proposes multi-hop dense retrieval for open-domain multi-hop question answering. It extends previous dense passage retrieval into the corresponding multi-hop version by using retrieved passages to latently reformulate the query representation after each retrieval pass. In the end, it significantly improves performance on the HotpotQA and multi-evidence FEVER datasets. The analyses are very comprehensive and extensive from almost every relevant perspective.
SP:facb7e43da318900edf3d247467a45c3d3ae7d42
Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval
1 INTRODUCTION . Open domain question answering is a challenging task where the answer to a given question needs to be extracted from a large pool of documents . The prevailing approach ( Chen et al. , 2017 ) tackles the problem in two stages . Given a question , a retriever first produces a list of k candidate documents , and a reader then extracts the answer from this set . Until recently , retrieval models were dependent on traditional term-based information retrieval ( IR ) methods , which fail to capture the semantics of the question beyond lexical matching and remain a major performance bottleneck for the task . Recent work on dense retrieval methods instead uses pretrained encoders to cast the question and documents into dense representations in a vector space and relies on fast maximum inner-product search ( MIPS ) to complete the retrieval . These approaches ( Lee et al. , 2019 ; Guu et al. , 2020 ; Karpukhin et al. , 2020 ) have demonstrated significant retrieval improvements over traditional IR baselines . However , such methods remain limited to simple questions , where the answer to the question is explicit in a single piece of text evidence . In contrast , complex questions typically involve aggregating information from multiple documents , requiring logical reasoning or sequential ( multi-hop ) processing in order to infer the answer ( see Figure 1 for an example ) . Since the process for answering such questions might be sequential in nature , single-shot approaches to retrieval are insufficient . Instead , iterative methods are needed to recursively retrieve new information at each step , conditioned on the information already at hand . Beyond further expanding the scope of existing textual open-domain QA systems , answering more complex questions usually involves multi-hop reasoning , which poses unique challenges for existing neural-based AI systems .
∗Equal Contribution . 1 https://github.com/facebookresearch/multihop_dense_retrieval . With its practical and research value , multi-hop QA has been extensively studied recently ( Talmor & Berant , 2018 ; Yang et al. , 2018 ; Welbl et al. , 2018 ) and remains an active research area in NLP ( Qi et al. , 2019 ; Nie et al. , 2019 ; Min et al. , 2019 ; Zhao et al. , 2020 ; Asai et al. , 2020 ; Perez et al. , 2020 ) . The main problem in answering multi-hop open-domain questions is that the search space grows exponentially with each retrieval hop . Most recent work tackles this issue by constructing a document graph utilizing either entity linking or the existing hyperlink structure in the underlying Wikipedia corpus ( Nie et al. , 2019 ; Asai et al. , 2020 ) . The problem then becomes finding the best path in this graph , where the search space is bounded by the number of hyperlinks in each passage . However , such methods may not generalize to new domains , where entity linking might perform poorly , or where hyperlinks might not be as abundant as in Wikipedia . Moreover , efficiency remains a challenge despite using these data-dependent pruning heuristics , with the best model ( Asai et al. , 2020 ) needing hundreds of calls to large pretrained models to produce a single answer . In contrast , we propose to apply dense retrieval to the multi-hop setting with a simple recursive framework . Our method iteratively encodes the question and previously retrieved documents as a query vector and retrieves the next relevant documents using efficient MIPS methods . With high-quality dense representations derived from strong pretrained encoders , our work is the first to demonstrate that the sequence of documents that provides sufficient information to answer the multi-hop question can be accurately discovered from unstructured text , without the help of corpus-specific hyperlinks . When evaluated on two multi-hop benchmarks , HotpotQA ( Yang et al.
, 2018 ) and a multi-evidence subset of FEVER ( Thorne et al. , 2018 ) , our approach improves greatly over the traditional linking-based retrieval methods . More importantly , the better retrieval results also lead to state-of-the-art downstream results on both datasets . On HotpotQA , we demonstrate a vastly improved efficiency-accuracy trade-off achieved by our system : by limiting the amount of retrieved context fed into downstream models , our system can match the best published result while being 10x faster . 2 METHOD . 2.1 PROBLEM DEFINITION . The retrieval task considered in this work can be described as follows ( see also Figure 1 ) . Given a multi-hop question q and a large text corpus C , the retrieval module needs to retrieve a sequence of passages $P_{seq} = \{ p_1 , p_2 , \dots , p_n \}$ that provides sufficient information for answering q . Practically , the retriever returns the k best-scoring sequence candidates , $\{ P_{seq}^1 , P_{seq}^2 , \dots , P_{seq}^k \}$ ( $k \ll |C|$ ) , with the hope that at least one of them has the desired qualities . k should be small enough for downstream modules to process in a reasonable time while maintaining adequate recall . In general , retrieval also needs to be efficient enough to handle real-world corpora containing millions of documents . 2.2 MULTI-HOP DENSE RETRIEVAL . Model Based on the sequential nature of the multi-hop retrieval problem , our system solves it in an iterative fashion . We model the probability of selecting a certain passage sequence as follows : $P ( P_{seq} \mid q ) = \prod_{t=1}^{n} P ( p_t \mid q , p_1 , \dots , p_{t-1} )$ , where for t = 1 , we only condition on the original question for retrieval . At each retrieval step , we construct a new query representation based on previous results , and the retrieval is implemented as maximum inner product search over the dense representations of the whole corpus : $P ( p_t \mid q , p_1 , \dots , p_{t-1} ) = \frac{ \exp ( \langle \mathbf{p}_t , \mathbf{q}_t \rangle ) }{ \sum_{p \in C} \exp ( \langle \mathbf{p} , \mathbf{q}_t \rangle ) }$ , where $\mathbf{q}_t = g ( q , p_1 , \dots , p_{t-1} )$ and $\mathbf{p}_t = h ( p_t )$ .
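The iterative formulation above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: `encode_query` and the toy passage matrix `PASSAGE_VECS` are hypothetical stand-ins for the RoBERTa encoders $g(\cdot)$, $h(\cdot)$ and the pre-computed corpus index.

```python
import numpy as np

rng = np.random.default_rng(0)
PASSAGE_VECS = rng.normal(size=(100, 16))  # stand-in for h(p) applied to corpus C

def encode_query(question_vec, retrieved_vecs):
    # Stand-in for g(q, p1, ..., p_{t-1}): the real model concatenates the
    # question text with the retrieved passages and re-encodes with RoBERTa;
    # here we simply average the vectors.
    return np.mean([question_vec, *retrieved_vecs], axis=0)

def multi_hop_retrieve(question_vec, n_hops=2):
    ids, vecs, score = [], [], 0.0
    for t in range(n_hops):
        q_t = encode_query(question_vec, vecs)
        sims = PASSAGE_VECS @ q_t        # inner products (exhaustive MIPS)
        best = int(np.argmax(sims))      # greedy top-1 at each hop
        ids.append(best)
        vecs.append(PASSAGE_VECS[best])
        score += float(sims[best])       # sequence log-score decomposes as a sum
    return ids, score

seq, score = multi_hop_retrieve(rng.normal(size=16))
```

The greedy top-1 choice per hop is a simplification; the paper's inference keeps several candidates per hop via beam search.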
Here $\langle \cdot , \cdot \rangle$ is the inner product between the query and passage vectors . $h ( \cdot )$ and $g ( \cdot )$ are the passage and query encoders that produce the dense representations . In order to reformulate the query representation to account for previous retrieval results at time step t , we simply concatenate the question and the retrieved passages as the input to $g ( \cdot )$ . Note that our formulation for each retrieval step is similar to existing single-hop dense retrieval methods ( Lee et al. , 2019 ; Guu et al. , 2020 ; Karpukhin et al. , 2020 ) except that we add the query reformulation process conditioned on previous retrieval results . Additionally , instead of using a bi-encoder architecture with separately parameterized encoders for queries and passages , we use a shared RoBERTa-base ( Liu et al. , 2019 ) encoder for both $h ( \cdot )$ and $g ( \cdot )$ . In §3.1.3 , we show this simple modification yields considerable improvements . Specifically , we apply layer normalization over the start token 's representation from RoBERTa to get the final dense query/passage vectors . Training and Inference The retriever model is trained as in Karpukhin et al . ( 2020 ) , where each input query ( which at each step consists of a question and previously retrieved passages ) is paired with a positive passage and m negative passages to approximate the softmax over all passages . The positive passage is the gold-annotated evidence at step t. Negative passages are a combination of passages in the current batch that correspond to other questions ( in-batch negatives ) and hard negatives , which are false adversarial passages . In our experiments , we obtain hard negatives from TF-IDF retrieved passages and their linked pages in Wikipedia . We note that using hyperlinked pages as additional negatives is neither necessary nor critical for our approach . In fact , we observe only a very small degradation in performance if we remove them from training ( §3.1.3 ) .
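The training objective described here, one positive passage against m negatives under a softmax over inner products, can be sketched as follows. The toy vectors are illustrative; the real model computes these scores with the shared RoBERTa encoder.

```python
import numpy as np

def retrieval_nll(q_vec, pos_vec, neg_vecs):
    """Negative log-likelihood of the positive passage against m negative
    passages, approximating the softmax over the whole corpus."""
    logits = np.vstack([pos_vec, neg_vecs]) @ q_vec  # positive sits at index 0
    logits = logits - logits.max()                   # numerical stability
    return float(-(logits[0] - np.log(np.exp(logits).sum())))

q = np.array([1.0, 0.0])                    # toy query vector
negs = np.array([[0.0, 1.0], [-1.0, 0.0]])  # in-batch / hard negatives
loss_good = retrieval_nll(q, np.array([1.0, 0.0]), negs)  # positive matches query
loss_bad = retrieval_nll(q, np.array([-1.0, 0.0]), negs)  # positive opposes query
```

As expected, the loss is lower when the positive passage vector aligns with the query vector.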
In addition to in-batch negatives , we use a memory bank ( M ) mechanism ( Wu et al. , 2018 ) to further increase the number of negative examples for each question . The memory bank stores a large number of dense passage vectors . As we block gradient back-propagation into the memory bank , its size ( $|M| \gg$ batch size ) is less restricted by the GPU memory size . Specifically , after training to convergence with the shared encoder , we freeze a copy of the encoder as the new passage encoder and collect a bank of passage representations across multiple batches to serve as the set of negative passages . This simple extension results in further improvement in retrieval ( §3.1.3 ) . For inference , we first encode the whole corpus into an index of passage vectors . Given a question , we use beam search to obtain the top-k passage sequence candidates , where the candidates at each step are generated by MIPS using the query encoder at step t , and the beams are scored by the sum of inner products , as suggested by the probabilistic formulation discussed above . Such inference relies only on the dense passage index and the query representations , and does not need explicit graph construction using hyperlinks or entity linking . The top-k sequences are then fed into task-specific downstream modules to produce the desired outputs . 3 EXPERIMENTS . Datasets Our experiments focus on two datasets : HotpotQA and Multi-evidence FEVER . HotpotQA ( Yang et al. , 2018 ) includes 113k multi-hop questions . Unlike other multi-hop QA datasets ( Zhang et al. , 2018 ; Talmor & Berant , 2018 ; Welbl et al. , 2018 ) , where the information sources of the answers are knowledge bases , HotpotQA uses documents in Wikipedia . Thus , its questions are not restricted by a fixed KB schema and can cover more diverse topics . Each question in HotpotQA is also provided with ground-truth support passages , which enables us to evaluate intermediate retrieval performance .
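The beam-search inference over passage sequences, scored by the sum of inner products, can be sketched as below. The index and `encode_query` are hypothetical stand-ins for the encoded corpus and the query encoder.

```python
import numpy as np

rng = np.random.default_rng(2)
INDEX = rng.normal(size=(50, 8))  # stand-in for the encoded passage index

def encode_query(q_vec, prev_vecs):
    # Hypothetical g(.): average question and retrieved-passage vectors.
    return np.mean([q_vec, *prev_vecs], axis=0)

def beam_search(q_vec, n_hops=2, beam=3):
    # Each candidate: (sum of inner products, passage ids, passage vectors).
    beams = [(0.0, [], [])]
    for _ in range(n_hops):
        expanded = []
        for score, ids, vecs in beams:
            sims = INDEX @ encode_query(q_vec, vecs)  # MIPS at this step
            for i in np.argsort(sims)[-beam:]:        # top-`beam` expansions
                expanded.append((score + float(sims[i]),
                                 ids + [int(i)], vecs + [INDEX[i]]))
        expanded.sort(key=lambda c: c[0], reverse=True)
        beams = expanded[:beam]                       # prune to beam width
    return [(s, ids) for s, ids, _ in beams]

top_seqs = beam_search(rng.normal(size=8))
```

Because the sequence score factorizes, the beam score is just a running sum of per-hop inner products, so no graph over passages ever needs to be materialized.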
Multi-evidence FEVER includes 20k claims from the FEVER ( Thorne et al. , 2018 ) fact verification dataset , where the claims can only be verified using multiple documents . We use this dataset to validate the general applicability of our method . Implementation Details All the experiments are conducted on a machine with eight 32GB V100 GPUs . Our code is based on Huggingface Transformers ( Wolf et al. , 2019 ) . Our best retrieval results are predicted using the exact inner product search index ( IndexFlatIP ) in FAISS ( Johnson et al. , 2017 ) . Both datasets assume 2 hops , so we fix n = 2 for all experiments . Since HotpotQA does not provide the order of the passage sequences , as a heuristic , we consider the passage that includes the answer span as the final passage . In §3.1.3 , we show that the order of the passages is important for effective retriever training . The hyperparameters can be found in Appendix B.1 . 3.1 EXPERIMENTS : RETRIEVAL . We evaluate our multi-hop dense retriever ( MDR ) in two different use cases : direct and reranking , where the former outputs the top-k results directly using the retriever scores and the latter applies a task-specific reranking model to the initial results from MDR . 3.1.1 DIRECT . We first compare MDR with several efficient retrieval methods that can directly find the top-k passage sequences from a large corpus , including TF-IDF , TF-IDF + Linked , DrKIT and Entity Linking . TF-IDF is the standard term-matching baseline , while TF-IDF + Linked is a straightforward extension that also extracts the hyperlinked passages from TF-IDF passages , and then reranks both TF-IDF and hyperlinked passages with BM25 scores . DrKIT ( Dhingra et al. , 2020 ) is a recently proposed dense retrieval approach , which builds an entity-level ( mentions of entities ) dense index for retrieval .
It relies on hyperlinks to extract entity mentions and prunes the search space with a binary mask that restricts the next hop to using hyperlinked entities . On FEVER , we additionally consider an entity linking baseline ( Hanselowski et al. , 2018 ) that is commonly used in existing fact verification pipelines . This baseline first uses a constituency parser to extract potential entity mentions in the fact claim and then uses the MediaWiki API to search documents with titles that match the mentions . Table 1 shows the performance of different retrieval methods . On HotpotQA the metric is recall at the top k paragraphs , while on FEVER the metrics are precision , recall and F1 in order to be consistent with previous results . On both datasets , MDR substantially outperforms all baselines .
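For concreteness, a minimal sketch of the paragraph-level recall metric, under one common reading of it: a question counts as a hit only when every gold passage appears among its top-k retrieved passages. The passage ids are made up for illustration.

```python
def recall_at_k(retrieved, gold, k):
    """Fraction of questions for which every gold passage appears among
    the top-k retrieved passages (one reading of paragraph recall@k)."""
    hits = sum(set(g) <= set(r[:k]) for r, g in zip(retrieved, gold))
    return hits / len(gold)

# Two toy questions: the first has both gold passages in its top 3,
# the second is missing gold passage "p5" entirely.
retrieved = [["p1", "p7", "p3"], ["p2", "p9", "p4"]]
gold = [{"p1", "p3"}, {"p2", "p5"}]
```

Here `recall_at_k(retrieved, gold, 3)` is 0.5, since only the first question's gold set is fully covered.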
This paper extends the recently proposed dense retrieval methods to open-domain multi-hop questions, so as to handle complex multi-hop queries. The overall idea is simple and direct but effective. The authors conduct extensive experiments on two multi-hop datasets, HotpotQA and multi-evidence FEVER, and the evaluation results demonstrate that the proposed model achieves impressive results on both the knowledge retrieval task and multi-hop QA.
SP:facb7e43da318900edf3d247467a45c3d3ae7d42
Post-Training Weighted Quantization of Neural Networks for Language Models
1 INTRODUCTION . Training techniques for deep neural networks ( DNNs ) have been developed in ways that incur substantial parameter redundancy in order to expedite the search for local minima ( Denil et al. , 2013 ; Jonathan Frankle , 2019 ) . As a result , various model compression techniques including parameter pruning ( Han et al. , 2015 ; He et al. , 2017 ) , quantization ( Courbariaux et al. , 2015 ; Rastegari et al. , 2016 ) , low-rank approximation ( N. Sainath et al. , 2013 ; Prabhavalkar et al. , 2016 ) , and knowledge distillation ( Hinton et al. , 2015 ; Polino et al. , 2018 ) have been proposed to lower storage requirements and improve inference performance . Several compression techniques can be combined in a synergistic way to enhance the compression ratio ( Han et al. , 2016 ; Zhu et al. , 2017 ) . In this work , we consider parameter quantization , which maintains structured model formats and provides a high compression ratio . Note that due to limited hardware resources , quantization is an essential method for any inference system . In general , quantization is classified into uniform quantization based on fixed-point parameter representations ( Jacob et al. , 2018 ; Han et al. , 2016 ) and non-uniform quantization associated with the binary codes ( Zhou et al. , 2017 ; Rastegari et al. , 2016 ) or codebooks ( Choi et al. , 2017 ; Stock et al. , 2020 ) . Most DNN quantization methods are based on the principle of minimizing the mean squared error ( MSE ) of quantized parameters ( Rastegari et al. , 2016 ; Xu et al. , 2018 ; Zhou et al. , 2017 ) . Optimizing the MSE is also an underlying principle of low-rank approximation techniques such as the singular value decomposition ( SVD ) ( Prabhavalkar et al. , 2016 ; N. Sainath et al. , 2013 ) . Note that , however , minimizing the MSE implies that each parameter is equally important ( i.e. , squared errors from parameters are accumulated without considering the importance of each weight ) .
In practice , the impact of each parameter 's quantization perturbation on the training loss can vary vastly , and such impact needs to be analyzed through a sensitivity study of each parameter with respect to the training loss . In other words , minimizing the MSE ( or the Euclidean distance between original parameters and quantized parameters ) may not correspond to minimizing the training loss after quantization . Each parameter 's robustness to quantization error can be expressed as its sensitivity . The sensitivity of the i-th parameter $w_i$ is the amount of change in the loss function when $w_i$ is perturbed . A parameter associated with high sensitivity would require a relatively smaller quantization error if quantization is performed in a group manner . Several previous works acknowledge the distinct sensitivity of each parameter to improve quantization quality . Note that because exact sensitivity estimation of each parameter with respect to the loss function is highly complicated , various heuristic techniques have been introduced . For example , Hessian-weighted k-means clustering is used for codebook-based implementations ( Choi et al. , 2017 ) , or a Taylor series expansion bounding the loss difference is used to decide the optimal number of quantization bits for each weight ( Khoram & Li , 2018 ) . The Hessian matrix can be used to assign different numbers of quantization bits to each layer ( Dong et al. , 2019 ; Shen et al. , 2019 ) . Minimizing the reconstruction error on the output activations after quantizing each layer is performed in ( Stock et al. , 2020 ) . In this paper , we propose a weighted quantization framework where quantized parameters follow the structure of the binary codes so as to achieve a high compression ratio and high computational efficiency ( Rastegari et al. , 2016 ; Jeon et al. , 2020 ) .
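The notion of sensitivity defined here, the change in loss when one parameter is perturbed, can be illustrated with a toy finite-difference check; the quadratic loss is made up purely for illustration.

```python
import numpy as np

def toy_loss(w):
    # Toy training loss with deliberately unequal curvature per parameter.
    return 3.0 * w[0] ** 2 + 0.1 * w[1] ** 2

def sensitivity(loss_fn, w, delta=1e-2):
    """|change in loss| when each parameter is perturbed by `delta`."""
    base = loss_fn(w)
    sens = np.zeros_like(w)
    for i in range(len(w)):
        w_pert = w.copy()
        w_pert[i] += delta
        sens[i] = abs(loss_fn(w_pert) - base)
    return sens

s = sensitivity(toy_loss, np.array([1.0, 1.0]))
# w[0] sits on a steep direction of the loss, so it is the more sensitive
# parameter and would deserve a smaller quantization error.
```

This brute-force probe is exactly what the heuristics cited above try to avoid at scale, since it needs one loss evaluation per parameter.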
Specifically , given that the importance of each parameter is represented as a real number between 0 and 1 , we extract an optimal quantization solution modified from the previous binary-coding-based quantization methods that employ equal parameter importance . Similar to previous attempts , we also find that calculating the accurate importance of each parameter is challenging . As a successful approximation of importance , we suggest that magnitude-based importance estimation is especially effective for post-training non-uniform quantization . 2 POST-TRAINING PARAMETER QUANTIZATION FOR LANGUAGE MODELS . The number of parameters in language models is dramatically increasing ( e.g. , GPT-3 ( Brown et al. , 2020 ) requires 175 billion parameters ) . Correspondingly , model compression for language models is becoming a mandatory process to reduce response time and inference energy . We devise a compression method considering the following :
• Recent language models are usually memory-bound because of small batch sizes and a lack of layers with high reuse ( e.g. , conv layers ) . Thus , reducing the memory footprint is critical .
• Compression algorithms should be supported by dedicated kernels , designed specifically for language models if possible .
• Compression-aware training is challenging and expensive if hyper-parameters are added to already huge language models ( hence , we choose a post-training method ) .
Fixed-point inference using uniform quantization is not desirable for language models because of noticeable accuracy degradation ( Shen et al. , 2019 ; Jeon et al. , 2020 ) , while the advantage of small computational units ( e.g. , INT8 MAC ) is insignificant for memory-bound applications . Thus , we adopt float-based parameter quantization ( i.e. , the quantized parameters remain full-precision in expectation ) , which requires a far smaller number of quantization bits than fixed-point quantization ( Xu et al. , 2018 ; Stock et al. , 2020 ) .
Recently , a kernel library , called BiQGEMM ( Jeon et al. , 2020 ) , was introduced to support binary-coding-based quantization techniques to accelerate quantized neural networks . Using lookup tables , BiQGEMM enables byte-level memory accesses and achieves an 8.3× smaller run-time memory footprint and a 3.5× speed-up on a mobile CPU for the Transformer ( Chung et al. , 2020 ) . As a result , binary-coding-based quantization has become a practical approach to quantizing language models . As such , we restrict our interests to binary-coding-based quantization techniques in this paper . Quantization-aware training is an active research area to improve model accuracy ( Courbariaux et al. , 2015 ; Lee et al. , 2018 ) . We note that in the case of language models , however , there are numerous occasions when retraining for quantization is not available . For example , quantization-aware training requires in-depth knowledge of model compression , while model designers may not have such expertise . On the other hand , the original training code or the entire training data may not be shared with model compression engineers . Also , modifying the original DNN models to be aware of quantization would increase model design efforts and training time significantly . Since language models already demand significant training time and cost , adding additional training complexity through quantization-aware training would not be a practical option . As such , post-training quantization without retraining is gaining increasing attention ( Zhao et al. , 2019 ; Nagel et al. , 2019 ) . 3 WEIGHTED QUANTIZATION BASED ON THE BINARY CODES . As discussed , we choose post-training binary-coding-based quantization as our strategy to compress language DNN models efficiently . Following the Binary-Weight-Networks ( Rastegari et al.
, 2016 ) that introduced the binary codes as a quantization format , a weight vector $\mathbf{w}$ is approximated as $\alpha \mathbf{b}$ using a scaling factor $\alpha \in \mathbb{R}$ and a binary vector $\mathbf{b} \in \{ -1 , +1 \}^{n}$ , where n is the vector size . A real-number scaling factor is shared by multiple weights such that the binary vector $\mathbf{b}$ occupies most of the weight storage requirements . Binary codes eliminate the need for dequantization during inference , leading to a reduced on-chip memory size for weights . In this section , we study general weighted quantization methodologies in which quantization follows the format of the binary codes and the quantization error accounts for sensitivity information . 3.1 GREEDY METHOD AND ALTERNATING METHOD WITHOUT IMPORTANCE CONSIDERATIONS . In general , non-uniform weight quantization methods ( in the form of the binary codes ) strive to minimize $\| \mathbf{w} - \alpha \mathbf{b} \|^2$ . In the case of 1-bit quantization , we obtain the following analytical solution : $\mathbf{b}^* = \mathrm{sign} ( \mathbf{w} ) , \quad \alpha^* = \frac{ \mathbf{w}^\top \mathbf{b}^* }{ n }$ . ( 1 ) On the other hand , in the case of multi-bit quantization , there is no analytical solution ( Rastegari et al. , 2016 ; Xu et al. , 2018 ) . As a result , various approximate methods exist for multi-bit quantization . Greedy Method As a computationally simple method , the 1-bit quantization shown in Eq . ( 1 ) can be extended to multi-bit ( q-bit ) quantization ( Guo et al. , 2017 ) . Specifically , the i-th-bit ( i > 1 ) quantization is performed by minimizing the residue of the ( i−1 ) -th-bit quantization as follows : $\min_{ \alpha_i , \mathbf{b}_i } \| \mathbf{r}_{i-1} - \alpha_i \mathbf{b}_i \|^2 , \quad \mathbf{r}_{i-1} = \mathbf{w} - \sum_{j=1}^{i-1} \alpha_j \mathbf{b}_j , \quad 1 < i \leq q$ . ( 2 ) The optimal solution of Eq . ( 2 ) is then given as $\mathbf{b}_i^* = \mathrm{sign} ( \mathbf{r}_{i-1} ) , \quad \alpha_i^* = \frac{ \mathbf{r}_{i-1}^\top \mathbf{b}_i^* }{ n }$ . ( 3 ) Alternating Method The Greedy method described above is non-iterative . In order to reduce $\| \mathbf{w} - \sum_{i=1}^{q} \alpha_i \mathbf{b}_i \|^2$ further than the Greedy method , iterative methods are necessary , and increasing the number of iterations tends to lower the quantization error .
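The Greedy method of Eqs. (1)-(3) can be sketched in a few lines of numpy; the 4-element weight vector and bit-width below are illustrative only.

```python
import numpy as np

def greedy_quantize(w, q):
    """Greedy q-bit binary-coding quantization: each new bit plane fits
    the residue left by the previous ones (Eqs. (1)-(3))."""
    r = np.asarray(w, dtype=float).copy()
    alphas, bits = [], []
    for _ in range(q):
        b = np.where(r >= 0, 1.0, -1.0)  # b* = sign(r)
        alpha = float(r @ b) / len(r)    # alpha* = r^T b* / n
        alphas.append(alpha)
        bits.append(b)
        r = r - alpha * b                # residue passed to the next bit
    return np.array(alphas), np.array(bits)

w = np.array([0.9, -1.1, 0.4, -0.2])
alphas, bits = greedy_quantize(w, q=3)
w_hat = (alphas[:, None] * bits).sum(axis=0)  # reconstructed weights
err = float(np.linalg.norm(w - w_hat))
```

Since `b = sign(r)`, each `alpha` is simply the mean absolute residue, and the reconstruction error is exactly the norm of the final residue, so it is non-increasing in q.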
Once initial α and b values are calculated by the Greedy method , one can notice that $\{ \alpha_i \}_{i=1}^{q}$ can be refined ( Guo et al. , 2017 ) as $[ \alpha_1 , \dots , \alpha_q ] = ( ( \mathbf{B}_q^\top \mathbf{B}_q )^{-1} \mathbf{B}_q^\top \mathbf{w} )^\top$ , where $\mathbf{B}_q = [ \mathbf{b}_1 , \dots , \mathbf{b}_q ] \in \{ -1 , +1 \}^{ n \times q }$ . ( 4 ) Then , $\mathbf{B}_q$ can be refined as well by binary search given the new refined $\{ \alpha_i \}_{i=1}^{q}$ . As a result , $\{ \alpha_i \}_{i=1}^{q}$ and $\mathbf{B}_q$ are refined alternately . Alternating refinements of $\{ \alpha_i \}_{i=1}^{q}$ and $\mathbf{B}_q$ are repeated until there is no noticeable improvement in quantization error . Such an iterative quantization procedure is introduced as the Alternating multi-bit method ( Xu et al. , 2018 ) . 3.2 IMPORTANCE-AWARE WEIGHTED QUANTIZATION . Let us assume that the importance of the i-th parameter is normalized and given as $m_i$ ( $0 \leq m_i \leq 1$ ) . Then , we minimize the weighted quantization loss $\sum_{i=1}^{n} m_i ( w_i - \hat{w}_i )^2$ , where $w_i$ is quantized to $\hat{w}_i = \sum_{j=1}^{q} \alpha_j ( \mathbf{b}_j )_i$ . Before studying how to estimate importance values , we are interested in finding modified versions of the Greedy method and the Alternating method when importance values are given . For 1-bit quantization , weighted quantization also has the following analytical solution : $\mathbf{b}^* = \mathrm{sign} ( \mathbf{w} ) , \quad \alpha^* = \frac{ \sum_{i=1}^{n} m_i | w_i | }{ \sum_{i=1}^{n} m_i }$ . ( 5 ) Note that if all importance values are equal ( e.g. , $m_i = 1$ for all i ) , then Eq . ( 5 ) becomes the same as Eq . ( 1 ) . Correspondingly , Eq . ( 1 ) can be regarded as a special case of Eq . ( 5 ) . Compared to the conventional Greedy method , our proposed importance-aware Greedy method modifies the $\alpha_i$ calculation as $\alpha_i^* = \sum_{k=1}^{n} m_k | ( \mathbf{r}_{i-1} )_k | / \sum_{k=1}^{n} m_k$ . For the importance-aware Alternating method , we first conduct the importance-aware Greedy method . Then , Eq . ( 4 ) is transformed to employ importance . Let us define an n-by-n diagonal matrix $\mathbf{M} = \mathrm{diag} ( m_1 , \dots , m_n )$ , where each diagonal element is an importance value $m_i$ . By solving a weighted linear least-squares problem , the α values are refined as $[ \alpha_1 , \dots , \alpha_q ] = ( ( \mathbf{B}_q^\top \mathbf{M} \mathbf{B}_q )^{-1} \mathbf{B}_q^\top \mathbf{M} \mathbf{w} )^\top$ , where $\mathbf{B}_q = [ \mathbf{b}_1 , \dots , \mathbf{b}_q ] \in \{ -1 , +1 \}^{ n \times q }$ , ( 6 ) while refining $\mathbf{B}_q$ is still performed by binary search using the refined scaling factors . Accordingly , Eq . ( 4 ) is a particular case of Eq . ( 6 ) when $\mathbf{M}$ is an identity matrix . Overall , our proposed importance-aware quantization scheme is comprehensive , including previous methods as a subset . In the rest of this paper , we investigate simple and efficient schemes to estimate importance metrics applicable to post-training non-uniform binary-coding-based quantization .
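The weighted refinement of the scaling factors in Eq. (6) is a small weighted least-squares solve; the binary code matrix, weight vector, and importance values below are illustrative. With equal importance it reduces to the unweighted refinement of Eq. (4), which the sketch also checks.

```python
import numpy as np

def refine_alphas(w, B, m):
    """Eq. (6): weighted least-squares refinement of the scaling factors,
    alpha = (B^T M B)^{-1} B^T M w with M = diag(m)."""
    M = np.diag(m)
    return np.linalg.solve(B.T @ M @ B, B.T @ M @ w)

# Illustrative binary code matrix with orthogonal columns (n = 8, q = 2),
# as might come out of the Greedy method.
B = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1],
              [1, 1], [1, -1], [-1, 1], [1, 1]], dtype=float)
w = np.array([0.9, -1.1, 0.4, -0.2, 0.7, -0.3, 0.5, -0.8])

alphas_eq = refine_alphas(w, B, np.ones(8))           # equal importance
alphas_unw = np.linalg.solve(B.T @ B, B.T @ w)        # plain Eq. (4)
alphas_wtd = refine_alphas(w, B, np.linspace(0.1, 1.0, 8))  # unequal importance
```

Refining the binary codes given the new scaling factors would then alternate with this step, as described for the Alternating method.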
Based on a previous classic binary coding scheme, this paper proposes to introduce a modification $m_i$ to the binarization scaling factor $\alpha$ by considering the weight magnitude. It further uses 3 hyperparameters to refine $m_i$ by constraining its upper/lower bound and exponent. Besides, this work devotes lengthy content to describing how to determine the hyperparameters.
SP:fad2af574548c00ab1e950a118a2e0d206663b94
Post-Training Weighted Quantization of Neural Networks for Language Models
1 INTRODUCTION . Training techniques for deep neural networks ( DNNs ) have been developed in ways to incur a lot of parameter redundancy to expedite seeking local minima ( Denil et al. , 2013 ; Jonathan Frankle , 2019 ) . As a result , various model compression techniques including parameter pruning ( Han et al. , 2015 ; He et al. , 2017 ) , quantization ( Courbariaux et al. , 2015 ; Rastegari et al. , 2016 ) , low-rank approximation ( N. Sainath et al. , 2013 ; Prabhavalkar et al. , 2016 ) , and knowledge distillation ( Hinton et al. , 2015 ; Polino et al. , 2018 ) are proposed to lower storage requirements and improve inference performance . Several compression techniques can be combined in a synergistic way to enhance compression ratio ( Han et al. , 2016 ; Zhu et al. , 2017 ) . In this work , we consider parameter quantization that maintains structured model formats and presents high compression ratio . Note that due to limited hardware resources , quantization is an essential method for any inference systems . In general , quantization is classified into uniform quantization based on fixed-point parameter representations ( Jacob et al. , 2018 ; Han et al. , 2016 ) and non-uniform quantization associated with the binary codes ( Zhou et al. , 2017 ; Rastegari et al. , 2016 ) or codebooks ( Choi et al. , 2017 ; Stock et al. , 2020 ) . Most DNN quantization methods are performed based on the principle of minimizing the mean squared error ( MSE ) of quantized parameters ( Rastegari et al. , 2016 ; Xu et al. , 2018 ; Zhou et al. , 2017 ) . Optimizing the MSE is also an underlying principle of low-rank approximation techniques such as the singular value decomposition ( SVD ) ( Prabhavalkar et al. , 2016 ; N. Sainath et al. , 2013 ) . Note that , however , minimizing the MSE implies that each parameter is equally important ( i.e. , squared errors from parameters are accumulated without considering importance of each weight ) . 
In practice , the impact of each parameter perturbation from quantization on training loss can be vastly different and such impact needs to be analyzed through a sensitivity study of each parameter toward a change in training loss value . In other words , minimizing the MSE ( or the Euclidean distance between original parameters and quantized parameters ) may not correspond to minimizing training loss function after quantization . Robustness to quantization error of each parameter can be expressed as sensitivity . Sensitivity of i-th parameter wi is the amount of change in the loss function when wi is perturbed . A parameter associated with high sensitivity would require relatively smaller quantization error if quantization is performed in a group manner . Several previous works acknowledge distinct sensitivity of each parameter to improve quantization quality . Note that because exact sensitivity estimation of each parameter toward loss function is highly complicated , various heuristic techniques have been introduced . For example , Hessian-weighted k-means clustering is used for codebook-based implementations ( Choi et al. , 2017 ) or Taylor series expansion to bound loss function difference is conducted to decide the optimal quantization bits of each weight ( Khoram & Li , 2018 ) . The Hessian matrix can be used to assign different numbers of quantization bits for each layer ( Dong et al. , 2019 ; Shen et al. , 2019 ) . Minimizing the reconstruction error on the output activations after each layer quantization is performed in ( Stock et al. , 2020 ) . In this paper , we propose a weighted quantization framework where quantized parameters follow the structure of the binary codes so as to achieve high compression ratio and high computational efficiency ( Rastegari et al. , 2016 ; Jeon et al. , 2020 ) . 
Specifically , given that an importance of each parameter is represented as a real number between 0 and 1 , we extract an optimal quantization solution modified from the previous binary-coding-based quantization methods that employ equal parameter importance . Similar to previous attempts , we also find that calculating accurate importance of each parameter is challenging . As a successful approximation of importance , we suggest that magnitude-based importance estimation is especially effective for post-training non-uniform quantization . 2 POST-TRAINING PARAMETER QUANTIZATION FOR LANGUAGE MODELS . The number of parameters for language models is dramatically increasing ( e.g. , GPT-3 ( Brown et al. , 2020 ) requires 175 billion parameters ) . Correspondingly , model compression for language models is becoming a mandatory process to reduce response time and inference energy . We devise a compression method considering the followings : • Recent language models are usually memory-bound because of small batch size and lacking layers of high reuse ( e.g. , conv layers ) . Thus , reducing memory footprint is critical . • Compression algorithms should be supported by dedicated kernels , designed specifically for language models if possible . • Compression-aware training is challenging and expensive if hyper-parameters are added to already huge language models ( hence , we choose a post-training method . ) Fixed-point inference using uniform quantization is not desirable for language models because of noticeable accuracy degradation ( Shen et al. , 2019 ; Jeon et al. , 2020 ) while the advantage of small computational units ( e.g. , INT8 MAC ) is insignificant for memory-bound applications . Thus , we adopt float-based parameter quantization ( i.e. , expected values of quantized parameters remains to be of full precision ) that induce a lot smaller number of quantization bits compared to fixed-point quantization ( Xu et al. , 2018 ; Stock et al. , 2020 ) . 
Recently , a kernel library , called BiQGEMM ( Jeon et al. , 2020 ) , was introduced to support binarycoding-based quantization techniques to accelerate quantized neural networks . Using lookup tables , BiQGEMM enables byte-level memory accesses and achieves 8.3× run-time memory footprints and 3.5× speed up with a mobile CPU for Transformer ( Chung et al. , 2020 ) . As a result , binary-codingbased quantization has become a practical approach to quantizing language models . As such , we restrict our interests to binary-coding-based quantization technique in this paper . Quantization-aware training is an active research area to improve model accuracy ( Courbariaux et al. , 2015 ; Lee et al. , 2018 ) . We note that in the case of language models , however , there are numerous occasions when retraining for quantization is not available . For example , quantizationaware training requires in-depth knowledge on model compression while model designers may not have such expertise . On the other hand , the original training code or the entire training data may not be shared with model compression engineers . Also , modifying the original DNN models to be aware of quantization would increase model design efforts and training time significantly . Since language models already demand significant training time and cost , adding additional training complexity by quantization-aware training would not be a practical option . As such , post-training quantization without retraining is gaining increasing attention ( Zhao et al. , 2019 ; Nagel et al. , 2019 ) . 3 WEIGHTED QUANTIZATION BASED ON THE BINARY CODES . As discussed , we choose post-training binary-coding-based quantization as our strategy to compress language DNN models efficiently . Following the Binary-Weight-Networks ( Rastegari et al. 
, 2016), which introduced the binary codes as a quantization format, a weight vector $\mathbf{w}$ is approximated as $\alpha\mathbf{b}$ using a scaling factor $\alpha \in \mathbb{R}$ and a binary vector $\mathbf{b} \in \{-1,+1\}^n$, where $n$ is the vector size. A real-valued scaling factor is shared by multiple weights, so the binary vector $\mathbf{b}$ accounts for most of the weight storage requirements. Binary codes eliminate the need for dequantization during inference, reducing the on-chip memory required for weights. In this section, we study general weighted quantization methodologies in which quantization follows the binary-code format and the quantization error incorporates sensitivity information. 3.1 GREEDY METHOD AND ALTERNATING METHOD WITHOUT IMPORTANCE CONSIDERATIONS. In general, non-uniform weight quantization methods (in the form of the binary codes) strive to minimize $\|\mathbf{w} - \alpha\mathbf{b}\|^2$. In the case of 1-bit quantization, we obtain the following analytical solution:
$$\mathbf{b}^* = \mathrm{sign}(\mathbf{w}), \quad \alpha^* = \frac{\mathbf{w}^\top \mathbf{b}^*}{n}. \tag{1}$$
For multi-bit quantization, on the other hand, there is no analytical solution (Rastegari et al., 2016; Xu et al., 2018), so various approximate methods exist.
Greedy Method As a computationally simple method, the 1-bit quantization of Eq. (1) can be extended to multi-bit ($q$-bit) quantization (Guo et al., 2017). Specifically, the $i$-th bit ($i > 1$) is quantized by minimizing the residue of the $(i-1)$-th bit quantization:
$$\min_{\alpha_i, \mathbf{b}_i} \|\mathbf{r}_{i-1} - \alpha_i\mathbf{b}_i\|^2, \quad \text{where } \mathbf{r}_{i-1} = \mathbf{w} - \sum_{j=1}^{i-1} \alpha_j\mathbf{b}_j, \quad 1 < i \le q. \tag{2}$$
The optimal solution of Eq. (2) is then given by
$$\mathbf{b}_i^* = \mathrm{sign}(\mathbf{r}_{i-1}), \quad \alpha_i^* = \frac{\mathbf{r}_{i-1}^\top \mathbf{b}_i^*}{n}. \tag{3}$$
Alternating Method The Greedy method described above is non-iterative. To reduce $\|\mathbf{w} - \sum_{i=1}^{q} \alpha_i\mathbf{b}_i\|^2$ below what the Greedy method achieves, iterative methods are necessary; increasing the number of iterations tends to lower the quantization error.
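The Greedy extension of Eqs. (1)-(3) can be sketched in a few lines (a minimal plain-Python illustration; function and variable names are ours):

```python
# Minimal sketch of the Greedy multi-bit binary-coding quantizer (Eqs. 1-3):
# each additional bit quantizes the residue left by the previous bits.
def sign(v):
    return 1 if v >= 0 else -1

def greedy_quantize(w, q):
    """Return scaling factors [alpha_1..alpha_q] and binary codes [b_1..b_q]."""
    n = len(w)
    residue = list(w)
    alphas, codes = [], []
    for _ in range(q):
        b = [sign(r) for r in residue]                      # b*_i = sign(r_{i-1})
        alpha = sum(r * s for r, s in zip(residue, b)) / n  # alpha*_i = r^T b / n
        alphas.append(alpha)
        codes.append(b)
        residue = [r - alpha * s for r, s in zip(residue, b)]
    return alphas, codes

def dequantize(alphas, codes):
    """Reconstruct w_hat = sum_i alpha_i * b_i."""
    n = len(codes[0])
    return [sum(a * b[i] for a, b in zip(alphas, codes)) for i in range(n)]
```

Since each step minimizes the norm of its own residue, the reconstruction error is non-increasing in the number of bits q.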
Once initial $\alpha$ and $\mathbf{b}$ values are calculated by the Greedy method, $\{\alpha_i\}_{i=1}^{q}$ can be refined (Guo et al., 2017) as
$$[\alpha_1, \ldots, \alpha_q] = \left(\left(\mathbf{B}_q^\top \mathbf{B}_q\right)^{-1} \mathbf{B}_q^\top \mathbf{w}\right)^\top, \quad \text{where } \mathbf{B}_q = [\mathbf{b}_1, \ldots, \mathbf{b}_q] \in \{-1,+1\}^{n \times q}. \tag{4}$$
Then, $\mathbf{B}_q$ can in turn be refined by binary search given the new $\{\alpha_i\}_{i=1}^{q}$. As a result, $\{\alpha_i\}_{i=1}^{q}$ and $\mathbf{B}_q$ are refined alternately, and the alternating refinements are repeated until there is no noticeable improvement in quantization error. This iterative procedure was introduced as the Alternating multi-bit method (Xu et al., 2018). 3.2 IMPORTANCE-AWARE WEIGHTED QUANTIZATION. Let us assume that the importance of the $i$-th parameter is normalized and given as $m_i$ ($0 \le m_i \le 1$). Then, we minimize the weighted quantization loss $\sum_{i=1}^{n} m_i (w_i - \hat{w}_i)^2$, where $w_i$ is quantized to $\hat{w}_i = \sum_{j=1}^{q} \alpha_j (\mathbf{b}_j)_i$. Before studying how to estimate importance values, we first derive modified versions of the Greedy and Alternating methods for the case when importance values are given. For 1-bit quantization, weighted quantization also has an analytical solution:
$$\mathbf{b}^* = \mathrm{sign}(\mathbf{w}), \quad \alpha^* = \frac{\sum_{i=1}^{n} m_i |w_i|}{\sum_{i=1}^{n} m_i}. \tag{5}$$
Note that if all importance values are equal (e.g., $m_i = 1$ for all $i$), Eq. (5) reduces to Eq. (1); correspondingly, Eq. (1) is a special case of Eq. (5). Compared to the conventional Greedy method, the importance-aware Greedy method only modifies the $\alpha_i$ calculation to $\alpha_i^* = \sum_{k=1}^{n} m_k |(\mathbf{r}_{i-1})_k| / \sum_{k=1}^{n} m_k$. For the importance-aware Alternating method, we first run the importance-aware Greedy method; then Eq. (4) is transformed to incorporate importance. Let us define an $n$-by-$n$ diagonal matrix $\mathbf{M} = \mathrm{diag}(m_1, \ldots, m_n)$, where each diagonal element is an importance value $m_i$. By solving the weighted linear least-squares problem, the $\alpha$ values are refined as
$$[\alpha_1, \ldots, \alpha_q] = \left(\left(\mathbf{B}_q^\top \mathbf{M} \mathbf{B}_q\right)^{-1} \mathbf{B}_q^\top \mathbf{M} \mathbf{w}\right)^\top, \quad \text{where } \mathbf{B}_q = [\mathbf{b}_1, \ldots, \mathbf{b}_q] \in \{-1,+1\}^{n \times q}, \tag{6}$$
while refining $\mathbf{B}_q$ is still performed by binary search using the refined scaling factors. Accordingly, Eq. (4) is the particular case of Eq. (6) in which $\mathbf{M}$ is the identity matrix. Overall, our proposed importance-aware quantization scheme is general and includes the previous methods as special cases. In the rest of this paper, we investigate simple and efficient schemes for estimating importance metrics applicable to post-training non-uniform binary-coding-based quantization.
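The importance-aware pieces above can be sketched together (a minimal illustration with our own names; for simplicity, the weighted least-squares refinement of Eq. (6) is solved explicitly for q = 2 rather than for general q):

```python
# Minimal sketch of importance-aware quantization. weighted_1bit implements
# Eq. (5); refine_alphas_q2 performs one alpha-refinement step of Eq. (6) for
# q = 2 codes, solving the 2x2 weighted least-squares system explicitly.
def weighted_1bit(w, m):
    b = [1 if v >= 0 else -1 for v in w]                    # still sign(w)
    alpha = sum(mi * abs(wi) for mi, wi in zip(m, w)) / sum(m)
    return alpha, b

def refine_alphas_q2(b1, b2, w, m):
    a11 = sum(m)                                  # b1^T M b1 (signs squared are 1)
    a12 = sum(mi * s1 * s2 for mi, s1, s2 in zip(m, b1, b2))
    c1 = sum(mi * s1 * wi for mi, s1, wi in zip(m, b1, w))  # (B^T M w) entries
    c2 = sum(mi * s2 * wi for mi, s2, wi in zip(m, b2, w))
    det = a11 * a11 - a12 * a12                   # b2^T M b2 equals a11 here
    return ((a11 * c1 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det)
```

With all importances equal, weighted_1bit reduces to Eq. (1), and if w lies exactly in the span of the codes, the refinement recovers the true scaling factors regardless of M.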
The paper employs binary-coding-based post-training quantization (without retraining) for language models. The key contribution is that weight importance is considered while determining the binary code (α, B). Two methods, Greedy and Alternating, are modified to use the importance. The algorithm uses a novel normalized importance, which directly uses the weight magnitude and some hyper-parameters. Because performance is sensitive to these hyper-parameters, Bayesian optimization is used to find task- and model-specific settings.
SP:fad2af574548c00ab1e950a118a2e0d206663b94
WrapNet: Neural Net Inference with Ultra-Low-Precision Arithmetic
1 INTRODUCTION. Significant progress has been made in quantizing (or even binarizing) neural networks, and numerous methods have been proposed that reduce the precision of weights, activations, and even gradients while retaining high accuracy (Courbariaux et al., 2016; Hubara et al., 2016; Li et al., 2016; Lin et al., 2017; Rastegari et al., 2016; Zhu et al., 2016; Dong et al., 2017; Zhu et al., 2018; Choi et al., 2018a; Zhou et al., 2016; Li et al., 2017; Wang et al., 2019; Jung et al., 2019; Choi et al., 2018b; Gong et al., 2019). Such quantization strategies make neural networks more hardware-friendly by leveraging fast, integer-only arithmetic, replacing multiplications with simple bit-wise operations, and reducing memory requirements and bandwidth. Unfortunately, the gains from quantization are limited because quantized networks still require high-precision arithmetic. Even if weights and activations are represented with just one bit, deep feature computation requires the summation of hundreds or even thousands of products. Performing these summations with low-precision registers results in integer overflow, contaminating downstream computations and destroying accuracy. Moreover, as multiplication costs are slashed by quantization, high-precision accumulation starts to dominate the arithmetic cost. Indeed, our own hardware implementations show that an 8-bit × 8-bit multiplier consumes comparable power and silicon area to a 32-bit accumulator. When the precision is reduced to a 3-bit × 1-bit multiplier, a 32-bit accumulator consumes more than 10× the multiplier's power and area; see Section 4.5. Evidently, low-precision accumulators are the key to further accelerating quantized nets. In custom hardware, low-precision accumulators reduce area and power requirements while boosting throughput.
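The overflow problem described above is easy to reproduce (our own toy demonstration, not from the paper): summing many small products in an 8-bit two's-complement accumulator silently wraps around.

```python
# Toy demonstration of why low-precision accumulation is dangerous: an 8-bit
# two's-complement accumulator wraps its running sum into [-128, 127].
def wrap8(v):
    return ((v + 128) % 256) - 128   # two's-complement wrap to [-128, 127]

def accumulate8(products):
    acc = 0
    for p in products:
        acc = wrap8(acc + p)         # every partial sum lives in 8 bits
    return acc
```

Twenty products of value 7 sum to 140, which exceeds 127, so the 8-bit result wraps to 140 - 256 = -116; every downstream computation then sees a wrong sign and magnitude.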
On general-purpose processors, where registers have fixed size, low-precision accumulators are exploited through bit-packing, i.e., by representing multiple low-precision integers side-by-side within a single high-precision register (Pedersoli et al., 2018; Rastegari et al., 2016; Bulat & Tzimiropoulos, 2019). A single vector instruction then performs the same operation across all of the packed numbers. For example, a 64-bit register can be used to execute eight parallel 8-bit additions, thus increasing the throughput of software implementations. Hence, the use of low-precision accumulators is advantageous for both hardware and software implementations, provided that integer overflow does not contaminate results. We propose WrapNet, a network architecture with extremely low-precision accumulators. WrapNet exploits the fact that integer computer arithmetic is cyclic, i.e., numbers are accumulated until they reach the maximum representable integer and then "wrap around" to the smallest representable integer. To deal with such integer overflows, we place a differentiable cyclic (periodic) activation function immediately after the convolution (or linear) operation, with period equal to the difference between the maximum and minimum representable integers. This strategy makes neural networks resilient to overflow, as the activations of neurons are unaffected by overflows during convolution. We explore several directions with WrapNet. On the software side, we consider the use of bit-packing for processors with or without dedicated vector instructions. In the absence of vector instructions, overflows in one packed integer may produce a carry bit that contaminates its neighboring value. We propose training regularizers that minimize the effects of such contamination artifacts, resulting in networks that leverage bit-packed computation with very little impact on final accuracy.
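The bit-packing idea, including the carry contamination between neighboring lanes, can be illustrated with a small sketch (our own toy model of eight unsigned 8-bit lanes in one 64-bit word, not the paper's implementation):

```python
# Toy illustration of bit-packing: eight unsigned 8-bit lanes packed side by
# side in one 64-bit word. A plain add on the wide word acts like eight
# parallel 8-bit adds -- until one lane overflows and its carry bit
# contaminates the neighboring lane.
def pack(lanes):
    """Pack eight values in [0, 255]; lanes[0] is the least-significant lane."""
    word = 0
    for i, v in enumerate(lanes):
        word |= (v & 0xFF) << (8 * i)
    return word

def unpack(word):
    return [(word >> (8 * i)) & 0xFF for i in range(8)]

def packed_add(a, b):
    """One wide addition, truncated to 64 bits like a real register."""
    return (a + b) & (2**64 - 1)
```

As long as every lane stays below 256, one wide add yields eight correct lane-wise sums; once a lane overflows, its carry bit silently corrupts the next lane, which is exactly the artifact the proposed regularizers target.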
For processors with vector instructions, we modify the Gemmlowp library (Jacob et al., 2016) to operate with 8-bit accumulators. Our implementation achieves up to 2.4× speed-up compared to a 32-bit accumulator implementation, even when lacking specialized instructions for 8-bit multiply-accumulate. We also demonstrate the efficacy of WrapNet in terms of cycle time, area, and energy efficiency for custom hardware designs in a commercial 28 nm CMOS technology. 2 RELATED WORK AND BACKGROUND. 2.1 NETWORK QUANTIZATION. Network quantization aims at accelerating inference by using low-precision arithmetic. In its most extreme form, weights and activations are both quantized using binary or ternary quantizers. The binary quantizer $Q_b$ corresponds to the sign function, whereas the ternary quantizer $Q_t$ maps some values to zero. Multiplications in binarized or ternarized networks (Hubara et al., 2016; Courbariaux et al., 2015; Lin et al., 2017; Rastegari et al., 2016; Zhu et al., 2016) can be implemented using bit-wise logic, leading to impressive acceleration. However, training such networks is challenging since fewer than 2 bits are used to represent activations and weights, resulting in a dramatic impact on accuracy compared to full-precision models. Binary and ternary networks are generalized to higher precision via uniform quantization, which has been shown to result in efficient hardware (Jacob et al., 2018). The multi-bit uniform quantizer $Q_u$ is given by $Q_u(x) = \mathrm{round}(x/\Delta_x)\,\Delta_x$, where $\Delta_x$ denotes the quantization step-size. The output of the quantizer is a floating-point number $x$ that can be expressed as $x = \Delta_x x_q$, where $x_q$ is the fixed-point representation of $x$. The fixed-point number $x_q$ has a "precision" or "bitwidth," which is the number of bits used to represent it.
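The uniform quantizer and the resulting integer-only inner product can be sketched as follows (a minimal illustration with our own names; note that Python's built-in round uses round-half-to-even, which is one valid choice of rounding):

```python
# Sketch of the uniform quantizer Q_u and the factorized inner product:
# the float result equals (dw * dx) times an integer-only dot product.
def quantize(x, step):
    """Return the integer (fixed-point) code x_q; x is approximately step * x_q."""
    return round(x / step)

def fixedpoint_dot(w, x, dw, dx):
    wq = [quantize(v, dw) for v in w]
    xq = [quantize(v, dx) for v in x]
    zq = sum(a * b for a, b in zip(wq, xq))   # integer-only accumulation
    return (dw * dx) * zq                     # single float rescale at the end
```

The integer partial sums zq are exactly the values that need a wide accumulator: each product is tiny, but their running sum has a much larger dynamic range.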
Note that the range of floating-point numbers representable by the uniform quantizer $Q_u$ depends on both the quantization step-size $\Delta_x$ and the quantization precision. Nonetheless, the number of different values that can be represented by the same quantizer depends only on the precision. Applying uniform quantization to both weights $w = \Delta_w w_q$ and activations $x = \Delta_x x_q$ simplifies computations, as an inner product simply becomes
$$z = \sum_i w_i x_i = \sum_i \left(\Delta_w (w_q)_i\right)\left(\Delta_x (x_q)_i\right) = (\Delta_w \Delta_x) \sum_i (w_q)_i (x_q)_i = \Delta_z z_q. \tag{1}$$
The key advantage of uniform quantization is that the core computation $\sum_i (w_q)_i (x_q)_i$ can be carried out using fixed-point (i.e., integer) arithmetic only. Results in (Gong et al., 2019; Choi et al., 2018b; Jung et al., 2019; Wang et al., 2019; Mishra et al., 2017; Mishra & Marr, 2017) have shown that high classification accuracy is attainable with low-bitwidth uniform quantization, such as 2 or 3 bits. Although $(w_q)_i$, $(x_q)_i$, and their product may have extremely low precision, the accumulated result $z_q$ of many of these products has very high dynamic range. As a result, high-precision accumulators are typically required to avoid overflows, which is the bottleneck for further arithmetic speed-ups. 2.2 LOW-PRECISION ACCUMULATION. Several approaches have been proposed that use accumulators with fewer bits to obtain speed-ups. For example, reference (Khudia et al., 2021) splits the weights into two separate matrices, one with small- and another with large-magnitude entries. If the latter matrix is sparse, acceleration is attained as most computations rely on fast, low-precision operations. However, to significantly reduce the accumulator's precision, one would need to severely decrease the magnitude of the entries of the first matrix, which would, in turn, prevent the second matrix from being sufficiently sparse to achieve acceleration. Recently, (de Bruin et al.
, 2020) proposed using layer-dependent quantization parameters to avoid overflowing accumulators with fixed precision. Fine-tuning is then used to improve performance. However, if the accumulator precision is too low (e.g., 8 bits or less), the optimized precision of activations and weights is too coarse to attain satisfactory performance. Another line of work (Sakr et al., 2019; Micikevicius et al., 2017; Wang et al., 2018) uses 16-bit floating-point accumulators for training and inference; such approaches typically require higher complexity than methods based on fixed-point arithmetic. 2.3 THE IMPACT OF INTEGER OVERFLOW. Overflow is a major problem, especially in highly quantized networks. Table 1 demonstrates that overflows occur in around 11% of the neurons in a network with 3-bit activations (A) and binary weights (W) that uses 8-bit accumulators for inference after being trained on CIFAR-10 with standard precision. Clearly, overflow has a significant negative impact on accuracy. Table 1 shows that if we use an 8-bit (instead of a 32-bit) accumulator, the accuracy of a binary-weight network with 2-bit activations drops by more than 40%, even though only 1.72% of neurons overflow. If we repeat the experiment with 3-bit activations and binary weights, the accuracy is only marginally better than a random guess. Therefore, existing methods try to avoid integer overflow by using accumulators with relatively high precision, and pay a correspondingly high price when doing arithmetic. 3 WRAPNET: DEALING WITH INTEGER OVERFLOWS. We now introduce WrapNet, which includes a cyclic activation function and an overflow penalty, enabling neural networks to use low-precision accumulators. We also present a modified quantization step-size selection strategy for activations, which retains high classification accuracy.
Finally, we show how further speed-ups can be achieved on processors with or without specialized vector instructions. We propose training a network with layers that emulate integer overflows on the fixed-point preactivations $z_q$ to maintain high accuracy. However, directly training a quantized network with an overflowing accumulator diverges (see Table 2) due to the discontinuity of the modulo operation. To facilitate training, we insert a cyclic "smooth modulo" activation immediately after every linear/convolutional layer, which not only captures the wrap-around behavior of overflows but also ensures that the activation is continuous everywhere. The proposed smooth modulo activation $c$ is the composition of a modulo function and a basis function $f$ that ensures continuity. Specifically, given a $b$-bit accumulator, our smooth modulo $c$ for fixed-point inputs is
$$f(m) = \begin{cases} m, & -\tfrac{k}{k+1}2^{b-1} \le m \le \tfrac{k}{k+1}2^{b-1}, \\ -k\,2^{b-1} - km, & m < -\tfrac{k}{k+1}2^{b-1}, \\ k\,2^{b-1} - km, & m > \tfrac{k}{k+1}2^{b-1}, \end{cases} \qquad c(z_q) = f\!\left(\mathrm{mod}\!\left(z_q + 2^{b-1},\, 2^b\right) - 2^{b-1}\right),$$
where $k$ is a hyper-parameter that controls the slope of the transition. Note that we apply constant shifts to keep the input of $f$ in $[-2^{b-1}, 2^{b-1})$. Figure 1a illustrates the smooth modulo function with two different slopes $k = 1, 4$. As $k$ increases, the cyclic activation becomes more similar to the modulo operator and has a greater range, but the transition becomes more abrupt. Since our cyclic activation is continuous and differentiable almost everywhere, standard gradient-based learning can be applied easily. A convolutional block with the cyclic activation layer is shown in Figure 1b. The convolution result passes through the cyclic activation, is multiplied by $\Delta_z$ to produce a floating-point number, and is then processed through BatchNorm and ReLU.
A fixed per-layer quantization step-size is then used to convert the floating-point output of the ReLU into a fixed-point input for the next layer . We detail the procedure to find this step-size in Section 3.2 .
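The smooth modulo activation can be sketched directly from its piecewise definition (a minimal reference implementation in plain Python, assuming integer fixed-point inputs; parameter names are ours):

```python
# Sketch of the smooth modulo activation c(z_q) for a b-bit accumulator, with
# slope parameter k: wrap the preactivation into the representable range, then
# apply the continuous piecewise-linear basis function f.
def smooth_modulo(zq, b=8, k=4):
    half = 2 ** (b - 1)
    m = ((zq + half) % (2 * half)) - half   # wrap into [-2^{b-1}, 2^{b-1})
    t = k / (k + 1) * half                  # transition threshold k/(k+1) * 2^{b-1}
    if m < -t:
        return -k * half - k * m            # descending branch, slope -k
    if m > t:
        return k * half - k * m             # descending branch, slope -k
    return m                                # identity in the central region
```

At the thresholds the two branches meet exactly (both evaluate to ±(k/(k+1))·2^{b-1}), which is the continuity property that makes gradient-based training possible.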
This paper explores to solve an often ignored issue in quantization: accumulation precision. As the bit-width of input scales down, the area/energy cost of the accumulator starts to dominate. The cyclic method proposed by the authors at the first glance is not intuitive. However, it's surprising that the surveyed models could be tuned to live with significant overflows---as long as it can be tuned, which is enabled by the "differentiable overflow" brought by the cyclic method. There are several issues to be addressed before the paper can be accepted:
SP:1415d403cc5b85e50a37458d786bf31d01045f60
The paper introduces a new neural network layer that enables training NNs with quantized activations using reduced bit-width accumulators. The cyclic activation layer makes overflows smooth instead of discontinuous, which enables better accuracy for quantized networks running on reduced bit-width accumulators. The authors also introduce overflow and carry penalties to discourage training from reaching overflow states.
Smooth Activations and Reproducibility in Deep Networks
1 INTRODUCTION. Recent developments in deep learning leave no question about the advantages of deep networks over classical methods, which relied heavily on linear convex optimization. With their unprecedented success, deep models are providing solutions in a continuously increasing number of domains in our lives. These solutions, however, while much more accurate than their convex counterparts, are usually irreproducible in the predictions they provide. While the average accuracy of deep models on some validation dataset is usually much higher than that of linear convex models, the predictions of two models trained to be identical may diverge substantially on individual examples, exhibiting Prediction Differences that can be non-negligible fractions of the actual predictions (see, e.g., Chen et al. (2020); Dusenberry et al. (2020)). Deep networks express (only) what they learned. Like humans, they may establish different beliefs as a function of the order in which they have seen training data (Achille et al., 2017; Bengio et al., 2009). Due to the huge amounts of data required to train such models, enforcing determinism (Nagarajan et al., 2018) may not be an option. Deep networks may be trained on highly distributed, parallelized systems. Thus two supposedly identical models, with the same architecture, parameters, training algorithm, and training hyper-parameters, trained on the same training dataset, will exhibit some randomness in the order in which they see the training set and apply updates, even if they are initialized identically. Due to the highly non-convex objective, such models may converge to different optima that exhibit equal average objective values but provide very different predictions on individual examples. Irreproducibility in deep models is neither the classical type of epistemic uncertainty widely studied in the literature, nor overfitting.
It differs from these phenomena in several ways: it does not diminish with more training examples like classical epistemic uncertainty, and it does not degrade test accuracy by overfitting unseen data to the training examples. While irreproducibility may be acceptable for some applications, it can be very detrimental in others, such as medical applications, where two different diagnoses for the same symptoms may be unacceptable. Furthermore, in online and/or reinforcement systems, which rely on their predictions to determine actions that, in turn, determine the remaining training examples, even small initial irreproducibility can cause large divergence between models that are supposed to be identical. One example is sponsored-advertisement online Click-Through-Rate (CTR) prediction (McMahan et al., 2013). The effect of irreproducibility in CTR prediction can go far beyond changing the predicted CTR of an example, as it may affect actions that take place downstream in a complex system. Reproducibility is a problem even if one trains only a single model, as it may be impossible to determine whether the trained model provides acceptable solutions for applications that cannot tolerate unacceptable ones. A major factor in the unprecedented success of deep networks in recent years has been the Rectified Linear Unit (ReLU) activation (Nair & Hinton, 2010). The ReLU nonlinearity together with back-propagation gives simple updates, accompanied by superior accuracy. ReLU thus became the undisputed activation used in deep learning. However, is ReLU really the best to use? While it gives better optima than those achieved with simple convex models, it imposes an extremely non-convex objective surface with many such optima. The direction of a gradient update with a gradient-based optimizer is determined by the specific example that generates the update.
Thus, the order of seeing examples or applying updates can determine which optimum is reached. The many such optima imposed by ReLU thus provide a recipe for irreproducibility. In recent years, different works started challenging the dominance of ReLU, exploring alternatives. Overviews of various activations were reported in Nwankpa et al. (2018); Pedamonti (2018). Variations on ReLU were studied in Jin et al. (2015). Activations like SoftPlus (Zheng et al., 2015), the Exponential Linear Unit (ELU) (Clevert et al., 2015), the Scaled Exponential Linear Unit (SELU) (Klambauer et al., 2017; Sakketou & Ampazis, 2019; Wang et al., 2017), and the Continuously differentiable Exponential Linear Unit (CELU) (Barron, 2017) were proposed, as well as the Gaussian Error Linear Unit (GELU) (Hendrycks & Gimpel, 2016). Specifically, the Swish activation (Ramachandran et al., 2017) (which can approximate GELU) was found through automated search to achieve superior accuracy to ReLU. Further activations similar to GELU were proposed recently: Mish (Misra, 2019) and TanhExp (Liu & Di, 2020). Unlike ReLU, many of these activations are smooth, with continuous gradients. Good properties of smooth activations were studied as early as Mhaskar (1997) (see also Du (2019); Lokhande et al. (2020)). This series of papers suggested that smooth activations, if configured properly, may be superior to ReLU in accuracy. Recent work by Xie et al. (2020), done subsequently to ours and inspired by the results we report in this paper (Lin & Shamir, 2019), also demonstrated the advantage of smooth activations for adversarial training. Our Contributions: In this paper, we first demonstrate the advantages of smooth activations for reproducibility in deep networks.
We show that not only can smooth activations improve the accuracy of deep networks, they can also achieve superior tradeoffs between reproducibility and accuracy, attaining a lower average Prediction Difference (PD) for the same or better accuracy. Smooth activations like Swish, GELU, Mish, and TanhExp all have a very similar non-monotonic form that provides neither a clear stop region (strictly 0, not merely approaching 0) nor a slope-1 region. While these activations approximate the mathematical form of ReLU, they lack these properties of ReLU. All of these activations, including SoftPlus, also require more expensive mathematical expressions, involving exponents, in some cases logarithms, or even numerically computed values (e.g., GELU). This can make deployment harder, especially on simplified hardware that supports only a limited number of operations, and can slow down training due to the heavier computations. Unlike ReLU, which can be transformed into Leaky ReLU, the smooth activations described above cannot easily be transformed into more general forms. In this work, we propose the Smooth ReLU (SmeLU), which is mathematically simple, based only on linear and quadratic expressions. It can be more easily deployed on limited hardware and can provide faster training when hardware is limited. SmeLU provides a clearly defined 0-activation region as well as a slope-1 region, is monotonic, and is extendable to a leaky or more general form. SmeLU retains the good properties of smooth activations, providing better reproducibility as well as better accuracy-reproducibility tradeoffs. Its generalized form allows even further accuracy improvements. The methodology used to construct SmeLU is shown to be even more general, allowing for more complex smooth activations, all clustered under the category of Rectified Smooth Continuous Units (RESCUs).
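The exact functional form of SmeLU appears later in the paper; the sketch below is a hypothetical reconstruction from the description above (the half-width parameter name `beta` is ours). It uses the unique quadratic whose value and derivative match the hard-0 region on one side and the identity (slope-1) region on the other, which makes the activation monotonic with a continuous gradient while using only linear and quadratic expressions:

```python
import numpy as np

def smelu(x, beta=1.0):
    """Smooth-ReLU sketch: hard-0 region, quadratic join, slope-1 region.

    The middle piece (x + beta)^2 / (4 * beta) is the unique quadratic whose
    value and derivative are 0 at x = -beta and match the identity at
    x = +beta, so the function is monotonic with a continuous gradient.
    """
    x = np.asarray(x, dtype=float)
    return np.where(
        x <= -beta, 0.0,
        np.where(x >= beta, x, (x + beta) ** 2 / (4.0 * beta)),
    )

# Region boundaries: exactly 0 on the left, exactly the identity on the right.
values = smelu([-2.0, -1.0, 0.0, 1.0, 2.0], beta=1.0)
```

Unlike Swish or GELU, this form needs no exponentials or logarithms, matching the paper's argument that SmeLU is friendlier to simplified hardware.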
Related Work: Ensembles (Dietterich, 2000) have been used to reduce uncertainty (Lakshminarayanan et al., 2017). They are also useful for reducing irreproducibility. However, they make models more complex, and they trade off accuracy for reproducibility if one attempts to keep computation costs constant (which requires reducing the capacity of each component of the ensemble). Compressing deep networks into smaller networks that attempt to describe the same information is the emerging area of distillation (Hinton et al., 2015). Predictions of a strong teacher train a weaker student model, and the student is then deployed. This approach is very common when there are ample training resources but deployment is limited, as for mobile devices. Co-distillation, proposed by Anil et al. (2018) (see also Zhang et al. (2018)), used distillation to address irreproducibility. Instead of a unidirectional transfer of knowledge, several models distill information to one another, attempting to converge to the same solution. The method requires more training resources to co-train the models, but deployment requires only a single model. A somewhat opposite approach, Anti-Distillation, was proposed by Shamir & Coviello (2020): it embraces ensembles, with an additional loss that forces their components away from one another. Each component is forced to capture a (more) different part of the objective space, and as a whole, the predictions of the ensemble are more reproducible. To the best of our knowledge, all previously reported techniques for addressing irreproducibility in deep networks required some form of ensemble. In this work, we leverage smooth activations and do not require ensembles. Outline: Section 2 proposes several PD metrics, which we use to measure irreproducibility.
Next, we overview our setup and smooth activations in Section 3, and describe SmeLU and its generalization in Section 4. Experimental results are shown in Section 5. 2 PREDICTION DIFFERENCE. The average individual per-example Prediction Difference (PD) over a set of models that are configured, trained, and supposed to be identical, measured on some validation dataset, can be defined in various ways. We refer to Shamir & Coviello (2020) for a more detailed discussion and adopt the definitions used there, describing the classification case, in which we measure the PD using some $L_p$ norm on the actual label predictions. Following their notation, denote the number of models by $M$ and the number of validation examples by $N$. Let $P_{n,m}$ be the distribution over labels predicted by model $m$ for example $n$ ($P_{n,m}(\ell)$ is the probability predicted for label $\ell$). Let $\bar{P}_n \triangleq \sum_m P_{n,m}/M$ be the expected distribution over labels over all $M$ models. Then, the $p$th-norm PD, $\Delta_p$, is given by
$$\Delta_p = \frac{1}{N}\sum_{n=1}^{N} \frac{1}{M}\sum_{m=1}^{M} \left\| P_{n,m} - \bar{P}_n \right\|_p = \frac{1}{N}\sum_{n=1}^{N} \frac{1}{M}\sum_{m=1}^{M} \left[ \sum_{\ell} \left| P_{n,m}(\ell) - \bar{P}_n(\ell) \right|^p \right]^{1/p}. \quad (1)$$
In practical large-scale systems with very high training costs, we can use $M = 2$, where for binary labels $\Delta_1 = \frac{1}{N}\sum_n |P_{n,1}(1) - P_{n,2}(1)|$, with $1$ denoting the positive label. As in Shamir & Coviello (2020), we can consider a relative PD, $\Delta_1^r$, normalizing the innermost summand in (1) by $\bar{P}_n(\ell)$, which for binary problems can be tweaked into $\tilde{\Delta}_1^r$, normalizing by $\bar{P}_n(1)$ instead. A PD, $\Delta_1^L$, can be computed only on the observed true label by replacing the innermost sum over $\ell$ in (1) with $|P_{n,m}(\ell_{\mathrm{true}}) - \bar{P}_n(\ell_{\mathrm{true}})|$ for the true label, normalizing by $\bar{P}_n(\ell_{\mathrm{true}})$. Its computation, however, requires knowledge of the true label. Finally, in classification, one can use a Hamming PD, $\Delta_H$, specifying the average fraction of labels predicted differently between model pairs.
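As a concrete sketch of the $\Delta_p$ metric of Eq. (1) (function and variable names are ours, not the paper's), computed over $M$ models' predicted label distributions:

```python
import numpy as np

def prediction_difference(P, p=1):
    """Delta_p of Eq. (1).

    P: array of shape (M, N, L) -- M models, N examples, L label probabilities.
    Returns the average, over models and examples, of the Lp distance between
    each model's prediction and the mean prediction over models.
    """
    P = np.asarray(P, dtype=float)
    P_bar = P.mean(axis=0, keepdims=True)                           # mean over models
    per_model = (np.abs(P - P_bar) ** p).sum(axis=2) ** (1.0 / p)   # shape (M, N)
    return float(per_model.mean())

# Binary case with M = 2: Delta_1 reduces to the mean |P_{n,1}(1) - P_{n,2}(1)|.
p1 = np.array([0.9, 0.2, 0.6])   # model 1, positive-class probabilities
p2 = np.array([0.7, 0.4, 0.6])   # model 2
P = np.stack([np.stack([1 - p1, p1], axis=1),
              np.stack([1 - p2, p2], axis=1)])                      # (2, 3, 2)
delta1 = prediction_difference(P, p=1)
```

For $M = 2$ and binary labels, the $L_1$ norm over both labels cancels the factor-of-two from averaging distances to the mean, recovering the simplified $\Delta_1$ expression in the text.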
This paper addresses the problem that deep neural networks (DNNs) can produce different predictions (even when they are initialized the same way) due to the stochasticity of the samples selected in mini-batch SGD and the update procedures of different optimizers, which leads to convergence to different regions of the loss surface. The authors attribute this problem to the complicated loss surface that arises from the non-smoothness of ReLU activations. They show that smooth activations can help remedy this issue: by tuning the activation to become more ReLU-like, they obtain a better tradeoff between prediction differences (i.e., consistency) and model accuracy.
SP:ce4275ab9437fd5c73c61a5ff17ed24881fdd717
Smooth Activations and Reproducibility in Deep Networks
The paper claims that smooth activations are more reproducible than ReLU. The accuracy-gain claims seem marginal and not carefully carried out; further ablation studies are needed to strengthen the conclusions on accuracy. However, the main point of the paper is reproducibility, which is measured by the 'Prediction Difference' (PD). PD (introduced in Section 2) is a measure over a set of models whose score is low if the models output consistent estimates for the same validation samples.
SP:ce4275ab9437fd5c73c61a5ff17ed24881fdd717
DynamicVAE: Decoupling Reconstruction Error and Disentangled Representation Learning
1 INTRODUCTION. The goal of disentangled representation learning is to encode input data into a low-dimensional space that preserves information about the salient factors of variation, so that each dimension of the representation corresponds to a distinct factor in the data (Bengio et al., 2013; Locatello et al., 2020; van Steenkiste et al., 2019). Learning disentangled representations benefits a variety of downstream tasks (Higgins et al., 2018; Lake et al., 2017; Locatello et al., 2019c;a; Denton et al., 2017; Mathieu et al., 2019), including abstract visual reasoning (van Steenkiste et al., 2019), zero-shot transfer learning (Burgess et al., 2018; Lake et al., 2017; Higgins et al., 2017a), and image generation (Nie et al., 2020), to name a few. Due to its central importance in various downstream applications, there is abundant literature on learning disentangled representations. Roughly speaking, there are two lines of methods toward this goal. The first category includes supervised methods (Chen & Batmanghelich, 2019; Locatello et al., 2019c; Shu et al., 2019; Bouchacourt et al., 2018; Nie et al., 2020; Yang et al., 2015), where external supervision (e.g., data generative factors) is available during training to guide the learning of disentangled representations. The second line of work focuses on unsupervised methods (Chen et al., 2016; 2018; Burgess et al., 2018; Kim & Mnih, 2018; Denton et al., 2017; Kumar et al., 2018; Fraccaro et al., 2017), which substantially relieve the need for external supervision. For this reason, in this paper, we mainly focus on unsupervised disentangled representation learning. One major challenge of unsupervised disentanglement learning is the trade-off between the reconstruction quality of the input signal and the degree of disentanglement in the latent representations. Let us take β-VAE and its variants (Burgess et al.
, 2018; Chen et al., 2018; Higgins et al., 2017a) as an example. These methods assign a large, fixed weight β in the objective function to improve disentanglement at the cost of reconstruction quality, which is highly correlated with accuracy in downstream tasks (van Steenkiste et al., 2019; Locatello et al., 2020). In order to improve reconstruction quality, researchers have proposed a dynamic learning approach, ControlVAE (Shao et al., 2020), which dynamically adjusts the weight on the KL term in the VAE objective to better balance the quality of disentangled representation learning against the reconstruction error. However, while ControlVAE allows better control of the trade-off between disentangled representation learning and reconstruction error, it does not eliminate it: one is still achieved at the expense of the other. The contribution of this paper, compared to the above state of the art, lies in demonstrating that with the proper design, the trade-off between disentangled representation learning and reconstruction error can be completely eliminated. Both objectives can be attained at the same time in a decoupled fashion, without affecting each other. More specifically, we observe that if β is kept high at the beginning of training and then lowered later in the process, the two objectives are decoupled, allowing each to be independently optimized. To the authors' knowledge, this work is the first to attain such decoupled optimization of both the quality of disentanglement and the reconstruction error. Our Contributions: In this paper, we propose a novel unsupervised disentangled representation learning method, dubbed DynamicVAE, that drives the weight of β-VAE from a large value (β > 1) (Burgess et al., 2018; Higgins et al., 2017a) to a small value (β ≤ 1) via dynamic control, achieving not only good disentanglement but also high reconstruction accuracy. We summarize the main contributions of this paper as follows.
• We propose a new model, DynamicVAE, that leverages an incremental PI controller and a moving average to evolve the desired KL-divergence along a trajectory that enables decoupling of two objectives: high-quality disentanglement and low reconstruction error.
• We provide theoretical conditions on the parameters of the PI controller that guarantee the stability of DynamicVAE.
• We experimentally demonstrate that our approach turns the weight of β-VAE from β > 1 to β ≤ 1, achieving higher reconstruction quality yet comparable disentanglement compared to prior approaches (e.g., FactorVAE). Thus, our results verify that the proposed method indeed decouples disentanglement and reconstruction accuracy without hurting either one's performance.
2 PRELIMINARIES. β-VAE and its Variants: β-VAE (Higgins et al., 2017b; Chen et al., 2018) is a popular unsupervised method for learning disentangled representations of the data generative factors (Bengio et al., 2013). Compared to the original VAE, β-VAE incorporates an extra hyperparameter β (β > 1) as the weight of the KL term in the VAE objective:
$$\mathcal{L}_\beta = \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right] - \beta\, D_{KL}\!\left(q_\phi(z|x)\,\|\,p(z)\right). \quad (1)$$
In order to discover more disentangled factors, in other variants, practitioners further add a constraint on the total information capacity, $C$, to control the capacity of the latent channels (Burgess et al., 2018) to transmit information. The constraint can be formulated as an optimization objective:
$$\mathcal{L}_\beta = \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right] - \beta \cdot \left| D_{KL}\!\left(q_\phi(z|x)\,\|\,p(z)\right) - C \right|, \quad (2)$$
where β is a large, fixed hyperparameter. As a result, when the weight β is large (e.g., 100), the algorithm tends to optimize the second term in (2), leading to much higher reconstruction error. PID Control Algorithm: PID is a simple yet effective control algorithm that can stabilize a system output at a desired value via feedback control (Stooke et al., 2020; Åström et al., 2006).
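As a small numerical sketch of the capacity-constrained objective of Eq. (2) (helper names and the diagonal-Gaussian assumption are ours, not the paper's code), using the standard closed-form KL divergence between a diagonal Gaussian posterior and a standard normal prior:

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) ), per example."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar, axis=-1)

def capacity_objective(log_likelihood, kl, beta, C):
    """Eq. (2): reconstruction term minus beta * |KL - C| (to be maximized)."""
    return log_likelihood - beta * np.abs(kl - C)

mu = np.array([[0.0, 0.5]])       # posterior means for one example, 2 latents
logvar = np.array([[0.0, 0.0]])   # unit variances
kl = gaussian_kl(mu, logvar)      # = 0.5 * 0.5^2 = 0.125
obj = capacity_objective(-10.0, kl, beta=100.0, C=0.5)
```

With a large fixed β (here 100), the |KL − C| penalty dominates the objective, which is exactly the mechanism the text blames for the high reconstruction error.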
The PID algorithm calculates an error, $e(t)$, between a set point (in this case, the desired KL-divergence) and the current value of the controlled variable (in this case, the actual KL-divergence), then applies a correction in a direction that reduces that error. The correction is the weighted sum of three terms: one proportional to the error (called P), one that is the integral of the error (called I), and one that is the derivative of the error (called D); thus the name PID. The derivative term is not recommended for noisy systems, such as ours, reducing the algorithm to PI control. The canonical form of a PI controller (applied to control $\beta(t)$) is the following:
$$\beta(t) = K_p\, e(t) + K_i \sum_{j=0}^{t} e(j), \quad (3)$$
where $\beta(t)$ is the output of the controller, which (in our case) is the β used during training at time $t$; $e(t)$ is the error between the output value and the desired value at time $t$; and $K_p$, $K_i$ denote the coefficients of the P term and the I term, respectively. Eq. (3) may be rewritten in incremental form, as follows:
$$\beta(t) = \Delta\beta(t) + \beta(t-1), \quad (4)$$
where $\beta(0)$ can be set as needed (as we show later), and
$$\Delta\beta(t) = K_p\left[e(t) - e(t-1)\right] + K_i\, e(t). \quad (5)$$
This paper adopts a nonlinear incremental form of the PI controller, described later in Section 3. 3 THE DYNAMICVAE ALGORITHM. The goal of disentangled representation learning (Burgess et al., 2018) is to maximize the log-likelihood while stabilizing the KL-divergence at a target value $C$. It can be formulated as the following constrained optimization problem: $\max_{\phi,\theta} \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right]$, s.t.
$D_{KL}\!\left(q_\phi(z|x)\,\|\,p(z)\right) = C$. (6)
In order to achieve a good trade-off between disentanglement and reconstruction accuracy, we design a controller that dynamically adjusts $\beta(t)$ in the following VAE objective to stabilize the KL-divergence at the desired value $C$:
$$\mathcal{L}_d = \mathbb{E}_{q_\phi(z|x)}\left[\log p_\theta(x|z)\right] - \beta(t)\, D_{KL}\!\left(q_\phi(z|x)\,\|\,p(z)\right). \quad (7)$$
The contribution of DynamicVAE is to evolve $\beta(t)$ along a good trajectory that achieves decoupling between disentanglement and reconstruction error. To reach this goal, we need to address the following two challenges: 1. $\beta(t)$ should change dynamically from a large value to a small one. Specifically, at the beginning of training, $\beta(t)$ should be large enough to disentangle latent factors. After that, $\beta(t)$ is required to gradually drop to a small value to optimize the reconstruction. 2. $\beta(t)$ should not change too fast or oscillate too frequently. When $\beta(t)$ drops too fast or oscillates, it may cause the KL-divergence to grow to a large value. Consequently, some latent factors may emerge earlier, so that they can potentially become entangled with each other. In this paper, we propose methods to deal with these two challenges, summarized below. A non-linear incremental PI controller: Fig. 1(a) shows the designed non-linear PI controller that dynamically adjusts the weight $\beta(t)$ of the KL term of the β-VAE based on the actual KL-divergence, $y_{KL}(t)$. Specifically, it first samples the output KL-divergence, $y_{KL}(t)$, at training step $t$. Then we use the difference $e(t)$ between the sampled KL-divergence at time $t$ and the desired value, $C$, as the feedback to the PI controller to tune $\beta(t)$. The corresponding PI algorithm is given by
$$\beta(t) = K_p\, \sigma(-e(t)) - K_i \sum_{j=0}^{t} e(j), \quad (8)$$
where $\sigma(\cdot)$ is a sigmoid function and $K_p$, $K_i$ are positive hyper-parameters for the P and I terms, respectively.
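The canonical incremental PI update of Eqs. (4)-(5) can be sketched as follows (class and variable names are illustrative, not from the paper):

```python
class IncrementalPI:
    """Incremental PI controller, Eqs. (4)-(5):
    beta(t) = beta(t-1) + Kp * (e(t) - e(t-1)) + Ki * e(t).
    """

    def __init__(self, kp, ki, beta0=0.0):
        self.kp, self.ki = kp, ki
        self.beta = beta0          # beta(0), set as needed
        self.prev_error = 0.0      # e(t-1)

    def step(self, error):
        self.beta += self.kp * (error - self.prev_error) + self.ki * error
        self.prev_error = error
        return self.beta

pi = IncrementalPI(kp=0.1, ki=0.01, beta0=1.0)
beta1 = pi.step(0.5)   # 1.0 + 0.1*(0.5-0.0) + 0.01*0.5 = 1.055
```

The incremental form only needs the previous β and the previous error, rather than the full error history required by the summation in Eq. (3), which is why it is the form carried into the non-linear controller below.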
As mentioned earlier, we need a large $\beta(t)$ at the beginning to bring the KL-divergence from a small value to a large target value, so that information can be transmitted through the latent channels per data sample. Accordingly, we adopt an incremental form of the PI controller in Eq. (8) and initialize it to a large value:
$$\beta(t) = \Delta\beta(t) + \beta(t-1), \quad (9)$$
where
$$\Delta\beta(t) = K_p\left[\sigma(-e(t)) - \sigma(-e(t-1))\right] - K_i\, e(t), \quad (10)$$
and $\beta(0)$ is a large initial value. When the PI controller is initialized to a large value $\beta(0)$, it can quickly produce a (small) KL-divergence during initial model training, preventing the emergence of entangled factors. Moving average: Since our model is trained with mini-batch data, the feedback often contains noise that causes $\beta(t)$ to oscillate. In particular, when $\beta(t)$ plunges during training, it causes the KL-divergence to rise too quickly. This may lead to multiple latent factors coming out together and becoming entangled. To mitigate this issue, we adopt a moving average to smooth the output KL-divergence used as the feedback to the PI controller:
$$y(t) = \alpha_t\, y_{KL}(t) + \alpha_{t-1}\, y_{KL}(t-1) + \cdots + \alpha_{t-T}\, y_{KL}(t-T) = \sum_{i=t-T}^{t} \alpha_i\, y_{KL}(i), \quad (11)$$
where $\alpha_i$ denotes a weight and $T$ denotes the window size over past training steps. Hybrid annealing: Control systems with a step (input) function (i.e., those where the set point can change abruptly) often suffer from an overshoot problem. An overshoot is a temporary overcompensation in which the controlled variable oscillates around the set point. In our case, it means that the actual KL-divergence may significantly (albeit temporarily) exceed the desired value when the set point changes abruptly. This effect would cause some latent factors to come out earlier than expected and become entangled, thereby producing poor-quality disentanglement.
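A compact sketch combining the non-linear incremental update of Eqs. (9)-(10) with a uniform-weight instance of the moving average of Eq. (11) (class and parameter names are illustrative; the sign convention for $e(t)$, taken here as set point minus smoothed feedback so that β grows when the KL overshoots $C$, is an assumption):

```python
import math
from collections import deque

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class DynamicBetaController:
    """Non-linear incremental PI controller on moving-average KL feedback."""

    def __init__(self, kp, ki, beta0, target_kl, window=5):
        self.kp, self.ki = kp, ki
        self.beta = beta0                     # beta(0): large initial value
        self.target = target_kl               # set point C
        self.history = deque(maxlen=window)   # moving-average window (Eq. 11)
        self.prev_error = 0.0                 # e(t-1)

    def step(self, kl_sample):
        self.history.append(kl_sample)
        y = sum(self.history) / len(self.history)   # uniform-weight average
        # Error convention (assumed): e(t) = C - y(t), so beta increases
        # when the smoothed KL exceeds the target and decreases otherwise.
        e = self.target - y
        # Eq. (10): Delta beta = Kp[sigma(-e(t)) - sigma(-e(t-1))] - Ki e(t)
        self.beta += self.kp * (sigmoid(-e) - sigmoid(-self.prev_error)) - self.ki * e
        self.prev_error = e
        return self.beta
```

The bounded sigmoid term limits how much any single noisy KL sample can move β, and the deque gives the sliding window over past feedback values.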
To address this problem , we develop a hybrid annealing method that changes the set point more gradually , as illustrated in Fig . 7 in Appendix . It combines step function with ramp function to smoothly increase the target KL-divergence in order to prevent overshoot and thus better disentangle latent factors one by one . The combination of the above three methods allows DynamicVAE to evolve β ( t ) along a favorable trajectory to separate disentanglement learning and reconstruction optimization . We summarize the proposed incremental PI algorithm in Algorithm 1 in Appendix B .
In $\beta$-VAE, one challenge is to choose the hyper-parameter $\beta$ that controls the trade-off between the reconstruction quality and the disentanglement. This paper proposes a method called DynamicVAE. Rather than using a fixed hyperparameter $\beta$, the method leverages a modified incremental Proportional-integral (PI) controller, which dynamically tunes $\beta$ at different stages of training. The method is tested on benchmark datasets.
DynamicVAE: Decoupling Reconstruction Error and Disentangled Representation Learning
1 INTRODUCTION. The goal of disentangled representation learning is to encode input data into a low-dimensional space that preserves information about the salient factors of variation, so that each dimension of the representation corresponds to a distinct factor in the data (Bengio et al., 2013; Locatello et al., 2020; van Steenkiste et al., 2019). Learning disentangled representations benefits a variety of downstream tasks (Higgins et al., 2018; Lake et al., 2017; Locatello et al., 2019c;a; Denton et al., 2017; Mathieu et al., 2019), including abstract visual reasoning (van Steenkiste et al., 2019), zero-shot transfer learning (Burgess et al., 2018; Lake et al., 2017; Higgins et al., 2017a) and image generation (Nie et al., 2020), to name a few. Owing to its central importance in these downstream applications, there is abundant literature on learning disentangled representations. Roughly speaking, methods fall into two lines. The first comprises supervised methods (Chen & Batmanghelich, 2019; Locatello et al., 2019c; Shu et al., 2019; Bouchacourt et al., 2018; Nie et al., 2020; Yang et al., 2015), where external supervision (e.g., data generative factors) is available during training to guide the learning of disentangled representations. The second comprises unsupervised methods (Chen et al., 2016; 2018; Burgess et al., 2018; Kim & Mnih, 2018; Denton et al., 2017; Kumar et al., 2018; Fraccaro et al., 2017), which substantially relieve the need for external supervision. For this reason, this paper focuses on unsupervised disentangled representation learning. One major challenge of unsupervised disentanglement learning is the trade-off between reconstruction quality of the input signal and the degree of disentanglement in the latent representations. Let us take β-VAE and its variants (Burgess et al.
, 2018; Chen et al., 2018; Higgins et al., 2017a) as an example. These methods assign a large, fixed weight β in the objective function to improve disentanglement at the cost of reconstruction quality, which is highly correlated with accuracy in downstream tasks (van Steenkiste et al., 2019; Locatello et al., 2020). To improve reconstruction quality, researchers have proposed a dynamic learning approach, ControlVAE (Shao et al., 2020), that adjusts the weight on the KL term in the VAE objective during training to better balance the quality of disentangled representation learning against reconstruction error. However, while ControlVAE allows better control of this trade-off, it does not eliminate it: one objective is still achieved at the expense of the other. The contribution of this paper, compared to the above state of the art, lies in demonstrating that with the proper design, the trade-off between disentangled representation learning and reconstruction error can be eliminated entirely. Both objectives can be attained at the same time in a decoupled fashion, without affecting each other. More specifically, we observe that if β is kept high at the beginning of training and then lowered later in the process, the two objectives are decoupled, allowing each to be optimized independently. To the authors' knowledge, this work is the first to attain such decoupled optimization of both disentanglement quality and reconstruction error. Our Contributions: In this paper, we propose a novel unsupervised disentangled representation learning method, dubbed DynamicVAE, that turns the large weight of β-VAE (β > 1) (Burgess et al., 2018; Higgins et al., 2017a) into a small value (β ≤ 1) via dynamic control, achieving not only good disentanglement but also high reconstruction accuracy. We summarize the main contributions of this paper as follows.
• We propose a new model, DynamicVAE, that leverages an incremental PI controller and a moving average to evolve the desired KL-divergence along a trajectory that enables decoupling of two objectives: high-quality disentanglement and low reconstruction error.
• We provide theoretical conditions on the parameters of the PI controller that guarantee the stability of DynamicVAE.
• We experimentally demonstrate that our approach turns the weight of β-VAE (β > 1) into β ≤ 1, achieving higher reconstruction quality with comparable disentanglement relative to prior approaches (e.g., FactorVAE). Thus, our results verify that the proposed method indeed decouples disentanglement and reconstruction accuracy without hurting either objective.

2 PRELIMINARIES. β-VAE and its Variants: β-VAE (Higgins et al., 2017b; Chen et al., 2018) is a popular unsupervised method for learning disentangled representations of the data generative factors (Bengio et al., 2013). Compared to the original VAE, β-VAE incorporates an extra hyperparameter β (β > 1) as the weight of the KL term in the VAE objective: $L_\beta = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \beta\, D_{KL}(q_\phi(z|x) \,\|\, p(z))$. (1) In order to discover more disentangled factors, other variants further add a constraint on the total information capacity, C, to control the capacity of the latent channels (Burgess et al., 2018) to transmit information. The constraint can be folded into the objective: $L_\beta = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \beta\, |D_{KL}(q_\phi(z|x) \,\|\, p(z)) - C|$, (2) where β is a large, fixed hyperparameter. As a result, when the weight β is large (e.g., 100), the algorithm tends to optimize the second term in (2), leading to much higher reconstruction error. PID Control Algorithm: PID is a simple yet effective control algorithm that stabilizes a system output at a desired value via feedback control (Stooke et al., 2020; Åström et al., 2006).
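The two objectives in Eqs. (1) and (2) above can be sketched numerically as losses to minimize; the function names here are ours, and `recon_log_lik` and `kl` stand in for the two expectation terms:

```python
def beta_vae_loss(recon_log_lik, kl, beta):
    # Negative of Eq. (1): maximize E[log p(x|z)] - beta * KL.
    return -(recon_log_lik - beta * kl)

def capacity_loss(recon_log_lik, kl, beta, c):
    # Negative of Eq. (2): a large, fixed beta pins the KL near capacity c.
    return -(recon_log_lik - beta * abs(kl - c))
```

With a large β (e.g., 100), `capacity_loss` is dominated by the |KL − C| term, which is exactly why reconstruction suffers in the fixed-β variants.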
The PID algorithm calculates an error, e(t), between a set point (in this case, the desired KL-divergence) and the current value of the controlled variable (in this case, the actual KL-divergence), then applies a correction in a direction that reduces that error. The correction is the weighted sum of three terms: one proportional to the error (called P), one that is the integral of the error (called I), and one that is the derivative of the error (called D); hence the name PID. The derivative term is not recommended for noisy systems, such as ours, reducing the algorithm to PI control. The canonical form of a PI controller (applied to control β(t)) is the following: $\beta(t) = K_p\, e(t) + K_i \sum_{j=0}^{t} e(j)$, (3) where β(t) is the controller output, i.e., the β used during training at step t; e(t) is the error between the output value and the desired value at time t; and $K_p$, $K_i$ denote the coefficients of the P and I terms, respectively. Eq. (3) may be rewritten in incremental form, as follows: $\beta(t) = \Delta\beta(t) + \beta(t-1)$, (4) where β(0) can be set as needed (as we show later), and $\Delta\beta(t) = K_p\,[e(t) - e(t-1)] + K_i\, e(t)$. (5) This paper adopts a nonlinear incremental form of the PI controller, described later in Section 3.

3 THE DYNAMICVAE ALGORITHM. The goal of disentangled representation learning (Burgess et al., 2018) is to maximize the log-likelihood while stabilizing the KL-divergence at a target value C. It can be formulated as the following constrained optimization problem: $\max_{\phi,\theta}\ \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)]$, s.t.
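The incremental PI form of Eqs. (4)–(5) can be sketched as a small stateful class; the gains and initial value below are illustrative, not the paper's:

```python
class IncrementalPI:
    """Minimal sketch of the incremental PI controller of Eqs. (4)-(5):
    beta(t) = beta(t-1) + Kp*[e(t) - e(t-1)] + Ki*e(t)."""

    def __init__(self, kp, ki, beta0):
        self.kp, self.ki = kp, ki
        self.beta = beta0          # beta(0), set as needed
        self.prev_err = 0.0        # e(t-1)

    def step(self, error):
        # Eq. (5): increment computed from the current and previous error.
        dbeta = self.kp * (error - self.prev_err) + self.ki * error
        self.prev_err = error
        self.beta += dbeta         # Eq. (4)
        return self.beta
```

Note that once the error stops changing, only the integral (Ki) term keeps moving β, which is the usual behavior of the incremental form.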
$D_{KL}(q_\phi(z|x) \,\|\, p(z)) = C$. (6) To achieve a good trade-off between disentanglement and reconstruction accuracy, we design a controller that dynamically adjusts β(t) in the following VAE objective so as to stabilize the KL-divergence at the desired value C: $L_d = \mathbb{E}_{q_\phi(z|x)}[\log p_\theta(x|z)] - \beta(t)\, D_{KL}(q_\phi(z|x) \,\|\, p(z))$. (7) The contribution of DynamicVAE is to evolve β(t) along a trajectory that decouples disentanglement from reconstruction error. To reach this goal, we must address two challenges: 1. β(t) should dynamically change from a large value to a small one. Specifically, at the beginning of training, β(t) should be large enough to disentangle latent factors; afterwards, β(t) should gradually drop to a small value to optimize the reconstruction. 2. β(t) should not change too fast or oscillate too frequently. When β(t) drops too fast or oscillates, the KL-divergence may grow sharply; consequently, some latent factors may emerge early and become entangled with each other. In this paper, we propose methods to address these two challenges, summarized below. A non-linear incremental PI controller: Fig. 1(a) shows the designed non-linear PI controller, which dynamically adjusts the weight β(t) on the KL term of the β-VAE based on the actual KL-divergence, $y_{KL}(t)$. Specifically, it first samples the output KL-divergence, $y_{KL}(t)$, at training step t. It then uses the difference e(t) between the sampled KL-divergence and the desired value C as feedback to the PI controller to tune β(t). The corresponding PI law is $\beta(t) = K_p\, \sigma(-e(t)) - K_i \sum_{j=0}^{t} e(j)$, (8) where σ(·) is the sigmoid function and $K_p$, $K_i$ are positive hyper-parameters for the P and I terms, respectively.
As mentioned earlier, we need a large β(t) at the beginning to drive the KL-divergence from a small value up to a large target value, so that information can be transmitted through the latent channels for each data sample. Accordingly, we adopt an incremental form of the PI controller in Eq. (8) and initialize it to a large value: $\beta(t) = \Delta\beta(t) + \beta(t-1)$, (9) where $\Delta\beta(t) = K_p\,[\sigma(-e(t)) - \sigma(-e(t-1))] - K_i\, e(t)$, (10) and β(0) is a large initial value. When the PI controller is initialized to a large β(0), it quickly produces a (small) KL-divergence during initial model training, preventing the emergence of entangled factors. Moving average: Since our model is trained on mini-batches, the sampled KL-divergence is noisy, which can cause β(t) to oscillate. In particular, when β(t) plunges during training, the KL-divergence can rise too quickly, which may lead to multiple latent factors emerging together and becoming entangled. To mitigate this issue, we smooth the output KL-divergence with a moving average before feeding it back to the PI controller: $y(t) = \alpha_t\, y_{KL}(t) + \alpha_{t-1}\, y_{KL}(t-1) + \cdots + \alpha_{t-T}\, y_{KL}(t-T) = \sum_{i=t-T}^{t} \alpha_i\, y_{KL}(i)$, (11) where $\alpha_i$ denotes the weight of step i and T denotes the window size over past training steps. Hybrid annealing: Control systems driven by a step (input) function (i.e., those where the set point can change abruptly) often suffer from overshoot. An overshoot is a temporary overcompensation in which the controlled variable oscillates around the set point. In our case, it means that the actual KL-divergence may significantly (albeit temporarily) exceed the desired value when the set point changes abruptly. This would cause some latent factors to emerge earlier than expected and become entangled, producing poor-quality disentanglement.
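The nonlinear incremental PI of Eqs. (9)–(10) combined with the moving-average feedback of Eq. (11) can be sketched as below. The error convention e(t) = C − y(t) (as in ControlVAE), the uniform averaging weights, and all hyperparameter values are our assumptions, chosen so that β falls gently while the KL-divergence is below target:

```python
import math
from collections import deque

class DynamicVAEController:
    """Sketch of the nonlinear incremental PI (Eqs. 9-10) with the
    moving-average KL feedback (Eq. 11)."""

    def __init__(self, kp, ki, beta0, window=10):
        self.kp, self.ki = kp, ki
        self.beta = beta0                      # large beta(0)
        self.prev_err = 0.0
        self.history = deque(maxlen=window)    # last T KL samples

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def step(self, kl_sample, target_kl):
        self.history.append(kl_sample)
        y = sum(self.history) / len(self.history)   # Eq. (11), uniform weights
        err = target_kl - y                         # e(t) = C - y(t), assumed
        dbeta = (self.kp * (self._sigmoid(-err) - self._sigmoid(-self.prev_err))
                 - self.ki * err)                   # Eq. (10)
        self.prev_err = err
        self.beta += dbeta                          # Eq. (9)
        return self.beta
```

With this sign convention, a KL-divergence below the target C accumulates positive error, which steadily lowers β and lets the KL rise toward C.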
To address this problem, we develop a hybrid annealing method that changes the set point more gradually, as illustrated in Fig. 7 in the Appendix. It combines a step function with a ramp function to smoothly increase the target KL-divergence, preventing overshoot and thus better disentangling latent factors one by one. The combination of the above three methods allows DynamicVAE to evolve β(t) along a favorable trajectory that separates disentanglement learning from reconstruction optimization. We summarize the proposed incremental PI algorithm in Algorithm 1 in Appendix B.
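One way to sketch such a hybrid (ramp + step) set-point schedule is below; the exact schedule shape, parameter names, and the quantization trick are our assumptions, not the paper's Fig. 7:

```python
def annealed_set_point(step, c_start, c_max, ramp_steps, n_steps=0):
    """Target KL rises from c_start to c_max over ramp_steps training
    steps (ramp), optionally quantized into n_steps discrete jumps (step)."""
    if step >= ramp_steps:
        return c_max
    frac = step / ramp_steps                 # ramp component
    if n_steps > 0:                          # step component: quantize ramp
        frac = int(frac * n_steps) / n_steps
    return c_start + (c_max - c_start) * frac
```

Feeding this slowly rising set point to the PI controller avoids the abrupt jump in C that causes the KL-divergence to overshoot.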
This paper introduces a strategy for controlling the beta value of a beta-VAE during training using approaches from control theory, allowing it to target a designated level of KL divergence between the encoder distribution and the prior. This is done in a way that aims to achieve good reconstructions while maintaining disentangling performance. It can be viewed as a refinement of the ControlVAE approach of Shao et al. (2020), varying only in the specifics of the strategy used.
Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning
1 INTRODUCTION. Many real-world multi-agent tasks contain scenarios in which an agent must deal with varying numbers and/or types of cooperative agents, antagonistic enemies, or other entities. Agents, however, can often select their optimal actions while ignoring a subset of agents/entities. For example, in the sport of soccer, a "breakaway" occurs when an attacker with the ball passes the defense and only needs to beat the goalkeeper in order to score (see Figure 1). In this situation, only the opposing goalkeeper is immediately relevant to the attacker's success, so the attacker can safely ignore players other than the goalkeeper for the time being. By ignoring irrelevant context, the attacker can better generalize this experience to its next breakaway. Furthermore, soccer takes many forms, from casual 5 vs. 5 to full-scale 11 vs. 11 matches, and breakaways occur in all of them. If agents can identify independent patterns of behavior such as breakaways, they should be able to learn more efficiently and share their experiences across all forms of soccer. Value function factoring approaches attempt to leverage independences between agents, such as those in our soccer example, by learning value functions as a combination of independent factors that depend on disjoint subsets of the state and action spaces (Koller & Parr, 1999). These subsets are typically fixed in advance using domain knowledge about the problem at hand, and thus do not scale to complex domains where dependencies are unknown and may shift over time. Recent approaches in cooperative deep multi-agent reinforcement learning (MARL) factor value functions into separate components for each agent's action and observation space in order to enable decentralized execution (e.g., VDN (Sunehag et al., 2018), QMIX (Rashid et al., 2018)). These approaches learn a utility function for each agent that depends only on the agent's own action and its observations.
The global Q-value is then predicted as some monotonic combination of these utilities in order to allow agents to greedily select their actions with local information while maximizing the global Q. These approaches effectively leverage independence between agents' local actions and observations. However, we note that observable entities are provided by the environment and are not all necessarily relevant to an agent's value function. We build on these recent approaches by additionally factoring the observation space of each agent into factors for sub-groups of observed entities. Unlike classic works that factor the state or observation spaces, our work does not depend on fixed subsets of features designated through domain knowledge. Instead, we propose to randomly select sub-groups of observed entities and "imagine" the predicted utilities within these groups for each agent. These terms will not account for potential interactions outside the groups, so we include additional factors that estimate the effect of the entities outside each sub-group on each agent's utility. In order to estimate the true returns, we combine all factors using a mixing network (as in QMIX, Rashid et al., 2018), which allows our model to weight factors based on the full state context. We hypothesize this approach is beneficial for two reasons: 1) randomly partitioning entities and predicting returns from disjoint factors allows our model to explore all possible independence relationships among agents and entities, teaching agents to ignore irrelevant context when possible, and 2) by teaching our models when they can ignore irrelevant context, they will learn more efficiently across varied settings that share common patterns of behavior, such as breakaways in soccer. The loss for training randomized factorization is added to the QMIX loss (i.e., using full observations) as an auxiliary objective.
Our reasoning is again twofold: 1) we must learn the true returns to use as the target prediction for a Q-learning loss; 2) we do not know a priori which entities are unnecessary, and thus need to learn policies that act on full observations. Our entity-wise factoring procedure can be implemented easily in practice using a simple masking procedure in attention-based models. Furthermore, by leveraging attention models, we can apply our approach to domains with varying entity quantities. Just as a soccer agent experiencing a breakaway can generalize its behavior across settings (5 vs. 5, 11 vs. 11, etc.) if it ignores irrelevant context, we hypothesize that our approach will improve performance across settings with variable agent and entity configurations. We propose Randomized Entity-wise Factorization for Imagined Learning (REFIL) and test it on complex StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019) tasks with varying agent types and quantities, finding that it attains improved performance over state-of-the-art methods.

2 BACKGROUND AND PRELIMINARIES. In this work, we consider the decentralized partially observable Markov decision process (Dec-POMDP) (Oliehoek et al., 2016), which describes fully cooperative multi-agent tasks. Specifically, we utilize the setting of Dec-POMDPs with entities (Schroeder de Witt et al., 2019). Dec-POMDPs with entities are described as tuples (S, U, O, P, r, E, A, Φ, µ). E is the set of entities in the environment. Each entity e has a state representation $s^e$, and the global state is the set $s = \{s^e \mid e \in E\} \in S$. Some entities can be agents $a \in A \subseteq E$. Non-agent entities are parts of the environment that are not controlled by learning policies (e.g., landmarks, obstacles, agents with fixed behavior). The state features of each entity comprise two parts, $s^e = [f^e, \phi^e]$, where $f^e$ represents the description of an entity's current state (e.g.
, position, orientation, velocity, etc.), while $\phi^e \in \Phi$ represents the entity's type (e.g., outfield player, goalkeeper, etc.), of which there is a discrete set. An entity's type affects the state dynamics as well as the reward function and, importantly, remains fixed for the duration of the entity's existence. Not all entities may be visible to each agent, so we define a binary observability mask $\mu(s^a, s^e) \in \{0, 1\}$, where agents can always observe themselves: $\mu(s^a, s^a) = 1,\ \forall a \in A$. Thus, an agent's observation is defined as $o^a = \{s^e \mid \mu(s^a, s^e) = 1,\ e \in E\} \in O$. Each agent a can execute actions $u^a$, and the joint action of all agents is denoted $u = \{u^a \mid a \in A\} \in U$. P is the state transition function, which defines the probability $P(s' \mid s, u)$. $r(s, u)$ is the reward function, which maps the global state and joint actions to a single scalar reward. We do not consider entities being added during an episode, but they may become inactive (e.g., a unit dying in StarCraft), in which case they no longer affect transitions and rewards. Since s and u are sets, their ordering does not matter, and our modeling construct should account for this (e.g., by modeling with permutation invariance/equivariance (Lee et al., 2019)). In many domains, the set of entity types present, $\{\phi^e \mid e \in E\}$, is fixed across episodes. We are particularly interested in cases where the quantity and types of entities vary between episodes, as identifying independence relationships between entities is crucial to generalizing experience effectively in these cases. Learning for Dec-POMDPs: We aim to learn a set of policies that maximize expected discounted reward (returns) in some MDP. Q-learning is specifically concerned with learning an accurate action-value function $Q^{tot}$ (defined below), and using this function to select the actions that maximize expected returns.
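The observation definition above, $o^a = \{s^e \mid \mu(s^a, s^e) = 1\}$, can be sketched with a plain predicate standing in for the mask µ; the entity names and state tuples below are purely illustrative:

```python
def build_observation(agent, entity_states, visible):
    """Collect the states of all entities the agent can see; the agent
    always observes itself, mirroring mu(s^a, s^a) = 1."""
    return {e: s for e, s in entity_states.items()
            if e == agent or visible(agent, e)}
```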
The optimal Q-function for the Dec-POMDP setting is defined as: $Q^{tot}(s, u) := \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^t\, r(s_t, u_t) \mid s_0 = s,\ u_0 = u,\ s_{t+1} \sim P(\cdot \mid s_t, u_t),\ u_{t+1} = \arg\max Q^{tot}(s_{t+1}, \cdot)\big] = r(s, u) + \gamma\, \mathbb{E}\big[\max Q^{tot}(s', \cdot) \mid s' \sim P(\cdot \mid s, u)\big]$. (1) Partial observability is typically handled by using the history of actions and observations as a proxy for state, typically processed by a recurrent neural network (RNN, Hausknecht & Stone, 2015): $Q^{tot}_\theta(\tau_t, u_t) \approx Q^{tot}(s_t, u_t)$, where the trajectory (i.e., action-observation history) is $\tau^a_t := (o^a_0, u^a_0, \ldots, o^a_t)$ and $\tau_t := \{\tau^a_t\}_{a \in A}$. Work in deep reinforcement learning (Mnih et al., 2015) has popularized the use of neural networks as function approximators for learning Q-functions, trained by minimizing the loss function: $L(\theta) := \mathbb{E}\big[\big(y^{tot}_t - Q^{tot}_\theta(\tau_t, u_t)\big)^2 \mid (\tau_t, u_t, r_t, \tau_{t+1}) \sim D\big]$, with target $y^{tot}_t = r_t + \gamma\, Q^{tot}_{\bar\theta}\big(\tau_{t+1}, \arg\max Q^{tot}_\theta(\tau_{t+1}, \cdot)\big)$, (2) where $\bar\theta$ are the parameters of a target network that is copied from θ periodically to improve stability (Mnih et al., 2015) and D is a replay buffer (Lin, 1992) that stores transitions collected by an exploratory policy (typically ε-greedy). Double deep Q-learning (van Hasselt et al., 2016) mitigates overestimation of the learned values by using actions that maximize $Q^{tot}_\theta$ in the target network $Q^{tot}_{\bar\theta}$. Value Function Factorization: Centralized training for decentralized execution (CTDE) has been a major focus of recent efforts in deep multi-agent RL (Lowe et al., 2017; Foerster et al., 2018; Sunehag et al., 2018; Rashid et al., 2018; Iqbal & Sha, 2019). Some work achieves CTDE by introducing methods for factoring Q-functions into monotonic combinations of per-agent utilities, each depending only on a single agent's history of actions and observations, $Q^a(\tau^a, u^a)$.
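The double-Q target $y^{tot}_t$ from Eq. (2) can be sketched as follows, with the joint-action values flattened into a vector; the function name and the toy gamma are ours:

```python
import numpy as np

def double_q_target(reward, next_q_online, next_q_target, gamma=0.99):
    """Online network (theta) picks the maximizing joint action;
    target network (theta-bar) evaluates it, as in van Hasselt et al. (2016)."""
    a_star = int(np.argmax(next_q_online))
    return reward + gamma * next_q_target[a_star]
```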
This factorization allows agents to independently maximize their local utility functions in a decentralized manner, with their selected actions combining to form the optimal joint action. This factored representation can only represent a limited subset of all possible value functions (Böhmer et al., 2020); however, these methods tend to perform better empirically than those that learn unfactored joint-action value functions, most likely because they exploit independence properties among agents (Oliehoek et al., 2008). Sunehag et al. (2018) introduce value decomposition networks (VDN), which decompose the total Q-value as a sum of per-agent utilities: $Q^{tot}(\tau, u) := \sum_a Q^a(\tau^a, u^a)$. QMIX (Rashid et al., 2018) extends this approach with a more expressive factorization. We describe QMIX, and how we build our randomized factorization approach on top of it, in Section 3.1. Attention Mechanisms for MARL: Attention models have recently generated intense interest due to their ability to incorporate information across large contexts, including in the MARL literature (Jiang & Lu, 2018; Iqbal & Sha, 2019; Long et al., 2020). Importantly for our purposes, they are able to process variable-sized sets of fixed-length vectors (in our case, entities). At the core of these models is a parameterized transformation known as multi-head attention (Vaswani et al., 2017). This transformation allows entities to selectively extract information from other entities based on their local context. We define X as a matrix where each row corresponds to an entity (either its state representation or a transformed representation of it). The global state s can be represented in matrix form as $X^E$, where $X_{e,*} = s^e$. Our models consist of entity-wise feedforward layers (denoted eFF(X)) and multi-head attention layers (denoted MHA(A, X, M)). Entity-wise feedforward layers apply an identical linear transformation to all input entities.
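The VDN sum and the monotonic-mixing idea described above can be sketched numerically. The one-layer mix with non-negative weights below is our simplification to illustrate monotonicity, not QMIX's actual hypernetwork-generated mixing network:

```python
import numpy as np

def vdn_qtot(agent_utilities, chosen_actions):
    # VDN: Q_tot(tau, u) = sum over agents of Q^a(tau^a, u^a).
    return sum(float(q[u]) for q, u in zip(agent_utilities, chosen_actions))

def monotonic_mix(agent_qs, weights, bias):
    # Non-negative weights guarantee dQ_tot/dQ^a >= 0, so per-agent
    # greedy actions also maximize the mixed Q_tot.
    w = np.abs(np.asarray(weights))
    return float(w @ np.asarray(agent_qs) + bias)
```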
Multi-head attention layers serve as a mechanism to integrate information across entities. They take three arguments: the set of agents A for which to compute an output vector, the matrix $X \in \mathbb{R}^{|E| \times d}$, where d is the dimensionality of the input representations, and a mask $M \in \mathbb{R}^{|A| \times |E|}$. The layer outputs a matrix $H \in \mathbb{R}^{|A| \times h}$, where h is the hidden dimension of the layer. The row $H_{a,*}$ corresponds to a weighted sum of linearly transformed representations from all entities selected by agent a. Importantly, if the mask entry $M_{a,e} = 0$, then entity e's representation cannot be included in $H_{a,*}$. Masking serves two important purposes for us: 1) it enables decentralized execution, by providing the mask $M^\mu_{a,e} = \mu(s^a, s^e)$ so that agents can only see entities observable by them in the environment, and 2) it enables us to "imagine" the returns among sub-groups of entities. We integrate entity-wise feedforward layers and multi-head attention into QMIX in order to adapt it to settings where the number of agents and entities is variable, and build our approach from there. The exact process of computing attention layers, as well as the specifics of our attention-augmented version of QMIX, are described in detail in the Appendix.

3 RANDOMIZED ENTITY-WISE FACTORIZATION FOR IMAGINED LEARNING. We now introduce our method, Randomized Entity-wise Factorization for Imagined Learning (REFIL). As discussed in Section 2, value function factorization approaches for cooperative deep MARL are motivated by their ability to exploit independence between agents while enabling decentralized execution with centralized training. We note that an agent's choice of optimal actions is often independent of a subset of its observed entities (cf. the soccer breakaway example from Section 1), in addition to the choice of other agents' actions.
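A single-head sketch of the masked attention layer MHA(A, X, M) described earlier, where a zero mask entry keeps an entity's value out of an agent's output row; this is a minimal stand-in for the multi-head version, with shapes chosen for illustration:

```python
import numpy as np

def masked_attention(Q, K, V, M):
    """Scaled dot-product attention over entities with a {0,1} mask M of
    shape |A| x |E|; masked scores are pushed to -inf before the softmax."""
    d = Q.shape[-1]
    scores = (Q @ K.T) / np.sqrt(d)
    scores = np.where(M > 0, scores, -1e9)      # zero-mask => ~zero weight
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)          # row-wise softmax
    return w @ V
```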
Furthermore , we conjecture that agents robust to irrelevant entities should be more effective in dynamic environments with variable numbers of agents , as they are better able to identify shared patterns of behavior ( e.g. , breakaways exist in all forms of soccer ) . We do not know a priori which entities an agent can disregard , so we must consider all possible sub-groups of entities . As such , we propose to factor value functions by imagining returns in random sub-groups .
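The random sub-group construction can be sketched as a pair of complementary attention masks: a within-group mask for the "imagined" utilities and its complement for the out-of-group interaction terms. The 50/50 bipartition probability below is our assumption for illustration:

```python
import numpy as np

def random_group_masks(n_entities, rng):
    """Randomly bipartition entities; return the within-group pairwise
    mask and its complement (out-of-group interactions)."""
    group = rng.random(n_entities) < 0.5
    same = (group[:, None] == group[None, :]).astype(int)
    return same, 1 - same
```

Each entity always shares a group with itself (the mask diagonal is all ones), so every agent's imagined utility at least conditions on its own state.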
This paper proposes an observation factorization method to avoid the influence of irrelevant parts of the observation on value estimation. Specifically, the authors design an entity-wise attention network with a masking procedure. This network filters out the irrelevant part of each agent's original observation. The output is used to estimate the individual Q-values, which are also fed into the mixing network to generate $Q_{tot}$. The two kinds of $Q_{tot}$ are trained together by combining the two loss functions linearly with a hyper-parameter. Experimental results show that REFIL combined with QMIX surpasses vanilla QMIX and VDN in several SMAC scenarios.
Randomized Entity-wise Factorization for Multi-Agent Reinforcement Learning
1 INTRODUCTION Many real-world multi-agent tasks contain scenarios in which an agent must deal with varying numbers and/or types of cooperative agents , antagonist enemies or other entities . Agents , however , can often select their optimal actions while ignoring a subset of agents/entities . For example , in the sport of soccer , a “ breakaway ” occurs when an attacker with the ball passes the defense and only needs to beat the goalkeeper in order to score ( see Figure 1 ) . In this situation , only the opposing goalkeeper is immediately relevant to the attacker ’ s success , so the attacker can safely ignore players other than the goalkeeper for the time being . By ignoring irrelevant context , the attacker can generalize this experience better to its next breakaway . Furthermore , soccer takes many forms , from casual 5 vs. 5 to full scale 11 vs. 11 matches , and breakaways occur in all . If agents can identify independent patterns of behavior such as breakaways , they should be able to learn more efficiently as well as share their experiences across all forms of soccer . Value function factoring approaches attempt to leverage independences between agents , such as those in our soccer example , by learning value functions as a combination of independent factors that depend on disjunct subsets of the state and action spaces ( Koller & Parr , 1999 ) . These subsets are typically fixed in advance using domain knowledge about the problem at hand , and thus are not scalable to complex domains where dependencies are unknown and may shift over time . Recent approaches in cooperative deep multi-agent reinforcement learning ( MARL ) factor value functions into separate components for each agent ’ s action and observation space in order to enable decentralized execution ( e.g. , VDN ( Sunehag et al. , 2018 ) , QMIX ( Rashid et al. , 2018 ) ) . These approaches learn a utility function for each agent that only depends on the agent ’ s own action and its observations . 
The global Q-value is then predicted as some monotonic combination of these utilities in order to allow agents to greedily select their actions with local information while maximizing the global Q . These approaches are able to effectively leverage independence between agents ’ local actions and observations , however , we note that observable entities are provided by the environment and are not all necessarily relevant to an agent ’ s value function . We build on these recent approaches by additionally factoring the observation space of each agent into factors for sub-groups of observed entities . Unlike classic works which factor the state or observation spaces , our work does not depend on fixed subsets of features designated through domain knowledge . Instead , we propose to randomly select sub-groups of observed entities and “ imagine ” the predicted utilities within these groups for each agent . These terms will not account for potential interactions outside of the groups , so we include additional factors that estimate the effect of the entities outside of each sub-group on each agent ’ s utility . In order to estimate the true returns , we combine all factors using a mixing network ( as in QMIX , Rashid et al. , 2018 ) , which allows our model to weight factors based on the full state context . We hypothesize this approach is beneficial for two reasons : 1 ) randomly partitioning entities and predicting returns from disjunct factors allows our model to explore all possible independence relationships among agents and entities , teaching agents to ignore irrelevant context when possible and 2 ) by teaching our models when they can ignore irrelevant context , they will learn more efficiently across varied settings that share common patterns of behavior , such as breakaways in soccer . The loss for training randomized factorization is added to the QMIX loss ( i.e. , using full observations ) as an auxiliary objective . 
Our reasoning is again twofold: 1) we must learn the true returns to use as a target prediction for a Q-learning loss, and 2) we do not know a priori which entities are unnecessary and thus need to learn policies that act on full observations. Our entity-wise factoring procedure can be implemented easily in practice by using a simple masking procedure in attention-based models. Furthermore, by leveraging attention models, we can apply our approach to domains with varying entity quantities. Just as a soccer agent experiencing a breakaway can generalize their behavior across settings (5 vs. 5, 11 vs. 11, etc.) if they ignore irrelevant context, we hypothesize that our approach will improve performance across settings with variable agent and entity configurations. We propose Randomized Entity-wise Factorization for Imagined Learning (REFIL) and test on complex StarCraft Multi-Agent Challenge (SMAC) (Samvelyan et al., 2019) tasks with varying agent types and quantities, finding it attains improved performance over state-of-the-art methods. 2 BACKGROUND AND PRELIMINARIES. In this work, we consider the decentralized partially observable Markov decision process (Dec-POMDP) (Oliehoek et al., 2016), which describes fully cooperative multi-agent tasks. Specifically, we utilize the setting of Dec-POMDPs with entities (Schroeder de Witt et al., 2019). Dec-POMDPs with entities are described as tuples $(S, U, O, P, r, E, A, \Phi, \mu)$. $E$ is the set of entities in the environment. Each entity $e$ has a state representation $s^e$, and the global state is the set $s = \{s^e \mid e \in E\} \in S$. Some entities can be agents $a \in A \subseteq E$. Non-agent entities are parts of the environment that are not controlled by learning policies (e.g., landmarks, obstacles, agents with fixed behavior). The state features of each entity comprise two parts: $s^e = [f^e, \phi^e]$, where $f^e$ represents the description of an entity's current state (e.g.
, position, orientation, velocity, etc.), while $\phi^e \in \Phi$ represents the entity's type (e.g., outfield player, goalkeeper, etc.), of which there is a discrete set. An entity's type affects the state dynamics as well as the reward function and, importantly, it remains fixed for the duration of the entity's existence. Not all entities may be visible to each agent, so we define a binary observability mask $\mu(s^a, s^e) \in \{0, 1\}$, where agents can always observe themselves: $\mu(s^a, s^a) = 1,\ \forall a \in A$. Thus, an agent's observation is defined as $o^a = \{s^e \mid \mu(s^a, s^e) = 1,\ e \in E\} \in O$. Each agent $a$ can execute actions $u^a$, and the joint action of all agents is denoted $u = \{u^a \mid a \in A\} \in U$. $P$ is the state transition function, which defines the probability $P(s' \mid s, u)$. $r(s, u)$ is the reward function, which maps the global state and joint actions to a single scalar reward. We do not consider entities being added during an episode, but they may become inactive (e.g., a unit dying in StarCraft), in which case they no longer affect transitions and rewards. Since $s$ and $u$ are sets, their ordering does not matter, and our modeling construct should account for this (e.g., by modeling with permutation invariance/equivariance (Lee et al., 2019)). In many domains, the set of entity types present $\{\phi^e \mid e \in E\}$ is fixed across episodes. We are particularly interested in cases where the quantity and types of entities vary between episodes, as identifying independence relationships between entities is crucial to generalizing experience effectively in these cases. Learning for Dec-POMDPs. We aim to learn a set of policies that maximize expected discounted reward (returns) in some MDP. Q-learning is specifically concerned with learning an accurate action-value function $Q^{tot}$ (defined below), and using this function to select the actions that maximize expected returns.
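The observation definition above can be written directly as a filter over the entity states. This tiny sketch (function names are ours) uses a hypothetical distance-based visibility predicate as a stand-in for the mask µ:

```python
def observation(agent_state, entity_states, mu):
    """o^a = { s^e | mu(s^a, s^e) = 1 }: the agent observes exactly the
    entities the observability mask makes visible (the agent itself is
    always included, since mu(s^a, s^a) = 1)."""
    return [s_e for s_e in entity_states if mu(agent_state, s_e)]

# Hypothetical sight-radius mask over 1-D positions (illustrative only).
def sight_mask(radius):
    return lambda s_a, s_e: abs(s_a - s_e) <= radius
```

Because the result is a set of per-entity states rather than a fixed-length vector, downstream models must handle variable-sized inputs, which is what motivates the attention architecture described later.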
The optimal Q-function for the Dec-POMDP setting is defined as:

$$Q^{tot}(s, u) := \mathbb{E}\Big[\textstyle\sum_{t=0}^{\infty} \gamma^t\, r(s_t, u_t) \,\Big|\, s_0 = s,\ u_0 = u,\ s_{t+1} \sim P(\cdot \mid s_t, u_t),\ u_{t+1} = \arg\max Q^{tot}(s_{t+1}, \cdot)\Big] = r(s, u) + \gamma\, \mathbb{E}\big[\max Q^{tot}(s', \cdot) \mid s' \sim P(\cdot \mid s, u)\big]. \quad (1)$$

Partial observability is typically handled by using the history of actions and observations as a proxy for state, typically processed by a recurrent neural network (RNN; Hausknecht & Stone, 2015): $Q^{tot}_{\theta}(\tau_t, u_t) \approx Q^{tot}(s_t, u_t)$, where the trajectory (i.e., action-observation history) is $\tau^a_t := (o^a_0, u^a_0, \ldots, o^a_t)$ and $\tau_t := \{\tau^a_t\}_{a \in A}$. Work in deep reinforcement learning (Mnih et al., 2015) has popularized the use of neural networks as function approximators for learning Q-functions that are trained by minimizing the loss function:

$$\mathcal{L}(\theta) := \mathbb{E}\Big[\big(\underbrace{r_t + \gamma\, Q^{tot}_{\bar\theta}(\tau_{t+1}, \arg\max Q^{tot}_{\theta}(\tau_{t+1}, \cdot))}_{y^{tot}_t} - Q^{tot}_{\theta}(\tau_t, u_t)\big)^2 \,\Big|\, (\tau_t, u_t, r_t, \tau_{t+1}) \sim D\Big], \quad (2)$$

where $\bar\theta$ are the parameters of a target network that is copied from $\theta$ periodically to improve stability (Mnih et al., 2015) and $D$ is a replay buffer (Lin, 1992) that stores transitions collected by an exploratory policy (typically ε-greedy). Double deep Q-learning (van Hasselt et al., 2016) mitigates overestimation of the learned values by using actions that maximize $Q^{tot}_{\theta}$ for the target network $Q^{tot}_{\bar\theta}$. Value Function Factorization. Centralized training for decentralized execution (CTDE) has been a major focus in recent efforts in deep multi-agent RL (Lowe et al., 2017; Foerster et al., 2018; Sunehag et al., 2018; Rashid et al., 2018; Iqbal & Sha, 2019). Some work achieves CTDE by introducing methods for factoring Q-functions into monotonic combinations of per-agent utilities, each depending only on a single agent's history of actions and observations, $Q^a(\tau^a, u^a)$.
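The loss in Eq. (2), including the double-Q target selection, can be sketched in a few lines. Here `q_net` and `target_net` are placeholder callables (our names, not the paper's) that map a trajectory to a list of action values:

```python
def td_loss(q_net, target_net, batch, gamma=0.99):
    """Mean squared TD error with double deep Q-learning targets: the
    greedy action is chosen by q_net (theta), but its value comes from
    target_net (theta-bar)."""
    total = 0.0
    for tau_t, u_t, r_t, tau_tp1 in batch:
        next_q = q_net(tau_tp1)
        a_star = max(range(len(next_q)), key=next_q.__getitem__)  # argmax_u Q_theta
        y = r_t + gamma * target_net(tau_tp1)[a_star]             # y_t^tot
        total += (y - q_net(tau_t)[u_t]) ** 2
    return total / len(batch)
```

In practice the target network's parameters are a stale copy of `q_net`'s, refreshed periodically as described above; this sketch simply takes the two networks as given.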
This factorization allows agents to independently maximize their local utility functions in a decentralized manner, with their selected actions combining to form the optimal joint action. This factored representation can only represent a limited subset of all possible value functions (Böhmer et al., 2020); however, these methods tend to perform better empirically than those that learn unfactored joint action-value functions, most likely because they exploit independence properties among agents (Oliehoek et al., 2008). Sunehag et al. (2018) introduce value decomposition networks (VDN), which decompose the total Q-value as a sum of per-agent utilities: $Q^{tot}(\tau, u) := \sum_a Q^a(\tau^a, u^a)$. QMIX (Rashid et al., 2018) extends this approach to use a more expressive factorization. We describe QMIX and how we build our randomized factorization approach on top of it in Section 3.1. Attention Mechanisms for MARL. Attention models have recently generated intense interest due to their ability to incorporate information across large contexts, including in the MARL literature (Jiang & Lu, 2018; Iqbal & Sha, 2019; Long et al., 2020). Importantly for our purposes, they are able to process variable-sized sets of fixed-length vectors (in our case, entities). At the core of these models is a parameterized transformation known as multi-head attention (Vaswani et al., 2017). This transformation allows entities to selectively extract information from other entities based on their local context. We define $X$ as a matrix where each row corresponds to an entity (either its state representation or a transformed representation of it). The global state $s$ can be represented in matrix form as $X^E$, where $X_{e,*} = s^e$. Our models consist of entity-wise feedforward layers (denoted eFF(X)) and multi-head attention layers (denoted MHA(A, X, M)). Entity-wise feedforward layers apply an identical linear transformation to all input entities.
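The two factorizations can be contrasted in a few lines. The QMIX-style mixer below is a deliberately minimal single-layer stand-in (the real mixer is a state-conditioned two-layer hypernetwork that produces the non-negative weights):

```python
def vdn_qtot(utilities):
    """VDN: Q^tot(tau, u) = sum_a Q^a(tau^a, u^a)."""
    return sum(utilities)

def monotonic_mix(utilities, weights, bias):
    """QMIX-style monotonic combination: non-negative mixing weights ensure
    dQ^tot/dQ^a >= 0, so per-agent greedy action selection remains
    consistent with maximizing Q^tot."""
    assert all(w >= 0.0 for w in weights), "monotonicity requires w >= 0"
    return sum(w * q for w, q in zip(weights, utilities)) + bias
```

Because both combinations are monotonic in each utility, an agent raising its own Q^a can never lower Q^tot, which is what licenses decentralized greedy action selection.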
Multi-head attention layers serve as a mechanism to integrate information across entities. They take three arguments: the set of agents $A$ for which to compute output vectors, the matrix $X \in \mathbb{R}^{|E| \times d}$, where $d$ is the dimensionality of the input representations, and a mask $M \in \mathbb{R}^{|A| \times |E|}$. The layer outputs a matrix $H \in \mathbb{R}^{|A| \times h}$, where $h$ is the hidden dimension of the layer. The row $H_{a,*}$ corresponds to a weighted sum of linearly transformed representations from all entities selected by agent $a$. Importantly, if the entry of the mask $M_{a,e} = 0$, then entity $e$'s representation cannot be included in $H_{a,*}$. Masking serves two important purposes for us: 1) it enables decentralized execution by providing the mask $M^\mu_{a,e} = \mu(s^a, s^e)$, such that agents can only see entities observable by them in the environment, and 2) it enables us to "imagine" the returns among sub-groups of entities. We integrate entity-wise feedforward layers and multi-head attention into QMIX in order to adapt it to settings where the number of agents and entities is variable, and build our approach from there. The exact process of computing attention layers, as well as the specifics of our attention-augmented version of QMIX, are described in detail in the Appendix. 3 RANDOMIZED ENTITY-WISE FACTORIZATION FOR IMAGINED LEARNING We now introduce our method, Randomized Entity-wise Factorization for Imagined Learning (REFIL). As discussed in Section 2, value function factorization approaches for cooperative deep MARL are motivated by their ability to exploit independence between agents while enabling decentralized execution with centralized training. We note that an agent's choice of optimal actions is often independent of a subset of its observed entities (cf. the soccer breakaway example from Section 1), in addition to the choice of other agents' actions.
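A single-head version of the masked attention just described might look as follows. This is a sketch with illustrative parameter names, not the paper's implementation: the paper uses multi-head attention and computes outputs only for the rows corresponding to agents in A, while here every row of X gets an output.

```python
import numpy as np

def masked_attention(X, M, Wq, Wk, Wv):
    """One attention head over the entity rows of X.  Entries with
    M[a, e] = 0 receive a large negative score, so entity e contributes
    (numerically) nothing to row a of the output -- the same mechanism
    enforces both the observability mask and the imagined sub-group masks."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = (Q @ K.T) / np.sqrt(K.shape[1])
    scores = np.where(M > 0, scores, -1e9)          # forbid masked entities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V
```

Subtracting the row maximum before exponentiating is the usual numerically stable softmax; masked entries end up with weight zero (up to floating-point underflow).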
Furthermore , we conjecture that agents robust to irrelevant entities should be more effective in dynamic environments with variable numbers of agents , as they are better able to identify shared patterns of behavior ( e.g. , breakaways exist in all forms of soccer ) . We do not know a priori which entities an agent can disregard , so we must consider all possible sub-groups of entities . As such , we propose to factor value functions by imagining returns in random sub-groups .
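As a concrete sketch of this idea (the function name, the 50/50 split, and the exact mask conventions are our illustrative choices, not the paper's): randomly partition the entities into two groups and build complementary attention masks, one confining each agent's attention to its own imagined group and one covering the remaining entities.

```python
import random

def imagined_group_masks(n_entities, observability):
    """Randomly partition entities into two disjoint groups and build two
    attention masks per agent: `within` keeps only observable entities in
    the agent's own imagined group, `without` keeps the agent itself plus
    the observable entities outside its group.  `observability[a][e]`
    plays the role of the environment mask mu(s^a, s^e)."""
    in_group = [random.random() < 0.5 for _ in range(n_entities)]
    within, without = [], []
    for a in range(len(observability)):
        within.append([bool(observability[a][e]) and in_group[a] == in_group[e]
                       for e in range(n_entities)])
        without.append([bool(observability[a][e]) and (in_group[a] != in_group[e] or e == a)
                        for e in range(n_entities)])
    return within, without
```

Utilities computed under `within` ignore all out-of-group context, while the `without` factors estimate the effect of the remaining entities, mirroring the two kinds of factors the mixing network combines.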
This paper proposes to incorporate a masked attention mechanism in QMIX for value function factorization, to disentangle value predictions from irrelevant agents/entities. The masking is based on random sampling from the whole set of agents to form random subsets, from which it can compute within-group and without-group Q-functions. The method is able to handle varying types and numbers of agents. The paper conducts experiments on a simple game to understand the effect, and then tests on 3 SMAC games, which shows the effectiveness of the proposed REFIL method.
RSO: A Gradient Free Sampling Based Approach For Training Deep Neural Networks
1 INTRODUCTION. Deep neural networks solve a variety of problems using multiple layers to progressively extract higher-level features from the raw input. The commonly adopted method to train deep neural networks is backpropagation (Rumelhart et al. (1985)), and it has been around for the past 35 years. Backpropagation assumes that the function is differentiable and leverages the partial derivative with respect to the weight $w_i$ for minimizing the function $f(x, w)$ as follows: $w_{i+1} = w_i - \eta \frac{\partial f(x, w)}{\partial w_i}$, where $\eta$ is the learning rate. The method is also efficient, as it makes a single functional estimate to update all the weights of the network. However, the partial derivative for some weight $w_j$, where $j \neq i$, would change once $w_i$ is updated, yet this change is not factored into the weight update rule for $w_j$. Moreover, it may not even be optimal for all weights to move in the same direction as obtained from the gradients in the previous layer. Although deep neural networks are non-convex (and the weight update rule measures approximate gradients), this update rule works surprisingly well in practice. To explain the above observation, recent literature (Du et al. (2019); Li & Liang (2018)) argues that because the network is over-parametrized, the initial set of weights is very close to the final solution, and even a little bit of nudging using gradient descent around the initialization point leads to a very good solution. We take this argument to another extreme: instead of using gradient-based optimizers, which provide strong direction and magnitude signals for updating the weights, we explore the region around the initialization point by sampling weight changes to minimize the objective function. Formally, our weight update rule is

$$w_{i+1} = \begin{cases} w_i, & f(x, w_i) \le f(x, w_i + \Delta w_i) \\ w_i + \Delta w_i, & f(x, w_i) > f(x, w_i + \Delta w_i), \end{cases}$$

where $\Delta w_i$ is the weight change hypothesis.
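The update rule above, restricted to a single weight, is only a handful of lines. This is a hedged sketch with our own names: `loss` stands in for evaluating f(x, ·) on a mini-batch, and the two-sided ±Δw trial described later in Section 3 is omitted here.

```python
import random

def rso_try_weight(w, i, sigma, loss):
    """Sample a Gaussian perturbation of weight i and keep it only if it
    strictly lowers the loss; otherwise retain the original weight vector."""
    dw = random.gauss(0.0, sigma)
    candidate = list(w)
    candidate[i] += dw
    return candidate if loss(candidate) < loss(w) else w
```

Since the original w is kept whenever the trial does not improve, the evaluated loss is non-increasing across trials.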
Here, we explicitly test the region around the initial set of weights by computing the function and update a weight only if doing so minimizes the loss; see Fig. 1. Surprisingly, our experiments demonstrate that the above update rule requires fewer weight updates than backpropagation to find good minimizers for deep neural networks, strongly suggesting that just exploring regions around randomly initialized networks is sufficient, even without explicit gradient computation. We evaluate this weight update scheme (called RSO; random search optimization) on classification datasets like MNIST and CIFAR-10 with deep convolutional neural networks (6-10 layers) and obtain competitive accuracy numbers. For example, RSO obtains 99.1% accuracy on MNIST and 81.8% accuracy on CIFAR-10 using just the random search optimization algorithm. We do not use any other optimizers for optimizing the final classification layer. Although RSO is computationally expensive (because it requires updates which are linear in the number of network parameters), our hope is that as we develop better intuition about the structural properties of deep neural networks, we will be able to accelerate RSO (using Hebbian principles, Gabor filters, depth-wise convolutions). If the number of trainable parameters is reduced drastically (Frankle et al. (2020)), search-based methods could be a viable alternative to backpropagation. Furthermore, since the architectural innovations of the past decade use backpropagation by default, a different optimization algorithm could potentially lead to a different class of architectures, because the minimizers of an objective function reached via different greedy optimizers could potentially be different. 2 RELATED WORK. Multiple optimization techniques have been proposed for training deep neural networks.
When gradient-based methods were believed to get stuck in local minima with random initialization, layer-wise training was popular for optimizing deep neural networks (Hinton et al. (2006); Bengio et al. (2007)) using contrastive methods (Hinton (2002)). In a similar spirit, recent work, Greedy InfoMax by Löwe et al. (2019), maximizes mutual information between adjacent layers instead of training a network end to end. Taylor et al. (2016) find the weights of each layer independently by solving a sequence of optimization problems which can be solved globally in closed form. Weight perturbation (Werfel et al. (2004)) based methods have been used for approximate gradient estimation in situations where gradient estimation is expensive. However, these training methods do not generalize to deep neural networks with more than 2-3 layers, and it has not been shown that their performance increases as the network is made deeper. Hence, backpropagation with SGD or other gradient-based optimizers (Duchi et al. (2011); Sutskever et al. (2013); Kingma & Ba (2014)) is commonly used for optimizing deep neural networks. Recently, multiple works have proposed that because these networks are heavily over-parametrized, the initial set of random filters is already close to the final solution and gradient-based optimizers only nudge the parameters to obtain the final solution (Du et al. (2019); Li & Liang (2018)). For example, only training batch-norm parameters while keeping the random filters fixed can obtain very good results with heavily parametrized very deep neural networks (>800 layers), as shown in Frankle et al. (2020). It was also shown by Ramanujan et al. (2020) that networks can be trained by just masking out some weights without modifying the original set of weights, although one can argue that masking is a very powerful operator and can be used to represent an exponential number of output spaces.
The network pruning literature covers more on optimizing subsets of an over-parametrized randomly initialized neural network ( Frankle & Carbin ( 2019 ) ; Li et al . ( 2017 ) ) . Our method , RSO , is also based on the hypothesis that the initial set of weights is close to the final solution . Here we show that gradient based optimizers may not even be necessary for training deep networks and when starting from randomly initialized weights , even search based algorithms can be a feasible option . Recently , search based algorithms have gained traction in the deep learning community . Since the design space of network architectures is huge , search based techniques are used to explore the placement of different neural modules to find better design spaces which lead to better accuracy ( Zoph & Le ( 2016 ) ; Liu et al . ( 2018 ) ) . This is done at a block level and each network is still trained with gradient descent . Similar to NAS based methods , weight agnostic neural networks ( WANN ) ( Gaier & Ha ( 2019 ) ) also searches for architectures , but uses a fixed set of weight values [ −2 , −1 , −0.5 , +0.5 , +1 , +2 ] . WANN operates at a much finer granularity as compared to NAS based methods while searching for connections between neurons and does not use gradient descent for optimization . Algorithms like Deep Neuroevolution by Such et al . ( 2017 ) and evolution strategies ( ES ) by Salimans et al . ( 2017 ) are search based optimization algorithms which have been used for training neural networks for reinforcement learning . ES is comprehensively reviewed by Beyer & Schwefel ( 2004 ) . Both Deep Neuroevolution and Salimans et al . ( 2017 ) create multiple replicas ( children ) of an initial neural network by adding small perturbations to all the weight parameters and then update the parameters by either selecting the best candidate or by performing a weighted average based on the reward . Both the methods update all the parameters of the network in each update . 
The problem with changing all the weights is that updating all the parameters of the network at once leads to random directions which are unlikely to contain a direction that minimizes the objective function, and this slows down learning (results shown in Section 4.5). Also, these methods were only trained on networks with 2-3 hidden layers, which is fairly shallow when compared to modern deep architectures. 3 APPROACH. Consider a deep neural network with $D$ layers, where the weights of a layer $d$ with $n_d$ neurons are represented by $W_d = \{w_1, w_2, \ldots, w_{i_d}, \ldots, w_{n_d}\}$. For an input activation $A_{d-1} = \{a_1, a_2, \ldots, a_{n_{d-1}}\}$, $W_d$ generates an activation $A_d = \{a_1, a_2, \ldots, a_{n_d}\}$; see Fig. 2. Each weight tensor $w_{i_d} \in W_d$ generates an activation $a_i \in A_d$, where $a_i$ can be a scalar or a tensor depending on whether the layer is fully connected, convolutional, recurrent, batch-norm, etc. The objective of the training process is to find the best set of weights $W$ which minimize a loss function $F(X, L; W)$ given some input data $X$ and labels $L$. To this end, we initialize the weights of the network with a Gaussian distribution $N(0, \sqrt{2/|w_{i_d}|})$, as in He et al. (2015). The input data is also normalized to have zero mean and unit standard deviation. Once the weights of all layers are initialized, we compute the standard deviation $\sigma_d$ of all elements in the weight tensor $W_d$. In the weight update step for a weight $w_j \in w_{i_d}$, $\Delta w_j$ is sampled from $N(0, \sigma_d)$. We call this $\Delta W_j$, which is zero for all weights of the network except $w_j$, where $j \in i_d$. For a randomly sampled mini-batch $(x, l) \in (X, L)$, we compute the loss $F(x, l; W)$ for $W$, $W + \Delta W_j$ and $W - \Delta W_j$. If adding or subtracting $\Delta W_j$ reduces $F$, $W$ is updated; otherwise the original weight is retained. This process is repeated for all the weights in the network, i.e.
, to update all the weights of the network once, $F$ needs to be computed three times per weight parameter, i.e., $3|W|$ function evaluations. We first update the weights of the layer closest to the labels and then sequentially move closer to the input. This is typically faster than optimizing the other way, but both orderings lead to good results. This algorithm is described in Algorithm 1. In line 12 of Algorithm 1, we sample the change in weights from a Gaussian distribution whose standard deviation is the same as the standard deviation of the layer. This ensures that the change in weights stays within a small range. The Gaussian sampling can also be replaced with other distributions, like uniform sampling from $(-2\sigma_d, 2\sigma_d)$ or just sampling values from a template like $(-\sigma_d, 0, \sigma_d)$, and these would also be effective in practice. The opposite direction of a randomly sampled weight change is also tested because it often leads to a better hypothesis when one direction does not decrease the loss. However, in quite a few cases (close to 10% as per our experiments), not changing the weight at all is better. Note that there is no concept of a learning rate in this algorithm. We also do not normalize the loss if the batch size increases or decreases, as the weight update step is independent of the magnitude of the loss. There is a widespread belief in the literature that randomly initialized deep neural networks are already close to the final solution (Du et al. (2019); Li & Liang (2018)). Hence, we use this prior and explore regions using bounded step sizes ($N(0, \sigma_d)$) in a single dimension at a time. We chose to update one weight at a time instead of sampling all the weights of the network, as the latter would require an exponential number of samples to estimate their joint distribution. RSO could be made significantly faster if prior knowledge about the distribution of the weights of individual neurons were used.
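Putting the pieces together, one full pass of the procedure can be condensed as follows. This is our rendering of the idea behind Algorithm 1 under simplifying assumptions: weights are flat per-layer lists rather than tensors, and `loss` stands in for F on a freshly sampled mini-batch.

```python
import random

def rso_sweep(W, sigmas, loss):
    """One RSO epoch: for each weight, evaluate the loss at w, w + dw and
    w - dw, and keep the best of the three.  Layers are swept from the one
    closest to the labels back toward the input."""
    def with_delta(d, j, delta):
        # Temporarily apply the candidate change and measure the loss.
        W[d][j] += delta
        value = loss(W)
        W[d][j] -= delta
        return value

    for d in reversed(range(len(W))):
        for j in range(len(W[d])):
            dw = random.gauss(0.0, sigmas[d])
            best = min((0.0, dw, -dw), key=lambda delta: with_delta(d, j, delta))
            W[d][j] += best
    return W
```

Because 0.0 (no change) is always among the candidates, each of the 3|W| function evaluations per epoch can only leave the loss on the evaluated batch unchanged or lower it.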
The paper proposes RSO (random search optimization), a gradient-free method for training deep neural networks based on a Markov chain Monte Carlo-style local search. In particular, it adds a perturbation to a weight in a deep neural network and tests whether it reduces the loss on a mini-batch: the weight is updated if the loss decreases, and retained otherwise.
RSO: A Gradient Free Sampling Based Approach For Training Deep Neural Networks
1 INTRODUCTION . Deep neural networks solve a variety of problems using multiple layers to progressively extract higher level features from the raw input . The commonly adopted method to train deep neural networks is backpropagation ( Rumelhart et al . ( 1985 ) ) and it has been around for the past 35 years . Backpropagation assumes that the function is differentiable and leverages the partial derivative w.r.t the weight wi for minimizing the function f ( x , w ) as follows , wi+1 = wi − η∇f ( x , w ) ∆wi , where η is the learning rate . Also , the method is efficient as it makes a single functional estimate to update all the weights of the network . As in , the partial derivative for some weight wj , where j 6= i would change once wi is updated , still this change is not factored into the weight update rule for wj . Moreover , it may not even be optimal for all weights to move in the same direction as obtained from the gradients in the previous layer . Although deep neural networks are non-convex ( and the weight update rule measures approximate gradients ) , this update rule works surprisingly well in practice . To explain the above observation , recent literature ( Du et al . ( 2019 ) ; Li & Liang ( 2018 ) ) argues that because the network is over-parametrized , the initial set of weights are very close to the final solution and even a little bit of nudging using gradient descent around the initialization point leads to a very good solution . We take this argument to another extreme - instead of using gradient based optimizers - which provide strong direction and magnitude signals for updating the weights ; we explore the region around the initialization point by sampling weight changes to minimize the objective function . Formally , our weight update rule is wi+1 = { wi , f ( x , wi ) < = f ( x , wi + ∆wi ) wi + ∆wi , f ( x , wi ) > f ( x , wi + ∆wi ) , where ∆wi is the weight change hypothesis . 
Here , we explicitly test the region around the initial set of weights by computing the function and update a weight if it minimizes the loss , see Fig . 1 . Surprisingly , our experiments demonstrate that the above update rule requires fewer weight updates compared to backpropagation to find good minimizers for deep neural networks , strongly suggesting that just exploring regions around randomly initialized networks is sufficient , even without explicit gradient computation . We evaluate this weight update scheme ( called RSO ; random search optimization ) on classification datasets like MNIST and CIFAR-10 with deep convolutional neural networks ( 6-10 layers ) and obtain competitive accuracy numbers . For example , RSO obtains 99.1 % accuracy on MNIST and 81.8 % accuracy on CIFAR-10 using just the random search optimization algorithm . We do not use any other optimizers for optimizing the final classification layer . Although RSO is computationally expensive ( because it requires updates which are linear in the number of network parameters ) , our hope is that as we develop better intuition about structural properties of deep neural networks , we will be able to accelerate RSO ( using Hebbian principles , Gabor filters , depth-wise convolutions ) . If the number of trainable parameters are reduced drastically ( Frankle et al . ( 2020 ) ) , search based methods could be a viable alternative to back-propagation . Furthermore , since architectural innovations which have happened over the past decade use backpropagation by default , a different optimization algorithm could potentially lead to a different class of architectures , because minimizers of an objective function via different greedy optimizers could potentially be different . 2 RELATED WORK . Multiple optimization techniques have been proposed for training deep neural networks . 
When gradient based methods were believed to get stuck in local minima with random initialization , layer wise training was popular for optimizing deep neural networks ( Hinton et al . ( 2006 ) ; Bengio et al . ( 2007 ) ) using contrastive methods ( Hinton ( 2002 ) ) . In a similar spirit , recent work , Greedy InfoMax by Löwe et al . ( 2019 ) maximizes mutual information between adjacent layers instead of training a network end to end . Taylor et al . ( 2016 ) finds the weights of each layer independently by solving a sequence of optimization problems which can be solved globally in closed form . Weight perturbation ( Werfel et al . ( 2004 ) ) based methods have been used for approximate gradients estimation in situations where gradient estimation is expensive . However , these training methods do not generalize to deep neural networks which have more than 2-3 layers and its not shown that the performance increases as we make the network deeper . Hence , back-propagation with SGD or other gradient based optimizers ( Duchi et al . ( 2011 ) ; Sutskever et al . ( 2013 ) ; Kingma & Ba ( 2014 ) ) are commonly used for optimizing deep neural networks . Recently , multiple works have proposed that because these networks are heavily over-parametrized , the initial set of random filters is already close to the final solution and gradient based optimizers only nudge the parameters to obtain the final solution ( Du et al . ( 2019 ) ; Li & Liang ( 2018 ) ) . For example , only training batch-norm parameters and keeping the random filters fixed can obtain very good results with heavily parametrized very deep neural networks ( > 800 layers ) as shown in Frankle et al . ( 2020 ) . It was also shown that networks can be trained by just masking out some weights without modifying the original set of weights by Ramanujan et al . ( 2020 ) - although one can argue that masking is a very powerful operator and can be used to represent an exponential number of output spaces . 
The network pruning literature covers more on optimizing subsets of an over-parametrized randomly initialized neural network ( Frankle & Carbin ( 2019 ) ; Li et al . ( 2017 ) ) . Our method , RSO , is also based on the hypothesis that the initial set of weights is close to the final solution . Here we show that gradient based optimizers may not even be necessary for training deep networks and when starting from randomly initialized weights , even search based algorithms can be a feasible option . Recently , search based algorithms have gained traction in the deep learning community . Since the design space of network architectures is huge , search based techniques are used to explore the placement of different neural modules to find better design spaces which lead to better accuracy ( Zoph & Le ( 2016 ) ; Liu et al . ( 2018 ) ) . This is done at a block level and each network is still trained with gradient descent . Similar to NAS based methods , weight agnostic neural networks ( WANN ) ( Gaier & Ha ( 2019 ) ) also searches for architectures , but uses a fixed set of weight values [ −2 , −1 , −0.5 , +0.5 , +1 , +2 ] . WANN operates at a much finer granularity as compared to NAS based methods while searching for connections between neurons and does not use gradient descent for optimization . Algorithms like Deep Neuroevolution by Such et al . ( 2017 ) and evolution strategies ( ES ) by Salimans et al . ( 2017 ) are search based optimization algorithms which have been used for training neural networks for reinforcement learning . ES is comprehensively reviewed by Beyer & Schwefel ( 2004 ) . Both Deep Neuroevolution and Salimans et al . ( 2017 ) create multiple replicas ( children ) of an initial neural network by adding small perturbations to all the weight parameters and then update the parameters by either selecting the best candidate or by performing a weighted average based on the reward . Both the methods update all the parameters of the network in each update . 
The problem with changing all the weights is that updating all the parameters of the network at once leads to random directions which are unlikely to contain a direction which will minimize the objective function and slows down learning ( results shown in Section 4.5 ) . Also , these methods were only trained on networks with 2-3 hidden layers , which is fairly shallow when compared to modern deep architectures . 3 APPROACH . Consider a deep neural network with D layers , where the weights of a layer d with nd neurons is represented byWd = { w1 , w2 .. , wid , .. wnd } . For an input activationAd−1 = { a1 , a2 , ... and−1 } , Wd generates an activation Ad = { a1 , a2 , ... and } , see Fig 2 . Each weight tensor wid ∈ Wd generates an activation ai ∈ Ad , where ai can be a scalar or a tensor depending on whether the layer is fully connected , convolutional , recurrent , batch-norm etc . The objective of the training process is to find the best set of weights W , which minimize a loss function F ( X , L ; W ) given some input data X and labels L. To this end , we initialize the weights of the network with a Gaussian distribution N ( 0 , √ 2/|wid | ) , like He et al . ( 2015 ) . The input data is also normalized to have zero mean and unit standard deviation . Once the weights of all layers are initialized , we compute the standard deviation σd of all elements in the weight tensor Wd . In the weight update step for a weight wj ∈ wid , ∆wj is sampled from ∼ N ( 0 , σd ) . We call this ∆Wj which is zero for all weights of the network but for wj , where j ∈ id . For a randomly sampled mini-batch ( x , l ) ∈ X , L , we compute the loss F ( x , l ; W ) for W , W + ∆Wj and W −∆Wj . If adding or subtracting ∆Wj reduces F , W is updated , otherwise the original weight is retained . This process is repeated for all the weights in the network , i.e. 
, to update every weight of the network once, F needs to be computed three times per weight parameter, i.e. 3|W| times in total. We first update the weights of the layer closest to the labels and then sequentially move towards the input. This is typically faster than optimizing in the other order, but both orderings lead to good results. The procedure is described in Algorithm 1. In line 12 of Algorithm 1, we sample the change in a weight from a Gaussian distribution whose standard deviation equals the standard deviation of that layer's weights. This ensures that the change in weights stays within a small range. The Gaussian sampling can also be replaced with other distributions, such as uniform sampling from (−2σ_d, 2σ_d) or sampling values from a template like (−σ_d, 0, σ_d); these are also effective in practice. The opposite direction of a randomly sampled perturbation is also tested because it often leads to a better hypothesis when the first direction does not decrease the loss. However, in quite a few cases (close to 10% in our experiments), not changing the weight at all is best. Note that there is no concept of a learning rate in this algorithm. We also do not normalize the loss when the batch size changes, as the weight update step is independent of the magnitude of the loss. There is a widespread belief in the literature that randomly initialized deep neural networks are already close to the final solution (Du et al., 2019; Li & Liang, 2018). Hence, we use this prior and explore regions using bounded step sizes (N(0, σ_d)) in a single dimension at a time. We chose to update one weight at a time instead of sampling all the weights of the network jointly, as the latter would require an exponential number of samples to estimate the joint distribution. RSO could be made significantly faster still if prior knowledge about the distribution of the weights of individual neurons were used.
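As a minimal illustration of the update rule just described, the sketch below applies RSO-style coordinate-wise random search to a toy least squares problem. The toy problem and all names here are hypothetical stand-ins for the paper's CNN experiments; only the update rule (try W, W + ∆W_j, W − ∆W_j, keep the best) follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem (hypothetical): fit y = X @ w_true with squared loss,
# using no gradients at all.
n, d = 100, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true

def loss(w):
    return np.mean((y - X @ w) ** 2)

# Gaussian initialization as in the text; sigma is the std of the
# initial weights and sets the perturbation scale N(0, sigma_d).
w = rng.normal(0.0, np.sqrt(2.0 / d), size=d)
sigma = w.std()
loss0 = loss(w)

for _ in range(50):              # sweeps over all weights
    for j in range(d):           # update one weight at a time
        dw = rng.normal(0.0, sigma)
        best = loss(w)
        for delta in (dw, -dw):  # try W + dW_j and W - dW_j
            w_try = w.copy()
            w_try[j] += delta
            if loss(w_try) < best:
                w, best = w_try, loss(w_try)

# A step is kept only if it reduces F, so the loss never increases.
assert loss(w) < loss0
```

Note that, as in the text, there is no learning rate anywhere: the step size is set entirely by the sampling distribution, and each weight update costs three loss evaluations.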
Instead of back-propagation, the authors consider a randomized search heuristic to train the parameters of neural networks. The proposed work is based on the hypothesis that the initial set of neural network weights is close to the final solution. The authors identify the problem that existing randomized search methods update all the parameters of the network in each update. Thus the proposed method updates only a single weight per iteration. Experimental results on MNIST and CIFAR10 show that the proposed method delivers competitive results.
SP:30fffab6fd645af0a4f8b9c96d5535f11469667e
When does preconditioning help or hurt generalization?
1 INTRODUCTION . We study the generalization property of an estimator θ̂ obtained by minimizing the empirical risk (or the training error) L(f_θ) via a preconditioned gradient update with preconditioner P: θ_{t+1} = θ_t − η P(t) ∇_{θ_t} L(f_{θ_t}), t = 0, 1, . . . (1.1) Setting P = I recovers gradient descent (GD). Choices of P which exploit second-order information include the inverse Fisher information matrix, which gives natural gradient descent (NGD) (Amari, 1998); the inverse Hessian, which leads to Newton's method (LeCun et al., 2012); and diagonal matrices estimated from past gradients, which include various adaptive gradient methods (Duchi et al., 2011; Kingma & Ba, 2014). These preconditioners often alleviate the effect of pathological curvature and speed up optimization, but their impact on generalization has been under debate: Wilson et al. (2017) reported that in neural network optimization, adaptive or second-order methods generalize worse than gradient descent (GD), whereas other empirical studies showed that second-order methods achieve comparable, if not better, generalization (Xu et al., 2020). The generalization property of optimizers relates to the discussion of implicit bias (Gunasekar et al., 2018a), i.e. preconditioning may lead to a different converged solution (with potentially the same training loss), as illustrated in Figure 1. While many explanations have been proposed, our starting point is the well-known observation that GD often implicitly regularizes the parameter ℓ2 norm. For instance, in overparameterized least squares regression, GD and many first-order methods find the minimum ℓ2 norm solution from zero initialization (without explicit regularization), but preconditioned updates may not. (∗Alphabetical ordering. Correspondence to: Denny Wu (dennywu@cs.toronto.edu).) This being said, while the minimum ℓ2 norm solution can generalize well
in the overparameterized regime (Bartlett et al., 2019), it is unclear whether preconditioning leads to inferior solutions: even in the simple setting of overparameterized linear regression, a quantitative understanding of how preconditioning affects generalization is largely lacking. Motivated by the observations above, in Section 3 we start with overparameterized (unregularized) least squares regression and analyze the stationary solution (t → ∞) of update (1.1) under a time-invariant preconditioner. Extending previous analysis in the proportional limit (Hastie et al., 2019), we consider a more general random design setting and derive the exact population risk in its bias-variance decomposition. We characterize the optimal P within a general class of preconditioners for both the bias and the variance, and focus on the comparison between GD, for which P is the identity, and NGD, for which P is the inverse population Fisher information matrix¹. We find that the comparison of generalization is affected by the following factors: 1. Label Noise: Additive noise in the labels leads to the variance term in the risk. We prove that NGD achieves the optimal variance among a general class of preconditioned updates. 2. Model Misspecification: Under misspecification, there does not exist a perfect f_θ that recovers the true function (target). We argue that this factor acts like additional label noise, and thus NGD may also be beneficial when the model is misspecified. 3. Data-Signal Alignment: Alignment describes how the target signal distributes among the input features. We show that GD achieves lower bias when the signal is isotropic, whereas NGD is preferred under “misalignment”, i.e. when the target function concentrates on small feature directions.
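To make update (1.1) concrete, here is a small numerical sketch on an assumed toy quadratic (not from the paper), contrasting P = I with the curvature-aware choice P = inverse Hessian on an ill-conditioned problem:

```python
import numpy as np

# Toy ill-conditioned quadratic L(theta) = 0.5 theta^T H theta - b^T theta,
# minimized at theta* = H^{-1} b (all values hypothetical).
H = np.diag([100.0, 1.0])
b = np.array([1.0, 1.0])
theta_star = np.linalg.solve(H, b)

def run(P, eta, steps):
    """Iterate update (1.1): theta <- theta - eta * P * grad L(theta)."""
    theta = np.zeros(2)
    for _ in range(steps):
        theta = theta - eta * P @ (H @ theta - b)
    return theta

gd = run(np.eye(2), eta=0.01, steps=100)           # P = I: plain GD
newton = run(np.linalg.inv(H), eta=1.0, steps=1)   # P = H^{-1}: one step

# The preconditioned update removes the pathological curvature and
# lands on the minimizer in a single step, while GD is still far away.
assert np.linalg.norm(newton - theta_star) < 1e-10
assert np.linalg.norm(newton - theta_star) < np.linalg.norm(gd - theta_star)
```

Of course, both runs here converge to the same minimizer eventually; the paper's point is that in the overparameterized setting there are many minimizers, and the choice of P decides which one is reached.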
Beyond the decomposition of the stationary risk, our findings in Sections 4 and 5 are summarized as follows: • In Sections 4.1 and 4.2 we discuss how the bias-variance tradeoff can be realized by different choices of preconditioner P (e.g. interpolating between GD and NGD) or by early stopping. • In Section 4.3 we extend our analysis to regression in an RKHS and show that under early stopping, a preconditioned update interpolating between GD and NGD achieves the minimax optimal convergence rate in far fewer steps, and thus reduces the population risk faster than GD. • In Section 5 we empirically test how well our findings in the linear model carry over to neural networks: under a student-teacher setup, we compare the generalization of GD with preconditioned updates and illustrate the influence of all the aforementioned factors. The performance of neural networks under a variety of manipulations exhibits trends that align with our theoretical analysis. 2 BACKGROUND AND RELATED WORKS . Natural Gradient Descent . NGD is a second-order method originally proposed in Amari (1997). Consider a data distribution p(x) on the space X, a function f_θ : X → Z parameterized by θ, and a loss function L(X, f_θ) = (1/n) Σ_{i=1}^n l(y_i, f_θ(x_i)), where l : Y × Z → R. Also suppose a probability distribution p(y|z) = p(y|f_θ(x)) is defined on the space of labels. Then the natural gradient is defined as ∇̃_θ L(X, f_θ) = F^{−1} ∇_θ L(X, f_θ), where F = E[∇_θ log p(x, y|θ) ∇_θ log p(x, y|θ)^⊤] is the Fisher information matrix, or simply the (population) Fisher. Note that expectations in the Fisher are taken under the joint distribution of the model, p(x, y|θ) = p(x) p(y|f_θ(x)). In the literature, the Fisher is sometimes defined under the empirical data distribution {x_i}_{i=1}^n (Amari et al., 2000). We instead refer to this quantity as the sample Fisher, whose properties influence optimization and have been studied in Karakida et al.
(2018); Kunstner et al. (2019); Thomas et al. (2020). We remark that in linear and kernel regression under squared loss, sample Fisher-based updates give the same stationary solution as GD (see Section 3), whereas the population Fisher-based update may not. While the population Fisher is typically difficult to obtain, extra unlabeled data can be used in its estimation, which empirically improves generalization (Pascanu & Bengio, 2013). Moreover, under structural assumptions, parametric approaches to estimating F can be more sample-efficient (Martens & Grosse, 2015; Ollivier, 2015), thus closing the gap between the sample and population Fisher. ¹From now on we use NGD to denote the population Fisher-based update, and we write “sample NGD” when P is the inverse or pseudo-inverse of the sample Fisher; see Section 2 for discussion. When the per-instance loss is the negative log-probability of an exponential family, the sample Fisher coincides with the generalized Gauss-Newton matrix (Martens, 2014). In least squares regression, which is the focus of this work, this quantity also coincides with the Hessian. We thus take NGD as a representative example of a preconditioned update, and we expect our findings to also translate to other second-order methods (not including adaptive gradient methods) in regression problems. Analysis of Preconditioned Gradient Descent . While Wilson et al. (2017) outlined one example under fixed training data where GD generalizes better than adaptive methods, in the online learning setting, where optimization speed relates to generalization, several works have shown the advantage of preconditioning (Levy & Duchi, 2019; Zhang et al., 2019a). In addition, Zhang et al. (2019b); Cai et al. (2019) established convergence and generalization guarantees of sample Fisher-based updates for neural networks in the kernel regime.
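The claim that, for squared loss, the sample Fisher coincides with the Hessian can be checked numerically. The sketch below uses toy data and a unit-variance Gaussian likelihood (assumptions of this sketch, not a reproduction of the paper's setup), comparing the closed-form sample Fisher X^⊤X/n against a finite-difference Hessian of the loss:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 40, 6
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
theta = rng.standard_normal(d)

# Squared loss L(theta) = (1/2n) ||y - X theta||^2, i.e. negative
# log-likelihood of p(y|x, theta) = N(x^T theta, 1) up to constants.
# Per-sample score: grad_theta log p(y_i|x_i, theta) = (y_i - x_i^T theta) x_i,
# and under the model E_y[(y_i - x_i^T theta)^2] = 1, so the sample Fisher
# (empirical over x, expectation over y under the model) is (1/n) X^T X.
fisher_sample = X.T @ X / n

# Numerical Hessian of L via central finite differences of the gradient.
def grad(th):
    return X.T @ (X @ th - y) / n

eps = 1e-6
hess = np.column_stack([
    (grad(theta + eps * e) - grad(theta - eps * e)) / (2 * eps)
    for e in np.eye(d)
])

assert np.allclose(hess, fisher_sample, atol=1e-5)
```

Since the loss is quadratic, the Hessian is θ-independent, which is why a single evaluation point suffices here.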
Lastly, the generalization of different optimizers relates to the notion of “sharpness” (Keskar et al., 2016; Dinh et al., 2017), and it has been argued that second-order updates tend to find sharper minima (Wu et al., 2018). We note that two concurrent works also discussed the generalization performance of preconditioned updates. Wadia et al. (2020) connected second-order methods with data whitening in linear models, and qualitatively showed that whitening (and thus second-order updates) harms generalization in certain cases. Vaswani et al. (2020) analyzed the complexity of the maximum P-margin solution in linear classification problems. We emphasize that instead of upper bounding the risk (e.g. via Rademacher complexity), which may not determine the optimal P for the generalization error, we compute the exact risk for least squares regression, which allows us to precisely compare different preconditioners. 3 ASYMPTOTIC RISK OF RIDGELESS INTERPOLANTS . In this section we consider the following setup: given n training samples {x_i}_{i=1}^n labeled by a teacher model (target function) f* : R^d → R with additive noise, y_i = f*(x_i) + ε_i, we learn a linear student model f_θ by minimizing the squared loss L(X, f_θ) = Σ_{i=1}^n (y_i − x_i^⊤ θ)². We assume a random design: x_i = Σ_X^{1/2} z_i, where z_i ∈ R^d is an i.i.d. vector with zero mean, unit variance, and finite 12th moment, and ε is i.i.d. noise independent of z with mean 0 and variance σ². Our goal is to compute the population risk R(f) = E_x[(f*(x) − f(x))²] in the proportional asymptotic limit: • (A1) Overparameterized Proportional Limit: n, d → ∞, d/n → γ ∈ (1, ∞). (A1) entails that the number of features (or parameters) is larger than the number of samples, so there exist multiple empirical risk minimizers with potentially different generalization properties. Denote by X = [x_1^⊤, ...
, x_n^⊤]^⊤ ∈ R^{n×d} the data matrix and y ∈ R^n the corresponding label vector. We optimize the parameters θ via a preconditioned gradient flow with preconditioner P(t) ∈ R^{d×d}: ∂θ(t)/∂t = −P(t) ∂L(θ(t))/∂θ(t) = (1/n) P(t) X^⊤ (y − Xθ(t)), θ(0) = 0. (3.1) In this linear setup, many common choices of preconditioner do not change through time: under a Gaussian likelihood, the sample Fisher (and also the Hessian) corresponds to the sample covariance X^⊤X/n up to variance scaling, whereas the population Fisher corresponds to the population covariance F = Σ_X. We thus limit our analysis to a fixed preconditioner P(t) =: P. Write the parameters at time t under update (3.1) with fixed P as θ_P(t). For positive definite P, the stationary solution is given by θ̂_P := lim_{t→∞} θ_P(t) = P X^⊤ (X P X^⊤)^{−1} y. One may check that the discrete-time gradient descent update (with appropriate step size) and other variants that do not alter the span of the gradient (e.g. stochastic gradient or momentum) converge to the same solution as well. Intuitively speaking, if the data distribution (blue contour in Figure 2) is not isotropic, then certain directions will be more “important” than others. In this case uniform ℓ2 shrinkage (which GD implicitly provides) may not be the most desirable, and a P that takes the data geometry into account may lead to better generalization instead. This intuition will be made rigorous in this section. Remark. θ̂_P is the minimum ‖θ‖_{P^{−1}} norm interpolant: θ̂_P = argmin_θ ‖θ‖_{P^{−1}} s.t. Xθ = y, for positive definite P. For GD this translates to the parameter ℓ2 norm, whereas for NGD (P = F^{−1}), the implicit bias is the ‖θ‖_{Σ_X} norm. Since E[f(x)²] = ‖θ‖²_{Σ_X}, NGD finds the interpolating function with the smallest norm under the data distribution. We empirically observe this divide between small parameter norm and small function norm in neural networks as well (see Figure 1 and Appendix A.1).
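The stationary solution θ̂_P = P X^⊤(X P X^⊤)^{−1} y can be checked against a forward-Euler discretization of the gradient flow (3.1). In the sketch below the data and the positive definite preconditioner are arbitrary choices made for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 40                         # overparameterized regime, d > n
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# An arbitrary fixed positive definite preconditioner (illustration only).
A = rng.standard_normal((d, d))
P = A @ A.T / d + np.eye(d)

# Closed-form stationary solution: theta_hat_P = P X^T (X P X^T)^{-1} y.
theta_hat = P @ X.T @ np.linalg.solve(X @ P @ X.T, y)

# Forward-Euler discretization of the gradient flow (3.1), theta(0) = 0.
theta = np.zeros(d)
eta = 0.02
for _ in range(50000):
    theta = theta + (eta / n) * P @ X.T @ (y - X @ theta)

assert np.allclose(X @ theta_hat, y)             # stationary point interpolates
assert np.allclose(theta, theta_hat, atol=1e-6)  # the flow reaches it
```

The iterates stay in the range of P X^⊤ from the zero initialization, and θ̂_P is the unique interpolant in that subspace, which is why the discretized flow recovers exactly the closed form.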
We highlight the following choices of P and the corresponding stationary solution θ̂_P as t → ∞. • Identity: P = I_d recovers GD, which converges to the minimum ℓ2 norm interpolant (also true for momentum GD and SGD); we write this as θ̂_I := X^⊤(XX^⊤)^{−1} y and refer to it as the GD solution. • Population Fisher: P = F^{−1} = Σ_X^{−1} leads to the estimator θ̂_{F^{−1}}, which we refer to as the NGD solution. • Sample Fisher: since the sample Fisher is rank-deficient, we may either add a damping term, P = (X^⊤X + λI_d)^{−1}, or take the pseudo-inverse, P = (X^⊤X)^†. In both cases the gradient is still spanned by X, and thus the update finds the same minimum ℓ2 norm solution θ̂_I (also true for full-matrix Adagrad (Agarwal et al., 2018)), although the trajectory differs (see Figure 3). Remark. The above choices reveal a gap between sample- and population-based P: while the sample Fisher accelerates optimization (Zhang et al., 2019b), the following sections demonstrate generalization properties possessed only by the population Fisher. We compare the population risk of the GD solution θ̂_I and the NGD solution θ̂_{F^{−1}} via its bias-variance decomposition with respect to the label noise (Hastie et al., 2019), and discuss the two components separately: R(θ) = E_x[(f*(x) − ⟨x, E_ε[θ]⟩)²] + tr(Cov(θ) Σ_X), where the first term is the bias B(θ) and the second the variance V(θ). Note that the bias does not depend on the label noise ε, and the variance does not depend on the teacher model f*. Additionally, given that f* can be decomposed into a linear component on the features x and a residual, f*(x) = ⟨x, θ*⟩ + f*_c(x), we can separate the bias term into a well-specified component ‖θ* − Eθ‖²_{Σ_X}, which captures the difficulty in learning θ*, and a misspecified component, which corresponds to the error due to fitting f*_c (beyond what the student can represent).
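The contrast between the first two bullets can be sketched numerically (toy anisotropic data, with Σ_X assumed known for the NGD preconditioner): both stationary solutions interpolate the training data, but GD attains the smaller parameter ℓ2 norm while NGD attains the smaller function norm ‖θ‖_{Σ_X}.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 15, 30

# Anisotropic population covariance Sigma_X (diagonal for simplicity).
scales = np.linspace(0.2, 2.0, d)
Sigma = np.diag(scales ** 2)
Z = rng.standard_normal((n, d))
X = Z * scales                      # x_i = Sigma_X^{1/2} z_i
y = rng.standard_normal(n)

def interpolant(P):
    """Stationary solution theta_hat_P = P X^T (X P X^T)^{-1} y."""
    return P @ X.T @ np.linalg.solve(X @ P @ X.T, y)

theta_gd = interpolant(np.eye(d))              # GD solution theta_hat_I
theta_ngd = interpolant(np.linalg.inv(Sigma))  # NGD solution theta_hat_{F^-1}

# Both fit the training data exactly ...
assert np.allclose(X @ theta_gd, y)
assert np.allclose(X @ theta_ngd, y)

# ... but each is the minimum-norm interpolant in a different norm:
param_norm = lambda t: t @ t         # ||theta||_2^2, GD's implicit bias
func_norm = lambda t: t @ Sigma @ t  # ||theta||_{Sigma_X}^2 = E[f(x)^2]
assert param_norm(theta_gd) <= param_norm(theta_ngd)
assert func_norm(theta_ngd) <= func_norm(theta_gd)
```

This is exactly the divide the remark above describes: which of the two interpolants generalizes better is then decided by the noise, misspecification, and alignment factors analyzed in the paper.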
The authors theoretically study the prediction performance of preconditioned gradient descent/flow with linear models and squared loss, in the settings of least squares regression and non-parametric regression. For parametric least squares, the prediction performance of the limiting solution of preconditioned gradient flow (i.e. as time goes to infinity) is studied in an asymptotic regime where both the number of samples and the dimension go to infinity in proportion to one another. For non-parametric regression, source and capacity assumptions are leveraged to obtain finite-sample guarantees. Experiments are also conducted on neural networks in a student-teacher setup.
SP:4394b824230526bc513436acebdacb85254e7e81
The paper studies the effects of preconditioning on generalization in deep learning. Using a bias-variance decomposition of the expected risk, the paper determines the optimal preconditioning matrix $P$ for the bias and for the variance. Then the paper analyzes the generalization performance via three aspects: clean labels, a well-specified model, and an aligned signal. Finally, it extends the analysis to the reproducing kernel Hilbert space setting.
The Compact Support Neural Network
Neural networks are popular and useful in many fields, but they have the problem of giving high-confidence responses for examples that are far away from the training data. This makes neural networks very confident in their predictions even while making gross mistakes, limiting their reliability in safety-critical applications such as autonomous driving, space exploration, etc. In this paper, we present a neuron generalization that has the standard dot-product-based neuron and the RBF neuron as two extreme cases of a shape parameter. Using ReLU as the activation function, we obtain a novel neuron that has compact support, which means its output is zero outside a bounded domain. We show how to avoid difficulties in training a neural network with such neurons by starting with a trained standard neural network and gradually increasing the shape parameter to the desired value. Through experiments on standard benchmark datasets, we show the promise of the proposed approach: it can predict well on in-distribution samples while consistently detecting and assigning low confidence to out-of-distribution samples. 1 INTRODUCTION . Neural networks have proven to be extremely useful in all sorts of applications, including object detection, speech and handwriting recognition, medical imaging, etc. They have become the state of the art in these applications, and in some cases they even surpass human performance. However, neural networks have been observed to have a major disadvantage: they don't know when they don't know, i.e. they don't know when the input is far away from the type of data they have been trained on. Instead of saying “I don't know”, they give some output with high confidence (Goodfellow et al., 2015; Nguyen et al., 2015). An explanation of why this happens for ReLU-based networks has been given in Hein et al. (2019).
This issue is very important for safety-critical applications such as space exploration , autonomous driving , medical diagnosis , etc . In these cases it is important that the system know when the input data is outside its nominal range , to alert the human ( e.g . driver for autonomous driving or radiologist for medical diagnosis ) to take charge in such cases . In this paper we suspect that the root of this problem is actually the neuron design , and propose a different type of neuron to address what we think are its issues . The standard neuron can be written as f ( x ) = σ ( wTx + b ) , which can be regarded as a projection ( dot product ) x → wTx + b onto a direction w , followed by a nonlinearity σ ( · ) . In this design , the neuron has a large response for vectors x ∈ Rp that are in a half-space . This can be an advantage when training the NN since it creates high connectivity in the weight space and makes the neurons sensitive to far-away signals . However , it is a disadvantage when using the trained NN , since it can lead to the neurons unpredictably firing with high responses to far-away signals , which can result ( with some probability ) in high confidence responses of the whole network for examples that are far away from the training data . To address these problems , we use a type of radial basis function neuron ( Broomhead & Lowe , 1988 ) , f ( x ) = g ( ‖x − µ‖2 ) , which we modify to have a high response only for examples that are close to µ , and to have zero response at distance at least R from µ . Therefore the neuron has compact support , and the same applies to a layer formed entirely of such neurons . Using one such compact support layer before the output layer we can guarantee that the space where the NN has a non-zero response is bounded , obtaining a more reliable neural network . 
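The contrast between the two neuron types described above can be made concrete in a few lines of NumPy. The compact-support form relu(R² − ‖x − µ‖²) below is one simple choice satisfying the stated requirements (high response near µ, zero at distance at least R), not necessarily the paper's exact parameterization:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

w, b = np.array([1.0, 0.0]), 0.0   # standard neuron parameters
mu, R = np.array([0.0, 0.0]), 2.0  # RBF template and support radius (illustrative)

def standard_neuron(x):
    # f(x) = relu(w.x + b): large on an entire half-space
    return relu(w @ x + b)

def compact_rbf_neuron(x):
    # assumed form relu(R^2 - ||x - mu||^2): zero at distance >= R from mu
    return relu(R ** 2 - np.sum((x - mu) ** 2))

far = np.array([1000.0, 0.0])      # a point far from the training region
print(standard_neuron(far))        # fires strongly far away
print(compact_rbf_neuron(far))     # exactly zero outside the support
```

The standard neuron's response grows without bound along the direction w, while the compact-support neuron is identically zero outside a ball of radius R around its template.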
In this formulation , the parameter vector µ is directly comparable to the neuron inputs x , thus µ has a simple and direct interpretation as a “ template ” . A layer consisting of such neurons can be interpreted as a sparse coordinate system on the manifold containing the inputs of that layer . Because of the compact support , the loss function of such a compact support NN has many flat areas and it can be difficult to train it directly by backpropagation . However , we will show how to train such a NN , by starting with a trained regular NN and gradually bending the neuron decision boundaries to make them have smaller and smaller support . The contributions of this paper are the following : • We introduce a type of neuron formulation that generalizes the standard neuron and the RBF neuron as two extreme cases of a shape parameter . Moreover one can smoothly transition from a regular neuron to a RBF neuron by gradually changing this parameter . We introduce the RBF counterpart of a ReLU neuron and observe that it has compact support , i.e . its output is zero outside a bounded domain . • The above construction allows us to smoothly bend the decision boundary of a standard ReLU based neuron , obtaining a compact support neuron . We use this idea to train a compact support neural network ( CSNN ) starting from a pre-trained regular neural network . • We show through experiments on standard datasets that the proposed CSNN can achieve test errors comparable to regular CNNs , and at the same time it can detect and have low confidence on out-of-distribution data . 1.1 RELATED WORK . A common way to address the problem of high confidence predictions for out of distribution ( OOD ) examples is through ensembles ( Lakshminarayanan et al. , 2017 ) , where multiple neural networks are trained with different random initializations and their outputs are averaged in some way . 
The reason why ensemble methods have low confidence on OOD samples is that the high-confidence domain of each NN is random outside the training data , and the common high-confidence domain is therefore shrunk by the averaging process . This reasoning works well when the representation space ( the space of the NN before the output layer ) is high dimensional , but it fails when this space is low dimensional ( see van Amersfoort et al . ( 2020 ) for example ) . Another popular approach is adversarial training ( Madry et al. , 2018 ) , where the training set is augmented with adversarial examples generated by maximizing the loss starting from slightly perturbed examples . This method is modified in adversarial confidence enhanced training ( ACET ) ( Hein et al. , 2019 ) where the adversarial samples are added through a hybrid loss function . However , we believe that training with out of distribution samples could be a computationally expensive if not hopeless endeavor , since the instance space is extremely vast when it is high dimensional . Consequently , a finite number of training examples can only cover an insignificant part of it and no matter how many out-of-distribution examples are used , there always will be other parts of the instance space that have not been explored . Other methods include the estimation of the uncertainty using dropout ( Gal & Ghahramani , 2016 ) , softmax calibration ( Guo et al. , 2017 ) , and the detection of out-of-distribution inputs ( Hendrycks & Gimpel , 2017 ) . CutMix Yun et al . ( 2019 ) is a method to generate training samples with larger variability , which help improve generalization and OOD detection . All these methods are complementary to our approach and could be used together with our classifiers to improve accuracy and OOD detection . In Ren et al . 
( 2019 ) , two auto-regressive models are trained : one for the foreground in-distribution data and one for the background ; the likelihood ratio is then used to decide for each observation whether it is OOD or not . This is a generative model , while our model is discriminative . A number of works assume that the distance in the representation space ( the space of outputs of the last layer before the final classification layer ) is meaningful . They will be reviewed next . Recently , Jiang et al . ( 2018 ) proposed a trust score that measures the agreement between a given classifier and a modified version of a k-nearest neighbor classifier . While this approach does consider the distance of the test samples to the training set , it only does so to a certain extent since the k-NN does not have a concept of “ too far ” , and is also computationally expensive . A simple method based on the Mahalanobis distance is presented in Lee et al . ( 2018 ) . It assumes that the observations are normally distributed in the representation space , with a shared covariance matrix for all classes . While we also assume that the distance in the representation space is meaningful , we make a much weaker assumption : that the observations for each class are clustered in a number of clusters , not necessarily Gaussian . In our representation , each class is usually covered by more than one compact support neuron , and each neuron could be involved in multiple classes . Furthermore , the method in Lee et al . ( 2018 ) simply replaces the last layer of the NN with their Mahalanobis measure and makes no attempt to further train the new model , while we can train our layers together with the whole network . The Generalized ODIN ( Hsu et al. , 2020 ) decomposes the output prediction into the ratio of a class-specific function hi ( x ) and a common denominator g ( x ) , both defined over instances x from the representation space . 
Good results are obtained using hi based on the Euclidean distance or the cosine similarity . Again , this approach assumes that the observations are grouped in a single cluster for each class , which explains why it uses very deep models ( with 34-100 layers ) that are better able to obtain representations where this assumption is satisfied . Our method does not make the single cluster per class assumption , and can use deep or shallow models . The Deterministic Uncertainty Quantification ( DUQ ) ( van Amersfoort et al. , 2020 ) method uses an RBF network and a special gradient penalty to decrease the prediction confidence away from the training examples . The authors also propose a centroid updating scheme to handle the difficulties in training an RBF network . In contrast , our paper proposes a generalized neuron model that has the RBF neurons and the standard neurons as two extreme cases , and trains all models starting from a standard NN , where the local minima are better behaved . 2 THE COMPACT SUPPORT NEURAL NETWORK . The compact support neural network consists of a number of layers , where the last layer before the output layer contains only compact support neurons , which will be described next . The other layers could be regular neural network or convolutional neural network layers , or compact support layers . The final output layer is a regular linear layer without a bias term , so that it can output a vector of all zeros when appropriate .
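One plausible reading of the shape-parameter interpolation between the standard neuron and the compact-support RBF neuron can be sketched as follows. The specific convex combination below is an assumption for illustration, not the paper's exact formula:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def neuron(x, w, b, alpha, mu=None, R=1.0):
    """Generalized neuron: alpha = 0 recovers the standard ReLU neuron,
    alpha = 1 gives a compact-support RBF-style neuron (assumed interpolation)."""
    if mu is None:
        mu = np.zeros_like(w)
    return relu((1 - alpha) * (w @ x + b) + alpha * (R ** 2 - np.sum((x - mu) ** 2)))

w, b = np.array([1.0, 1.0]), 0.0
x_near = np.array([0.5, 0.5])
x_far = np.array([50.0, 50.0])

# Training schedule: start at alpha = 0 (a pre-trained standard network),
# then gradually increase alpha toward 1 to shrink the support.
for alpha in (0.0, 0.5, 1.0):
    print(alpha, neuron(x_near, w, b, alpha), neuron(x_far, w, b, alpha))
```

Note that for any alpha > 0 the negative quadratic term dominates far from µ, so the decision boundary is already bent into a bounded region; increasing alpha tightens it toward the radius-R ball.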
The paper presents an approach that performs better when out-of-distribution cases occur. It does so by giving neurons only compact support, so that an out-of-distribution (OOD) input is expected to fall outside that support and the output will therefore be zero. This is used to detect OOD examples. A shape parameter alpha in the algorithm determines the size of the support: when it is small the network acts very similarly to a regular network, and as it increases the support becomes more limited. Therefore, to make training stable, they start with a small alpha and increase it throughout the training.
SP:d5fc92a93b6eb506bfe8cd218f09cb28fdc29b75
The Compact Support Neural Network
Neural networks are popular and useful in many fields , but they have the problem of giving high confidence responses for examples that are away from the training data . This makes the neural networks very confident in their prediction while making gross mistakes , thus limiting their reliability for safety critical applications such as autonomous driving , space exploration , etc . In this paper , we present a neuron generalization that has the standard dot-product based neuron and the RBF neuron as two extreme cases of a shape parameter . Using ReLU as the activation function we obtain a novel neuron that has compact support , which means its output is zero outside a bounded domain . We show how to avoid difficulties in training a neural network with such neurons , by starting with a trained standard neural network and gradually increasing the shape parameter to the desired value . Through experiments on standard benchmark datasets , we show the promise of the proposed approach , in that it can have good prediction on in-distribution samples , while being able to consistently detect and have low confidence on out of distribution samples . 1 INTRODUCTION . Neural networks have been proven to be extremely useful in all sorts of applications , including object detection , speech and handwriting recognition , medical imaging , etc . They have become the state of the art in these applications , and in some cases they even surpass human performance . However , neural networks have been observed to have a major disadvantage : they don ’ t know when they don ’ t know , i.e . don ’ t know when the input is far away from the type of data they have been trained on . Instead of saying “ I don ’ t know ” , they give some output with high confidence ( Goodfellow et al. , 2015 ; Nguyen et al. , 2015 ) . An explanation of why this is happening for ReLU based networks has been given in Hein et al . ( 2019 ) . 
This issue is very important for safety-critical applications such as space exploration , autonomous driving , medical diagnosis , etc . In these cases it is important that the system know when the input data is outside its nominal range , to alert the human ( e.g . driver for autonomous driving or radiologist for medical diagnosis ) to take charge in such cases . In this paper we suspect that the root of this problem is actually the neuron design , and propose a different type of neuron to address what we think are its issues . The standard neuron can be written as f ( x ) = σ ( wTx + b ) , which can be regarded as a projection ( dot product ) x → wTx + b onto a direction w , followed by a nonlinearity σ ( · ) . In this design , the neuron has a large response for vectors x ∈ Rp that are in a half-space . This can be an advantage when training the NN since it creates high connectivity in the weight space and makes the neurons sensitive to far-away signals . However , it is a disadvantage when using the trained NN , since it can lead to the neurons unpredictably firing with high responses to far-away signals , which can result ( with some probability ) in high confidence responses of the whole network for examples that are far away from the training data . To address these problems , we use a type of radial basis function neuron ( Broomhead & Lowe , 1988 ) , f ( x ) = g ( ‖x − µ‖2 ) , which we modify to have a high response only for examples that are close to µ , and to have zero response at distance at least R from µ . Therefore the neuron has compact support , and the same applies to a layer formed entirely of such neurons . Using one such compact support layer before the output layer we can guarantee that the space where the NN has a non-zero response is bounded , obtaining a more reliable neural network . 
In this formulation , the parameter vector µ is directly comparable to the neuron inputs x , thus µ has a simple and direct interpretation as a “ template ” . A layer consisting of such neurons can be interpreted as a sparse coordinate system on the manifold containing the inputs of that layer . Because of the compact support , the loss function of such a compact support NN has many flat areas and it can be difficult to train it directly by backpropagation . However , we will show how to train such a NN , by starting with a trained regular NN and gradually bending the neuron decision boundaries to make them have smaller and smaller support . The contributions of this paper are the following : • We introduce a type of neuron formulation that generalizes the standard neuron and the RBF neuron as two extreme cases of a shape parameter . Moreover one can smoothly transition from a regular neuron to a RBF neuron by gradually changing this parameter . We introduce the RBF counterpart of a ReLU neuron and observe that it has compact support , i.e . its output is zero outside a bounded domain . • The above construction allows us to smoothly bend the decision boundary of a standard ReLU based neuron , obtaining a compact support neuron . We use this idea to train a compact support neural network ( CSNN ) starting from a pre-trained regular neural network . • We show through experiments on standard datasets that the proposed CSNN can achieve test errors comparable to regular CNNs , and at the same time it can detect and have low confidence on out-of-distribution data . 1.1 RELATED WORK . A common way to address the problem of high confidence predictions for out of distribution ( OOD ) examples is through ensembles ( Lakshminarayanan et al. , 2017 ) , where multiple neural networks are trained with different random initializations and their outputs are averaged in some way . 
The reason why ensemble methods have low confidence on OOD samples is that the high-confidence domain of each NN is random outside the training data , and the common high-confidence domain is therefore shrunk by the averaging process . This reasoning works well when the representation space ( the space of the NN before the output layer ) is high dimensional , but it fails when this space is low dimensional ( see van Amersfoort et al . ( 2020 ) for example ) . Another popular approach is adversarial training ( Madry et al. , 2018 ) , where the training set is augmented with adversarial examples generated by maximizing the loss starting from slightly perturbed examples . This method is modified in adversarial confidence enhanced training ( ACET ) ( Hein et al. , 2019 ) where the adversarial samples are added through a hybrid loss function . However , we believe that training with out of distribution samples could be a computationally expensive if not hopeless endeavor , since the instance space is extremely vast when it is high dimensional . Consequently , a finite number of training examples can only cover an insignificant part of it and no matter how many out-of-distribution examples are used , there always will be other parts of the instance space that have not been explored . Other methods include the estimation of the uncertainty using dropout ( Gal & Ghahramani , 2016 ) , softmax calibration ( Guo et al. , 2017 ) , and the detection of out-of-distribution inputs ( Hendrycks & Gimpel , 2017 ) . CutMix Yun et al . ( 2019 ) is a method to generate training samples with larger variability , which help improve generalization and OOD detection . All these methods are complementary to our approach and could be used together with our classifiers to improve accuracy and OOD detection . In Ren et al . 
( 2019 ) , two auto-regressive models are trained : one for the foreground in-distribution data and one for the background ; the likelihood ratio is then used to decide for each observation whether it is OOD or not . This is a generative model , while our model is discriminative . A number of works assume that the distance in the representation space ( the space of outputs of the last layer before the final classification layer ) is meaningful . They will be reviewed next . Recently , Jiang et al . ( 2018 ) proposed a trust score that measures the agreement between a given classifier and a modified version of a k-nearest neighbor classifier . While this approach does consider the distance of the test samples to the training set , it only does so to a certain extent since the k-NN does not have a concept of “ too far ” , and is also computationally expensive . A simple method based on the Mahalanobis distance is presented in Lee et al . ( 2018 ) . It assumes that the observations are normally distributed in the representation space , with a shared covariance matrix for all classes . While we also assume that the distance in the representation space is meaningful , we make a much weaker assumption : that the observations for each class are clustered in a number of clusters , not necessarily Gaussian . In our representation , each class is usually covered by more than one compact support neuron , and each neuron could be involved in multiple classes . Furthermore , the method in Lee et al . ( 2018 ) simply replaces the last layer of the NN with their Mahalanobis measure and makes no attempt to further train the new model , while we can train our layers together with the whole network . The Generalized ODIN ( Hsu et al. , 2020 ) decomposes the output prediction into the ratio of a class-specific function hi ( x ) and a common denominator g ( x ) , both defined over instances x from the representation space . 
Good results are obtained using hi based on the Euclidean distance or the cosine similarity . Again , this approach assumes that the observations are grouped in a single cluster for each class , which explains why it uses very deep models ( with 34-100 layers ) that are better able to obtain representations where this assumption is satisfied . Our method does not make the single cluster per class assumption , and can use deep or shallow models . The Deterministic Uncertainty Quantification ( DUQ ) ( van Amersfoort et al. , 2020 ) method uses an RBF network and a special gradient penalty to decrease the prediction confidence away from the training examples . The authors also propose a centroid updating scheme to handle the difficulties in training an RBF network . In contrast , our paper proposes a generalized neuron model that has the RBF neurons and the standard neurons as two extreme cases , and trains all models starting from a standard NN , where the local minima are better behaved . 2 THE COMPACT SUPPORT NEURAL NETWORK . The compact support neural network consists of a number of layers , where the last layer before the output layer contains only compact support neurons , which will be described next . The other layers could be regular neural network or convolutional neural network layers , or compact support layers . The final output layer is a regular linear layer without a bias term , so that it can output a vector of all zeros when appropriate .
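The architecture just described (a compact-support layer feeding a bias-free linear output layer) can be sketched in a few lines. All shapes, the radius, and the truncated-RBF form relu(R² − ‖x − µ‖²) are illustrative assumptions; the point is only that the network output is exactly the zero vector far from every template:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)
rng = np.random.default_rng(0)

M, d, n_classes = 8, 2, 3
mus = rng.normal(size=(M, d))        # one template mu per compact-support neuron
R = 1.5                              # support radius (illustrative)
V = rng.normal(size=(M, n_classes))  # final linear layer with no bias term

def csnn(x):
    # Compact-support layer: each unit is zero at distance >= R from its mu.
    h = relu(R ** 2 - np.sum((x - mus) ** 2, axis=1))
    return h @ V  # with no bias, the output is all zeros far from every mu

print(csnn(np.array([100.0, 100.0])))  # all-zero output for a far-away input
```

Because the last hidden layer has bounded support and the output layer has no bias, the set of inputs where the network responds at all is contained in the union of the M balls around the templates.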
The authors propose a new neural network unit and training algorithm in order to improve OOD detection. Units can smoothly be changed between standard (dot-product) and RBF type through a shape hyperparameter. During training, this hyperparameter is slowly moved in the direction of the RBF shape. Empirical comparisons on three OOD problems are presented, showing that the proposed approach is competitive.
SP:d5fc92a93b6eb506bfe8cd218f09cb28fdc29b75
Succinct Network Channel and Spatial Pruning via Discrete Variable QCQP
1 INTRODUCTION . Deep neural networks are the bedrock of artificial intelligence tasks such as object detection , speech recognition , and natural language processing ( Redmon & Farhadi , 2018 ; Chorowski et al. , 2015 ; Devlin et al. , 2019 ) . While modern networks have hundreds of millions to billions of parameters to train , it has been recently shown that these parameters are highly redundant and can be pruned without significant loss in accuracy ( Han et al. , 2015 ; Guo et al. , 2016 ) . This discovery has led practitioners to desire training and running the models on resource-constrained mobile devices , provoking a large body of research on network pruning . Unstructured pruning , however , does not directly lead to any practical acceleration or memory footprint reduction due to poor data locality ( Wen et al. , 2016 ) , and this motivated research on structured pruning to achieve practical usage under limited resource budgets . To this end , a line of research on channel pruning considers completely pruning the convolution filters along the input and output channel dimensions , where the resulting pruned model becomes a smaller dense network suited for practical acceleration and memory footprint reduction ( Li et al. , 2017 ; Luo et al. , 2017 ; He et al. , 2019 ; Wen et al. , 2016 ; He et al. , 2018a ) . However , existing channel pruning methods perform the pruning operations with a greedy approach and do not consider the inherent quadratic coupling between channels in the neighboring layers . Although these methods are easy to model and optimize , they cannot safely remove inactive weights during the pruning procedure , suffer from discrepancies with the true objective , and prohibit the strict satisfaction of the required resource constraints during the pruning process . 
The ability to specify hard target resource constraints into the pruning optimization process is important since this allows the user to run the pruning and optional finetuning process only once . When the pruning process ignores the target specifications , the users may need to apply multiple rounds of pruning and finetuning until the specifications are eventually met , resulting in an extra computation overhead ( Han et al. , 2015 ; He et al. , 2018a ; Liu et al. , 2017 ) . In this paper , we formulate a principled optimization problem that prunes the network layer channels while respecting the quadratic coupling and exactly satisfying the user-specified FLOPs and memory constraints . ( Note that we use W ( l ) · , j to denote the tensor W ( l ) · , j , · , · , following the indexing rules of NumPy ( Van Der Walt et al. , 2011 ) . ) This new formulation leads to an interesting discrete variable QCQP ( Quadratically Constrained Quadratic Program ) optimization problem , which directly maximizes the importance of neurons in the pruned network under the specified resource constraints . Also , we increase the pruning granularity beyond channels and jointly prune individual 2D convolution filters spatially for greater efficiency . Furthermore , we generalize our formulation to cover nonsequential convolution operations , such as skip connections , and propose a principled optimization framework for handling various architectural implementations of skip connections in ResNet ( He et al. , 2016 ) . Our experiments on CIFAR-10 and ImageNet datasets show state-of-the-art results compared to other channel pruning methods that start from pretrained networks . 2 MOTIVATION . In this section , we first discuss the motivation of our method concretely . 
Suppose the weights in a sequential CNN form a sequence of 4-D tensors $W^{(l)} \in \mathbb{R}^{C_{l-1} \times C_l \times K_l \times K_l}$ for all $l \in [L]$ , where $C_{l-1}$ , $C_l$ , and $K_l$ represent the number of input channels , the number of output channels , and the filter size of the $l$-th convolution weight tensor , respectively . We denote the feature map after the $l$-th convolution as $X^{(l)} \in \mathbb{R}^{C_l \times H_l \times W_l}$ . Concretely , $X^{(l)}_j = \sigma ( X^{(l-1)} \circledast W^{(l)}_{\cdot , j} ) = \sigma ( \sum_{i=1}^{C_{l-1}} X^{(l-1)}_i * W^{(l)}_{i , j} )$ , where $\sigma$ is the activation function , $*$ denotes the 2-D convolution operation , and $\circledast$ denotes the sum of channel-wise 2-D convolutions . Now consider pruning these weights in the channel-wise direction . We show that naive channel-wise pruning methods prevent exact specification of the target resource constraints due to unpruned inactive weights and deviate from the true objective by ignoring the quadratic coupling between channels in the neighboring layers . 2.1 INACTIVE WEIGHTS . According to Han et al . ( 2015 ) , network pruning produces dead neurons with zero input or output connections . These dead neurons cause inactive weights1 , which do not affect the final output activations of the pruned network . These inactive weights may not be excluded automatically through the standard pruning procedure and require additional post-processing which relies on ad-hoc heuristics . For example , Figure 1 shows a standard channel pruning procedure that deletes weights across the output channel direction but fails to prune the inactive weights . Concretely , deletion of the weights on the $j$-th output channel of the $l$-th convolution layer leads to $W^{(l)}_{\cdot , j} = 0_{C_{l-1} , K_l , K_l}$ . Then , $X^{(l)}_j$ becomes a dead neuron since $X^{(l)}_j = \sigma ( X^{(l-1)} \circledast W^{(l)}_{\cdot , j} ) = \sigma ( \sum_{i=1}^{C_{l-1}} X^{(l-1)}_i * W^{(l)}_{i , j} ) = 0_{H_l , W_l}$ . 1Rigorous mathematical definition of inactive weights is provided in Supplementary material D. 
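The dead-neuron argument above can be checked numerically. The sketch below uses 1×1 convolutions (plain channel mixing) as a stand-in for general K×K filters, which is enough to exhibit the effect; all shapes and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

C0, C1, C2, H, W = 3, 4, 2, 5, 5
X0 = rng.normal(size=(C0, H, W))
W1 = rng.normal(size=(C0, C1))  # 1x1 conv filters (in_channels, out_channels)
W2 = rng.normal(size=(C1, C2))

def forward(X0, W1, W2):
    X1 = relu(np.einsum('ihw,ij->jhw', X0, W1))  # X^(l)
    X2 = relu(np.einsum('jhw,jp->phw', X1, W2))  # X^(l+1)
    return X2

# Prune the j-th output channel of the first layer: X^(l)_j becomes a dead neuron.
j = 2
W1_pruned = W1.copy()
W1_pruned[:, j] = 0.0
out = forward(X0, W1_pruned, W2)

# W2[j, :] is now inactive: replacing it with arbitrary values changes nothing,
# yet a naive channel-pruning pass leaves these weights in the network.
W2_mutated = W2.copy()
W2_mutated[j, :] = 1e6
assert np.allclose(out, forward(X0, W1_pruned, W2_mutated))
```

The assertion holds because relu(0) = 0 makes the pruned channel's feature map identically zero, so the second layer's weights on that input channel never contribute to the output.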
The convolution operation on the dead neuron results in a trivially zero output , as below : $$X^{(l+1)}_p = \sigma \Big ( \sum_{i=1}^{C_l} X^{(l)}_i * W^{(l+1)}_{i , p} \Big ) = \sigma \Big ( \sum_{i=1}^{C_l} \mathbb{1}_{i \neq j} \, X^{(l)}_i * W^{(l+1)}_{i , p} + \underbrace{ \underbrace{X^{(l)}_j}_{\text{dead}} * \underbrace{W^{(l+1)}_{j , p}}_{\text{inactive}} }_{= \, 0_{H_{l+1} , W_{l+1}}} \Big ) . \qquad ( 1 )$$ Equation ( 1 ) shows that the dead neuron $X^{(l)}_j$ causes the weights $W^{(l+1)}_{j , p}$ , $\forall p \in [ C_{l+1} ]$ , to be inactive . Such inactive weights do not account for the actual resource usage , even when they remain in the pruned network , which prevents the exact modeling of the user-specified hard resource constraints ( FLOPs and network size ) . Furthermore , inactive weights left unpruned during the pruning procedure are a bigger problem for nonsequential convolutional networks due to their skip connections . To address this problem , we introduce a quadratic optimization-based algorithm that provably eliminates all the inactive weights during the pruning procedure . 2.2 QUADRATIC COUPLING . Existing channel pruning methods remove channels according to their importance . However , measuring a channel ’ s contribution to the network should also take into account the channels in the neighboring layers , as illustrated in Figure 2 . In the example , we define the importance of a channel as the absolute sum of weights in the channel , as in Li et al . ( 2017 ) , and assume the objective is to maximize the absolute sum of weights in the whole pruned network , excluding the inactive weights . We compare two different channel pruning methods : ( a ) a standard channel pruning method that greedily prunes each channel independently , and ( b ) our pruning method that considers the effect of the channels in neighboring layers when pruning . 
As a result of running each pruning algorithm , ( a ) will prune the second output channel of the first convolution and the third output channel of the second convolution , and ( b ) will prune the first output channel of the first convolution , the third output channel of the second convolution , and the first input channel of the second convolution . The objective values for each pruned network are ( a ) 18 and ( b ) 21 , respectively . This shows that the coupling effect of the channels in neighboring layers directly affects the objective values , and finally results in a performance gap between ( a ) and ( b ) . We call this relationship the quadratic coupling between the neighboring layers and formulate the contributions to the objective by quadratic terms of neighboring channel activations . To address this quadratic coupling , we propose a channel pruning method based on the QCQP framework with importance evaluation respecting both the input and the output channels . 3 METHOD . In this section , we first propose our discrete QCQP formulation of channel pruning for sequential convolutional neural networks ( CNNs ) . Then , we present an extended version of our formulation for joint channel and shape pruning of 2D convolution filters . The generalization to the nonsequential convolution ( skip addition and skip concatenation ) is introduced in Supplementary material A . 3.1 FORMULATION OF CHANNEL PRUNING FOR SEQUENTIAL CNNS . To capture the importance of the weights in $W^{(l)}$ , we define the importance tensor $I^{(l)} \in \mathbb{R}_{+}^{C_{l-1} \times C_l \times K_l \times K_l}$ . Following the protocol of Han et al . ( 2015 ) ; Guo et al . ( 2016 ) , we set $I^{(l)} = \gamma_l | W^{(l)} |$ , where $\gamma_l$ is the $\ell_2$ normalizing factor of the $l$-th layer , i.e . $\| \mathrm{vec} ( W^{(l)} ) \|^{-1}$ . Then , we define the binary pruning mask $A^{(l)} \in \{ 0 , 1 \}^{C_{l-1} \times C_l \times K_l \times K_l}$ . 
For channel pruning in sequential CNNs , we define the channel activation $r^{(l)} \in \{ 0 , 1 \}^{C_l}$ to indicate which indices of channels remain in the $l$-th layer of the pruned network . Then , the weights in $W^{(l)}_{i , j}$ are active if and only if $r^{(l-1)}_i r^{(l)}_j = 1$ , which leads to $A^{(l)}_{i , j} = r^{(l-1)}_i r^{(l)}_j J_{K_l}$ . For example , in Figure 2b , $r^{(l-1)} = [ 1 , 1 , 1 ]^\top$ , $r^{(l)} = [ 0 , 1 ]^\top$ , and $r^{(l+1)} = [ 1 , 1 , 0 ]^\top$ ; therefore , $$A^{(l)} = \begin{bmatrix} 0 & 1 \\ 0 & 1 \\ 0 & 1 \end{bmatrix} \otimes J_{K_l} \quad \text{and} \quad A^{(l+1)} = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 1 & 0 \end{bmatrix} \otimes J_{K_{l+1}} .$$ We wish to directly maximize the sum of the importance of active weights after the pruning procedure under given resource constraints : 1 ) FLOPs , 2 ) memory , and 3 ) network size . Concretely , our optimization problem is2 $$\begin{aligned} \underset{r^{(0:L)}}{\text{maximize}} \quad & \sum_{l=1}^{L} \langle I^{(l)} , A^{(l)} \rangle \qquad ( 2 ) \\ \text{subject to} \quad & \sum_{l=0}^{L} a_l \| r^{(l)} \|_1 + \sum_{l=1}^{L} b_l \| A^{(l)} \|_1 \le M \\ & A^{(l)} = r^{(l-1)} r^{(l) \top} \otimes J_{K_l} \quad \forall l \in [ L ] \\ & r^{(l)} \in \{ 0 , 1 \}^{C_l} . \end{aligned}$$ In our formulation , the actual resource usage of the pruned network is exactly computed by specifying the number of channels in the pruned network ( $= \| r^{(l)} \|_1$ ) and the pruning mask sparsity ( $= \| A^{(l)} \|_1$ ) in each layer . Concretely , the left hand side of the inequality in the first constraint in Equation ( 2 ) indicates the actual resource usage . Table 1 shows the $a_l$ , $b_l$ terms used for computing the usage of each resource . Note that this optimization problem is a discrete nonconvex QCQP in the channel activations $[ r^{(0)} , \dots , r^{(L)} ]$ , where the objective , which is the same as the objective in Section 2.2 , respects the quadratic coupling of the channel activations ( $= r^{(l)}$ ) . Please refer to Supplementary material E for the details on the standard QCQP form of Equation ( 2 ) .
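The construction of the masks A^(l) from the channel activations in the Figure 2b example can be reproduced in a few lines of NumPy. The filter size and the toy importance tensor below are made-up values for illustration:

```python
import numpy as np

# Channel activations from the Figure 2b example: r^(l-1), r^(l), r^(l+1).
r_prev = np.array([1, 1, 1])
r_l    = np.array([0, 1])
r_next = np.array([1, 1, 0])

K = 3                # assumed filter size K_l = K_{l+1} = 3
J = np.ones((K, K))  # all-ones K x K matrix J_K

# A^(l)_{i,j} = r^(l-1)_i r^(l)_j J_K, stored as a (C_{l-1}, C_l, K, K) tensor.
A_l    = np.outer(r_prev, r_l)[:, :, None, None] * J
A_next = np.outer(r_l, r_next)[:, :, None, None] * J

# Objective term <I^(l), A^(l)> for a toy nonnegative importance tensor I^(l).
I_l = np.abs(np.random.default_rng(0).normal(size=A_l.shape))
objective_term = np.sum(I_l * A_l)

# ||A^(l)||_1 counts the active weights, i.e. the layer's resource usage.
print(int(A_l.sum()), int(A_next.sum()))  # 27 and 18 active weights
```

Note how the masks couple quadratically: each entry of A^(l) is a product of one activation from layer l-1 and one from layer l, which is exactly what makes Equation (2) a discrete QCQP rather than a linear program.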
This paper introduces an optimization method for pruning channels in networks. The authors first motivate the proposed approach by showing that current pruning methods will result in "inactive weights" for the following layer. Then the authors introduce a QCQP optimization method that can constrain the exact amount of resources during the optimization process. Extensive experiments are conducted on different benchmarks with different backbones. The authors also perform spatial pruning to further reduce resource usage.
SP:3879a6f9b904429fa766fe496ca16cefecdc5d02
Succinct Network Channel and Spatial Pruning via Discrete Variable QCQP
1 INTRODUCTION . Deep neural networks are the bedrock of artificial intelligence tasks such as object detection, speech recognition, and natural language processing (Redmon & Farhadi, 2018; Chorowski et al., 2015; Devlin et al., 2019). While modern networks have hundreds of millions to billions of parameters to train, it has recently been shown that these parameters are highly redundant and can be pruned without significant loss in accuracy (Han et al., 2015; Guo et al., 2016). This discovery has spurred practitioners to train and run models on resource-constrained mobile devices, provoking a large body of research on network pruning. Unstructured pruning, however, does not directly lead to any practical acceleration or memory footprint reduction due to poor data locality (Wen et al., 2016), which motivated research on structured pruning to achieve practical usage under limited resource budgets. To this end, a line of research on channel pruning considers completely pruning the convolution filters along the input and output channel dimensions, where the resulting pruned model becomes a smaller dense network suited for practical acceleration and memory footprint reduction (Li et al., 2017; Luo et al., 2017; He et al., 2019; Wen et al., 2016; He et al., 2018a). However, existing channel pruning methods perform the pruning operations with a greedy approach and do not consider the inherent quadratic coupling between channels in neighboring layers. Although these methods are easy to model and optimize, they cannot safely remove inactive weights during the pruning procedure, suffer from discrepancies with the true objective, and prohibit strict satisfaction of the required resource constraints during the pruning process.
The ability to specify hard target resource constraints in the pruning optimization process is important because it allows the user to run the pruning and optional finetuning process only once. When the pruning process ignores the target specifications, the user may need to apply multiple rounds of pruning and finetuning until the specifications are eventually met, resulting in extra computation overhead (Han et al., 2015; He et al., 2018a; Liu et al., 2017). In this paper, we formulate a principled optimization problem that prunes the network layer channels while respecting the quadratic coupling and exactly satisfying the user-specified FLOPs and memory constraints. (Note that we use $W^{(l)}_{\cdot,j}$ to denote the tensor $W^{(l)}_{\cdot,j,\cdot,\cdot}$, following the indexing rules of NumPy (Van Der Walt et al., 2011).) This new formulation leads to an interesting discrete variable QCQP (Quadratically Constrained Quadratic Program) optimization problem, which directly maximizes the importance of neurons in the pruned network under the specified resource constraints. Also, we increase the pruning granularity beyond channels and jointly prune individual 2D convolution filters spatially for greater efficiency. Furthermore, we generalize our formulation to cover nonsequential convolution operations, such as skip connections, and propose a principled optimization framework for handling various architectural implementations of skip connections in ResNet (He et al., 2016). Our experiments on the CIFAR-10 and ImageNet datasets show state-of-the-art results compared to other channel pruning methods that start from pretrained networks. 2 MOTIVATION . In this section, we discuss the motivation for our method concretely.
Suppose the weights in a sequential CNN form a sequence of 4-D tensors, $W^{(l)} \in \mathbb{R}^{C_{l-1} \times C_l \times K_l \times K_l}\ \forall l \in [L]$, where $C_{l-1}$, $C_l$, and $K_l$ represent the number of input channels, the number of output channels, and the filter size of the $l$-th convolution weight tensor, respectively. We denote the feature map after the $l$-th convolution as $X^{(l)} \in \mathbb{R}^{C_l \times H_l \times W_l}$. Concretely, $X^{(l)}_j = \sigma(X^{(l-1)} \circledast W^{(l)}_{\cdot,j}) = \sigma\big(\sum_{i=1}^{C_{l-1}} X^{(l-1)}_i * W^{(l)}_{i,j}\big)$, where $\sigma$ is the activation function, $*$ denotes the 2-D convolution operation, and $\circledast$ denotes the sum of channel-wise 2-D convolutions. Now consider pruning these weights in the channel-wise direction. We show that naive channel-wise pruning methods prevent exact specification of the target resource constraints due to unpruned inactive weights, and deviate from the true objective by ignoring the quadratic coupling between channels in neighboring layers. 2.1 INACTIVE WEIGHTS . According to Han et al. (2015), network pruning produces dead neurons with zero input or output connections. These dead neurons cause inactive weights¹, which do not affect the final output activations of the pruned network. These inactive weights may not be excluded automatically through the standard pruning procedure and require additional post-processing that relies on ad-hoc heuristics. For example, Figure 1 shows a standard channel pruning procedure that deletes weights across the output channel direction but fails to prune the inactive weights. Concretely, deletion of the weights on the $j$-th output channel of the $l$-th convolution layer leads to $W^{(l)}_{\cdot,j} = 0_{C_{l-1}, K_l, K_l}$. Then, $X^{(l)}_j$ becomes a dead neuron since $X^{(l)}_j = \sigma(X^{(l-1)} \circledast W^{(l)}_{\cdot,j}) = \sigma\big(\sum_{i=1}^{C_{l-1}} X^{(l-1)}_i * W^{(l)}_{i,j}\big) = 0_{H_l, W_l}$. ¹A rigorous mathematical definition of inactive weights is provided in Supplementary material D.
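The dead-neuron effect described above, and the inactive weights it induces in the next layer, can be checked numerically. Below is a minimal NumPy sketch (all shapes, values, and helper names are illustrative, not the paper's code): zeroing the $j$-th output channel of one layer produces a dead feature map, after which the next layer's weights on input channel $j$ can be changed arbitrarily without affecting the output.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    # naive "valid" 2-D convolution of a single channel with a single filter
    H, W = x.shape
    K = w.shape[0]
    out = np.zeros((H - K + 1, W - K + 1))
    for a in range(out.shape[0]):
        for b in range(out.shape[1]):
            out[a, b] = np.sum(x[a:a + K, b:b + K] * w)
    return out

def conv_layer(X, W):
    # X: (C_in, H, W), W: (C_in, C_out, K, K); ReLU activation
    C_in, C_out = W.shape[:2]
    out = [sum(conv2d(X[i], W[i, j]) for i in range(C_in)) for j in range(C_out)]
    return np.maximum(np.stack(out), 0.0)

X0 = rng.normal(size=(3, 8, 8))           # C0 = 3 input channels
W1 = rng.normal(size=(3, 2, 3, 3))        # layer l:   C0 = 3 -> C1 = 2
W2 = rng.normal(size=(2, 4, 3, 3))        # layer l+1: C1 = 2 -> C2 = 4

j = 1                                     # prune output channel j of layer l
W1[:, j] = 0.0                            # W_{.,j} = 0  =>  X1[j] is dead
X1 = conv_layer(X0, W1)
assert np.all(X1[j] == 0.0)               # dead neuron

out_a = conv_layer(X1, W2)
W2_mod = W2.copy()
W2_mod[j] = rng.normal(size=W2[j].shape)  # overwrite the inactive weights
out_b = conv_layer(X1, W2_mod)
assert np.allclose(out_a, out_b)          # output unchanged: W2[j,:] is inactive
```

Even though `W2[j]` remains stored in the "pruned" network, it contributes nothing, which is exactly why naive channel pruning miscounts the true resource usage.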
The convolution operation on the dead neuron results in a trivially zero output, as below:
$$X^{(l+1)}_p = \sigma\Big(\sum_{i=1}^{C_l} X^{(l)}_i * W^{(l+1)}_{i,p}\Big) = \sigma\Big(\sum_{i=1}^{C_l} \mathbb{1}_{i \neq j}\, X^{(l)}_i * W^{(l+1)}_{i,p} + \underbrace{\underbrace{X^{(l)}_j}_{\text{dead}} * \underbrace{W^{(l+1)}_{j,p}}_{\text{inactive}}}_{=\,0_{H_{l+1}, W_{l+1}}}\Big). \tag{1}$$
Equation (1) shows that the dead neuron $X^{(l)}_j$ causes the weights $W^{(l+1)}_{j,p}, \forall p \in [C_{l+1}]$, to be inactive. Such inactive weights do not account for the actual resource usage even when they remain in the pruned network, which prevents exact modeling of the user-specified hard resource constraints (FLOPs and network size). Furthermore, inactive weights left unpruned during the pruning procedure are an even bigger problem for nonsequential convolutional networks due to their skip connections. To address this problem, we introduce a quadratic optimization-based algorithm that provably eliminates all the inactive weights during the pruning procedure. 2.2 QUADRATIC COUPLING . Existing channel pruning methods remove channels according to their importance. However, measuring a channel's contribution to the network should also take into account the channels in the neighboring layers, as illustrated in Figure 2. In the example, we define the importance of a channel as the absolute sum of weights in the channel, as in Li et al. (2017), and assume the objective is to maximize the absolute sum of weights in the whole pruned network, excluding the inactive weights. We compare two different channel pruning methods: (a) a standard channel pruning method that greedily prunes each channel independently, and (b) our pruning method that considers the effect of the channels in neighboring layers when pruning.
As a result of running each pruning algorithm, (a) prunes the second output channel of the first convolution and the third output channel of the second convolution, while (b) prunes the first output channel of the first convolution, the third output channel of the second convolution, and the first input channel of the second convolution. The objective values for the pruned networks are (a) 18 and (b) 21, respectively. This shows that the coupling effect of the channels in neighboring layers directly affects the objective values and ultimately results in a performance gap between (a) and (b). We call this coupling relationship the quadratic coupling between neighboring layers and formulate the contributions to the objective by quadratic terms of neighboring channel activations. To address this quadratic coupling, we propose a channel pruning method based on the QCQP framework with importance evaluation respecting both the input and the output channels. 3 METHOD . In this section, we first propose our discrete QCQP formulation of channel pruning for sequential convolutional neural networks (CNNs). Then, we present an extended version of our formulation for joint channel and shape pruning of 2D convolution filters. The generalization to nonsequential convolutions (skip addition and skip concatenation) is introduced in Supplementary material A. 3.1 FORMULATION OF CHANNEL PRUNING FOR SEQUENTIAL CNNS . To capture the importance of the weights in $W^{(l)}$, we define the importance tensor $I^{(l)} \in \mathbb{R}_{+}^{C_{l-1} \times C_l \times K_l \times K_l}$. Following the protocol of Han et al. (2015); Guo et al. (2016), we set $I^{(l)} = \gamma_l \big|W^{(l)}\big|$, where $\gamma_l$ is the $\ell_2$ normalizing factor of the $l$-th layer, i.e., $\|\mathrm{vec}(W^{(l)})\|^{-1}$. Then, we define the binary pruning mask as $A^{(l)} \in \{0,1\}^{C_{l-1} \times C_l \times K_l \times K_l}$.
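The greedy-versus-coupled gap from the Figure 2 discussion can be reproduced in miniature. The sketch below uses made-up per-channel importance values (not Figure 2's actual weights): a greedy ranking that looks only at the first convolution's importances picks a worse channel subset than a brute-force search over the tiny coupled objective.

```python
import numpy as np
from itertools import combinations

# per-channel importance (absolute weight sums) of the 3 middle channels,
# seen from the first conv (output side) and the second conv (input side);
# illustrative numbers, not the values behind Figure 2's 18-vs-21 gap
imp_out = np.array([3.0, 2.9, 1.0])
imp_in = np.array([0.0, 0.2, 5.0])
keep = 2                                   # channel budget

# (a) greedy: rank middle channels by the first conv's importance alone
greedy = list(np.argsort(-imp_out)[:keep])
greedy_val = float(sum(imp_out[i] + imp_in[i] for i in greedy))

# (b) coupled: brute-force the tiny problem; the objective counts the active
# weights on BOTH sides of each kept channel
best = max(combinations(range(3), keep),
           key=lambda s: sum(imp_out[i] + imp_in[i] for i in s))
best_val = float(sum(imp_out[i] + imp_in[i] for i in best))

assert best_val > greedy_val               # ignoring the coupling loses objective
print(greedy_val, best_val)
```

The greedy choice keeps the channels {0, 1} (value 6.1), while the coupled search keeps {1, 2} (value 9.1); the gap comes entirely from counting the second layer's weights that each kept channel activates.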
For channel pruning in sequential CNNs, we define the channel activation $r^{(l)} \in \{0,1\}^{C_l}$ to indicate which indices of channels remain in the $l$-th layer of the pruned network. Then, the weights in $W^{(l)}_{i,j}$ are active if and only if $r^{(l-1)}_i r^{(l)}_j = 1$, which leads to $A^{(l)}_{i,j} = r^{(l-1)}_i r^{(l)}_j J_{K_l}$. For example, in Figure 2b, $r^{(l-1)} = [1,1,1]^\top$, $r^{(l)} = [0,1]^\top$, and $r^{(l+1)} = [1,1,0]^\top$; therefore,
$$A^{(l)} = \begin{bmatrix} 0 & 1 \\ 0 & 1 \\ 0 & 1 \end{bmatrix} \otimes J_{K_l} \quad\text{and}\quad A^{(l+1)} = \begin{bmatrix} 0 & 0 & 0 \\ 1 & 1 & 0 \end{bmatrix} \otimes J_{K_{l+1}}.$$
We wish to directly maximize the sum of the importance of active weights after the pruning procedure under given resource constraints: 1) FLOPs, 2) memory, and 3) network size. Concretely, our optimization problem is
$$\begin{aligned} \underset{r^{(0:L)}}{\text{maximize}}\quad & \sum_{l=1}^{L} \big\langle I^{(l)}, A^{(l)} \big\rangle \\ \text{subject to}\quad & \sum_{l=0}^{L} a_l \big\| r^{(l)} \big\|_1 + \sum_{l=1}^{L} b_l \big\| A^{(l)} \big\|_1 \le M \\ & A^{(l)} = r^{(l-1)} {r^{(l)}}^\top \otimes J_{K_l} \quad \forall l \in [L] \\ & r^{(l)} \in \{0,1\}^{C_l}. \end{aligned} \tag{2}$$
In our formulation, the actual resource usage of the pruned network is computed exactly from the number of channels in the pruned network ($= \|r^{(l)}\|_1$) and the pruning-mask sparsity ($= \|A^{(l)}\|_1$) in each layer. Concretely, the left-hand side of the inequality in the first constraint of Equation (2) gives the actual resource usage. Table 1 shows the $a_l$ and $b_l$ terms used for computing the usage of each resource. Note that this optimization problem is a discrete nonconvex QCQP in the channel activations $[r^{(0)}, \ldots, r^{(L)}]$, whose objective, identical to that in Section 2.2, respects the quadratic coupling of the channel activations ($= r^{(l)}$). Please refer to Supplementary material E for details on the standard QCQP form of Equation (2).
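The mask construction $A^{(l)} = r^{(l-1)} {r^{(l)}}^\top \otimes J_{K_l}$ and the resource-usage left-hand side of Equation (2) are mechanical to verify. A NumPy sketch of the Figure 2b example follows; the filter size $K_l = 3$ and the unit cost coefficients $a_l = b_l = 1$ are assumptions for illustration (Table 1 gives the real per-resource coefficients):

```python
import numpy as np

K = 3                                # filter size K_l (assumed for illustration)
J = np.ones((K, K))                  # the all-ones tensor J_K

r_prev = np.array([1, 1, 1])         # r^(l-1)
r_cur = np.array([0, 1])             # r^(l)
r_next = np.array([1, 1, 0])         # r^(l+1)

# A^(l) = r^(l-1) r^(l)^T (Kronecker) J_K: outer product over the channel
# dimensions, broadcast over the two spatial dimensions
A_l = np.outer(r_prev, r_cur)[:, :, None, None] * J
A_l1 = np.outer(r_cur, r_next)[:, :, None, None] * J
assert np.array_equal(A_l[:, :, 0, 0], [[0, 1], [0, 1], [0, 1]])
assert np.array_equal(A_l1[:, :, 0, 0], [[0, 0, 0], [1, 1, 0]])

# objective <I^(l), A^(l)> with a random nonnegative importance tensor,
# and the resource-usage LHS with placeholder costs a_l = b_l = 1
I_l = np.abs(np.random.default_rng(0).normal(size=A_l.shape))
objective = float(np.sum(I_l * A_l))
usage = (r_prev.sum() + r_cur.sum() + r_next.sum()) + (A_l.sum() + A_l1.sum())
assert usage == 6 + (3 + 2) * K * K  # 6 active channels, 5 active K x K filters
```

Note how `usage` counts only the active channels and active mask entries, so inactive weights never contribute, which is exactly the property that makes the hard resource constraint exact.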
In this manuscript, a new pruning method is proposed by considering the inherent quadratic coupling between consecutive layers. Without accounting for this coupling, inactive weights cannot be safely removed. Even with the same objective function, the optimized result differs, as shown in the motivation section. Based on this observation, the pruning task is modeled as a QCQP optimization problem, and a faster algorithm to solve this problem is proposed. Moreover, pruning of the filter size can also be modeled as a QCQP problem, making pruning of both channels and filter size feasible.
SP:3879a6f9b904429fa766fe496ca16cefecdc5d02
Graph-Based Continual Learning
1 INTRODUCTION . Recent breakthroughs of deep neural networks often hinge on the ability to repeatedly iterate over stationary batches of training data. When exposed to incrementally available data from non-stationary distributions, such networks often fail to learn new information without forgetting much of their previously acquired knowledge, a phenomenon known as catastrophic forgetting (Ratcliff, 1990; McCloskey & Cohen, 1989; French, 1999). Despite significant advances, this limitation has remained a long-standing challenge for computational systems that aim to continually learn from dynamic data distributions (Parisi et al., 2019). Among the various proposed solutions, rehearsal approaches that store samples from previous tasks in an episodic memory and regularly replay them are one of the earliest and most successful strategies against catastrophic forgetting (Lin, 1992; Rolnick et al., 2019). An episodic memory is typically implemented as an array of independent slots; each slot holds one example coupled with its label. During training, these samples are interleaved with those from the new task, allowing for simultaneous multi-task learning as if the resulting data were independently and identically distributed. While such approaches are effective in simple settings, they require sizable memory and are often impaired by memory constraints, performing rather poorly on complex datasets. A possible explanation is that slot-based memories fail to utilize the relational structure between samples; semantically similar items are treated independently both during training and at test time. In marked contrast, relational memory is a prominent feature of biological systems that has been strongly linked to successful memory retrieval and generalization (Prince et al., 2005).
Humans, for example, encode event features into cortical representations and bind them together in the medial temporal lobe, resulting in a durable yet flexible form of memory (Shimamura, 2011). In this paper, we introduce a novel Graph-based Continual Learning model (GCL) that resembles some characteristics of relational memory. More specifically, we explicitly model pairwise similarities between samples, including both those in the episodic memory and those found in the current task. These similarities allow for representation transfer between samples and provide a resilient means to guard against catastrophic forgetting. Our contributions are twofold: (1) We propose the use of random graphs to represent relational structures between samples. While similar notions of dependencies have been proposed in the literature (Louizos et al., 2019; Yao et al., 2020), the application of random graphs in task-free continual learning is novel, at least to the best of our knowledge. (2) We introduce a new regularization objective that leverages such random graphs to alleviate catastrophic forgetting. In contrast to previous work (Rebuffi et al., 2017; Li & Hoiem, 2017) based on knowledge distillation (Hinton et al., 2015), the objective penalizes the model for forgetting learned edges between samples rather than their output predictions. Our approach performs competitively on four commonly used datasets, improving accuracy by up to 19.7% and reducing forgetting by almost 37% in the best case when benchmarked against competitive baselines in task-free continual learning. 2 PROBLEM FORMULATION . In this work, we follow the learning protocol for image classification from Lopez-Paz & Ranzato (2017). More specifically, we consider a training set $D = \{D_1, \cdots, D_T\}$ consisting of $T$ tasks, where the dataset for the $t$-th task, $D_t = \{(x_i^t, y_i^t)\}_{i=1}^{n_t}$, contains $n_t$ input-target pairs $(x_i^t, y_i^t) \in \mathcal{X} \times \mathcal{Y}$.
While the tasks arrive sequentially and exclusively, we assume the input-target pairs $(x_i^t, y_i^t)$ in each task are independent and identically distributed (i.i.d.). The goal is to learn a supervised model $f_\theta : \mathcal{X} \to \mathcal{Y}$, parametrized by $\theta$, that outputs a class label $y \in \mathcal{Y}$ given an unseen image $x \in \mathcal{X}$. Following prior work (Lopez-Paz & Ranzato, 2017; Riemer et al., 2018; Chaudhry et al., 2019), we consider online streams of tasks in which samples from different tasks arrive at different times. As an additional constraint, we insist that the model can only revisit a small amount of data chosen to be stored in a fixed-size episodic memory $\mathcal{M}$. For clarity, we refer to the data in such an episodic memory as context images and context labels, denoted by $X_C = \{x_i\}_{i \in C}$ and $Y_C = \{y_i\}_{i \in C}$, respectively. These are to be distinguished from those in the current task, which we refer to as target images and target labels, denoted by $X_T = \{x_j\}_{j \in T}$ and $Y_T = \{y_j\}_{j \in T}$, respectively. While the model is allowed to update the context samples during training, the episodic memory is necessarily frozen at test time. 3 GRAPH-BASED CONTINUAL LEARNING . In this section, we propose a Graph-based Continual Learning (GCL) algorithm. While most rehearsal approaches ignore the correlations between images and independently pass them through a network to compute predictions (Rebuffi et al., 2017; Chaudhry et al., 2019; Aljundi et al., 2019c), we model pairwise similarities between the images with learnable edges in random graphs (see Figure 1). Intuitively, although it might be easy for the model to forget any particular sample, the multiple connections it forms with similar neighbors are harder to forget altogether. If trained well, the random graphs can therefore equip the model with a plastic and durable means to fight catastrophic forgetting. Graph Construction .
Given a minibatch of target images $X_T$ from the current task, our model makes predictions based on the context images $X_C$ and context labels $Y_C$ that span several previously seen tasks, up to and including the current one. In particular, we explicitly build two random graphs of pairwise dependencies: an undirected graph $G$ between the context images $X_C$, and a directed, bipartite graph $A$ from the context images $X_C$ to the target images $X_T$. Since an undirected graph can be thought of as a directed graph between its vertices and a copy of itself, we treat the context graph $G$ as such and build it analogously to the context-target graph $A$. Specifically, the high-dimensional context images $X_C$ and target images $X_T$ are first mapped to the image embeddings $U_C$ and $U_T$, respectively, using an image encoder $f_{\theta_1} : \mathcal{X} \to \mathbb{R}^{d_1}$. Following Louizos et al. (2019), we then represent the edges in each graph by independent Bernoulli random variables whose means are specified by a kernel function in the embedding space. More precisely, the distribution of the resulting Erdős-Rényi random graphs (Erdös & Rényi, 1959) can be defined as
$$p(G \mid U_C) = \prod_{i \in C} \prod_{k \in C} \mathrm{Ber}\big(G_{ik} \mid \kappa_\tau(u_i, u_k)\big), \tag{1}$$
$$p(A \mid U_T, U_C) = \prod_{j \in T} \prod_{k \in C} \mathrm{Ber}\big(A_{jk} \mid \kappa_\tau(u_j, u_k)\big), \tag{2}$$
for all $i, k \in C$ and $j \in T$, where $\kappa_\tau : \mathbb{R}^{d_1} \times \mathbb{R}^{d_1} \to [0, \infty)$ is a kernel function that encodes similarities between image embeddings, such as the RBF kernel $\kappa_\tau(u_i, u_j) = \exp\big(-\frac{\tau}{2}\|u_i - u_j\|_2^2\big)$. Here, with a slight abuse of notation, we also use $G$ and $A$ to denote the corresponding adjacency matrices; $A_{jk} \in \{0,1\}$, for example, represents the presence or absence of a directed edge between the $j$-th target image and the $k$-th context image. Predictive Distribution . Given a context graph $G$ and a context-target graph $A$ that encode pairwise similarities to the context images, our next step is to propagate information from the context images $X_C$ and context labels $Y_C$ to make predictions.
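The graph construction of Equations (1) and (2) amounts to computing RBF-kernel edge means between embeddings and sampling independent Bernoulli edges. A minimal NumPy sketch, with hypothetical embedding sizes (5 context images, 3 target images, $d_1 = 4$):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(u, v, tau=1.0):
    # kappa_tau(u, v) = exp(-tau/2 * ||u - v||^2), the RBF kernel from the text
    return np.exp(-0.5 * tau * np.sum((u - v) ** 2, axis=-1))

# hypothetical image embeddings: |C| = 5 context, |T| = 3 target, d1 = 4
U_C = rng.normal(size=(5, 4))
U_T = rng.normal(size=(3, 4))

# Bernoulli edge means for the context graph G and context-target graph A
P_G = rbf(U_C[:, None, :], U_C[None, :, :])   # (5, 5), symmetric
P_A = rbf(U_T[:, None, :], U_C[None, :, :])   # (3, 5)
np.fill_diagonal(P_G, 0.0)                    # self-edges removed (Section 3)

# one Monte Carlo sample of each Erdos-Renyi graph
G = rng.binomial(1, P_G)
A = rng.binomial(1, P_A)
assert G.shape == (5, 5) and A.shape == (3, 5)
```

The RBF kernel keeps every edge mean in $(0, 1]$, so the kernel values can be used directly as Bernoulli parameters; the temperature $\tau$ trades off how sharply similarity decays with embedding distance.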
To that end, we embed $X_C$ by another image encoder $f_{\theta_2}$ with weights partially tied to the previous one $f_{\theta_1}$, and encode $Y_C$ by a linear label encoder before concatenating the resulting embeddings into latent representations $V_C \in \mathbb{R}^{|C| \times d_2}$. In combination with the distributions of $G$ and $A$, we compute context-aware representations for the context images and target images, denoted by $\{z_i\}_{i \in C}$ and $\{z_j\}_{j \in T}$, respectively:
$$p(z_i \mid U_C, V_C) = \int_G \mathbb{I}_{\{\tilde{G}_i V_C\}}(z_i)\, dP(G \mid U_C), \tag{3}$$
$$p(z_j \mid U_T, U_C, V_C) = \int_A \mathbb{I}_{\{\tilde{A}_j V_C\}}(z_j)\, dP(A \mid U_T, U_C), \tag{4}$$
where $\tilde{G}_i$ and $\tilde{A}_j$ indicate the $i$-th and $j$-th rows of $G$ and $A$, each normalized to sum to 1, and $\mathbb{I}_S(\cdot)$ denotes the indicator function on a set $S$. Intuitively, the representations $V_C$ are linearly weighted by each graph sample, and the normalization step ensures proper scaling in case the numbers of edges formed with the context images vary. Once we summarize each image by the context samples, a final network $f_{\theta_3} : \mathbb{R}^{d_2} \to \mathcal{Y}$ takes as input the context-aware representations and produces predictive distributions:
$$p(y_i \mid X_C) = \int_{z_i} p\big(y_i \mid f_{\theta_3}(z_i)\big)\, dP(z_i \mid U_C, V_C), \tag{5}$$
$$p(y_j \mid x_j, X_C) = \int_{z_j} p\big(y_j \mid f_{\theta_3}(z_j)\big)\, dP(z_j \mid U_T, U_C, V_C). \tag{6}$$
Since the number of random binary graphs $G$ and $A$ is exponential, we approximate the integrals in (1)-(6) by Monte Carlo samples. More specifically, we use one sample of $G$ and $A$ during training and 30 samples of $A$ during testing. Also, these graph samples are inherently non-differentiable, so we use the Gumbel-Softmax relaxations of the Bernoulli random variables during training (Maddison et al., 2016; Jang et al., 2016). The degree of approximation is controlled by temperature hyperparameters, which exert significant influence over the density of the graph samples. We find that a small temperature for $G$ and a larger temperature for $A$ work well. There are several reasons for making the graphs $G$ and $A$ random.
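The Monte Carlo prediction path of Equations (3)-(6) reduces, per graph sample, to a row-normalized adjacency matrix multiplying $V_C$, followed by the final network and an average over samples. A sketch with a hypothetical linear stand-in for $f_{\theta_3}$ (the real model is a learned network; everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

n_ctx, n_tgt, d2, n_classes = 5, 3, 8, 4
V_C = rng.normal(size=(n_ctx, d2))        # latent context representations
P_A = rng.uniform(size=(n_tgt, n_ctx))    # Bernoulli edge means from the kernel

def f3(z):
    # stand-in for the final network f_theta3: fixed linear map + softmax
    W = np.linspace(-1.0, 1.0, d2 * n_classes).reshape(d2, n_classes)
    logits = z @ W
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Monte Carlo over graph samples (30 at test time, as stated in the text)
preds = []
for _ in range(30):
    A = rng.binomial(1, P_A).astype(float)          # one graph sample
    row_sum = A.sum(axis=1, keepdims=True)
    A_tilde = np.divide(A, row_sum, out=np.zeros_like(A), where=row_sum > 0)
    z = A_tilde @ V_C                               # z_j = A~_j V_C, Eq. (4)
    preds.append(f3(z))
p = np.mean(preds, axis=0)                          # averaged predictions
assert np.allclose(p.sum(axis=-1), 1.0)
```

The `np.divide(..., where=row_sum > 0)` guard handles the corner case where a target image draws no edges to any context image in a given sample; during training, the hard `binomial` draw would be replaced by a Gumbel-Softmax relaxation so gradients can flow.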
First, the stochasticity induced by the Bernoulli random variables allows us to output multiple predictions and average them, and such ensemble techniques have been quite successful in continual learning settings (Coop et al., 2013; Fernando et al., 2017). Perhaps more importantly, we find that the deterministic version, with the Bernoulli random variables replaced by their parameters, results in very sparse graphs where samples from the same classes are often deemed dissimilar. In a similar fashion to dropout (Srivastava et al., 2014), the random edges encourage the model to be less reliant on a few particular edges and therefore promote knowledge transfer between samples. By a similar reasoning, we remove self-edges in the context graph and observe more connections between samples as a result. Graph Regularization . As training switches to new tasks, the distributional shifts in the target images necessarily result in changes to both the context graph $G$ and the context-target graph $A$. In addition, the context images are regularly updated to be representative of the data distribution up to that point, so any well-learned connections between the context images are also susceptible to catastrophic forgetting. As a remedy, we save the parameters of the Bernoulli edges to the episodic memory in conjunction with the context images and context labels, and introduce a regularization term that discourages the model from forgetting previously learned edges:
$$\mathcal{L}_G^{(b)}(\theta_1) \triangleq \frac{1}{|I^{(b)}|}\, \ell\Big(p\big(G^{(b-1)}_{I^{(b)}}\big),\, p\big(G^{(b)}_{I^{(b)}}\big)\Big). \tag{7}$$
Here, $\ell(\cdot, \cdot)$ denotes the cross-entropy between two probability distributions, $I^{(b)}$ the index set of edges to be regularized in the $b$-th minibatch, and $G^{(b-1)}$ the adjacency matrix learned from the beginning up to the previous minibatch. The selection strategy for $I^{(b)}$ is discussed in the next subsection.
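Since each edge is Bernoulli, the cross-entropy in Equation (7) can be written per edge and averaged over the selected index set $I^{(b)}$. A small sketch (the saved edge parameters and index set are made up for illustration):

```python
import numpy as np

def edge_cross_entropy(p_old, p_new, eps=1e-8):
    # per-edge cross-entropy between two Bernoulli distributions:
    # l(p, q) = -[ p log q + (1 - p) log(1 - q) ]
    q = np.clip(p_new, eps, 1.0 - eps)
    return -(p_old * np.log(q) + (1.0 - p_old) * np.log(1.0 - q))

def graph_reg_loss(G_old, G_new, idx):
    # L_G^(b) = (1/|I^(b)|) * sum over edges (i,k) in I^(b) of l(., .)
    rows = [i for i, _ in idx]
    cols = [k for _, k in idx]
    return float(edge_cross_entropy(G_old[rows, cols], G_new[rows, cols]).mean())

G_prev = np.full((5, 5), 0.9)        # saved edge parameters, G^(b-1)
idx = [(0, 1), (2, 3), (4, 0)]       # I^(b): edges selected for regularization
loss_keep = graph_reg_loss(G_prev, G_prev, idx)        # edges preserved
loss_flip = graph_reg_loss(G_prev, 1.0 - G_prev, idx)  # edges forgotten
assert loss_flip > loss_keep         # forgetting learned edges is penalized
```

When the current edge parameters match the saved ones, the loss reduces to the (small) Bernoulli entropy; flipping a confident edge drives the penalty up sharply, which is the behavior the regularizer relies on.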
Besides the regularization term, our training objective includes two other cross-entropy losses, one for the context images and another for the target images:
$$\mathcal{L}(\theta_1, \theta_2, \theta_3) = \frac{\lambda_C}{|C|} \sum_{i \in C} \ell\big(y_i, \hat{y}_i^{(s)}\big) + \frac{\lambda_T}{|T|} \sum_{j \in T} \ell\big(y_j, \hat{y}_j^{(s)}\big) + \lambda_G \mathcal{L}_G^{(b)}(\theta_1), \tag{8}$$
where $\hat{y}_i^{(s)} = f_{\theta_3}(z_i^{(s)})$, $\hat{y}_j^{(s)} = f_{\theta_3}(z_j^{(s)})$, and $z_i^{(s)} \sim p(z_i \mid U_C, V_C)$, $z_j^{(s)} \sim p(z_j \mid U_T, U_C, V_C)$ are context-aware samples from Equations (3) and (4), and $\lambda_C$, $\lambda_T$, $\lambda_G$ are hyperparameters. While the graph regularization term appears similar to knowledge distillation (Hinton et al., 2015), we emphasize that the former aims to preserve the covariance structures between the outputs of the image encoder $f_{\theta_1}$ rather than the outputs themselves. We believe that in light of new data, the image encoder should be able to update its potentially superficial representations of previously seen samples as long as it keeps the correlations between them unchanged. Indeed, some of the early regularization approaches based on knowledge distillation (Li & Hoiem, 2017; Rebuffi et al., 2017) are sometimes too restrictive and reportedly underperform in certain scenarios (Kemker & Kanan, 2017). Task-Free Knowledge Consolidation . When task identities are not available, we use reservoir sampling (Vitter, 1985) to update the context images and context labels, as in Riemer et al. (2018). The sampling strategy takes as input a stream of data and randomly replaces a context sample in the episodic memory with a target sample, with probability inversely proportional to the number of samples observed so far. Despite its simplicity, reservoir sampling has been shown to yield strong performance in recent work (Chaudhry et al., 2019; Riemer et al., 2018; Rolnick et al., 2019). While most prior work uses task boundaries to perform knowledge consolidation at the end of each task (Kirkpatrick et al., 2017; Rebuffi et al., 2017), we update the context graph in memory after every minibatch of training data. In addition, such updates are performed at the sample level to maximize flexibility; we keep track of the cross-entropy loss on each context sample and only update its edges in the graph when the model reaches a new low (denoted by $I^{(b)}$ previously). Intuitively, the loss measures how well the model has learned the context image through the connections it forms with others, so meaningful relations are most likely obtained at the bottom of the loss surface. Though samples from the same task often provide more support for each other, the task-agnostic mechanism for updating the context graph also allows for knowledge transfer across tasks when necessary. Memory and Time Complexity . The inclusion of pairwise similarities and graph regularization results in a time and memory complexity of $O(|\mathcal{M}|^2 + |\mathcal{M}|N)$ and $O(|\mathcal{M}|^2)$, respectively, where $|\mathcal{M}|$ denotes the size of the episodic memory and $N$ the batch size for target images. The quadratic costs in $|\mathcal{M}|$, however, are not concerning in practice, as we deliberately use a small, fixed-size episodic memory. The cost of storing $G$ is often dwarfed by the memory required for storing high-dimensional images, as each edge only needs one floating-point number (see Appendix E for more details on memory usage).
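The reservoir-sampling update used for the task-free episodic memory can be sketched in a few lines (the memory holds raw samples here; in the model each slot would also carry a label and saved edge parameters):

```python
import random

def reservoir_update(memory, capacity, sample, n_seen):
    # Vitter's reservoir sampling: after n_seen stream samples, each one
    # remains in the fixed-size memory with equal probability capacity/n_seen
    if len(memory) < capacity:
        memory.append(sample)
    else:
        j = random.randint(0, n_seen - 1)
        if j < capacity:
            memory[j] = sample

random.seed(0)
memory, capacity = [], 50
for n, x in enumerate(range(10_000), start=1):  # the online data stream
    reservoir_update(memory, capacity, x, n)
assert len(memory) == capacity
assert all(0 <= x < 10_000 for x in memory)
```

Because the replacement probability shrinks as more samples are seen, the memory converges to a uniform sample of the whole stream without ever needing task boundaries, which is what makes the consolidation mechanism task-free.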
This paper presents a memory-based continual learning model in which relationships between training samples are represented by a random graph defined from a non-linear embedding of the input data. Catastrophic forgetting between tasks is partially alleviated by (1) a graph regularization that penalizes changes in random-graph statistics, and (2) memory replay with reservoir sampling to update the memory. The performance of this model is evaluated against several state-of-the-art models for handling catastrophic forgetting.
SP:4e23c046f8234b35d88e3957b0725fb7a3d06374