paper_name stringlengths 11 170 | text stringlengths 8.07k 307k | summary stringlengths 152 6.16k | paper_id stringlengths 43 43 |
|---|---|---|---|
Does Entity Abstraction Help Generative Transformers Reason? | 1 INTRODUCTION . Transformer language models ( TLMs ; Vaswani et al . 2017 ) have enabled rapid progress in natural language processing ( NLP ) . When pre-trained on large corpora ( such as the web ) to predict the next tokens or a set of masked tokens from an input sequence , TLMs can capture linguistic knowledge ( Peters et al. , 2018 ; Goldberg , 2019 ; Tenney et al. , 2019b ) , and yield state-of-the-art performance on many NLP tasks with little to no task supervision ( Devlin et al. , 2019 ; Radford et al. , 2018 ; 2019 ; Brown et al. , 2020 ) . However , it is not clear if these models can capture higher-level knowledge such as reasoning skills that can be re-used in arbitrary contexts , and in ways that leverage the compositionality of those skills ( Lake & Baroni , 2018 ; Liška et al. , 2018 ) , something logical reasoners can do relatively well on a smaller scale ( De Raedt et al. , 2007 ; Fierens et al. , 2015 ) . Simple compositional tasks such as SCAN ( Lake & Baroni , 2018 ) , CLUTRR ( Sinha et al. , 2019 ) , and ProofWriter ( Clark et al. , 2020 ; Tafjord et al. , 2020 ) can help diagnose the compositional generalization behavior of language models . Recent work on some of these datasets showed that TLMs still struggle to learn reasoning strategies that can be re-used in out-of-training-distribution settings ( Lake & Baroni , 2018 ; Gontier et al. , 2020 ) . If we look at how logical reasoners operate , we find that they have an important abstraction component ( going from grounded entities to higher-level concepts ) before logical reasoning can start ( De Raedt et al. , 2007 ) . Going from an original text sequence to its higher-order meaning is an important part of the NLP pipeline ( part of it being entity type tagging ) . 
Similarly in mathematics , the introduction of generic variables allows one to progress in a logical reasoning process without keeping track of every ( grounded ) atomic entity . Overall , this idea that we call abstraction seems to be an important part of logical reasoning . Recent work suggests that incorporating external knowledge about grounded entities could improve language models ’ abilities to reason and generalize ( Ji et al. , 2017 ; Zhang et al. , 2019 ; Moosavi et al. , 2020 ; Rosset et al. , 2020 ) . However , the empirical effect of incorporating generic entity types remains unclear , especially with recent studies suggesting that pre-trained models already encode some of that linguistic knowledge in their parameters ( Hewitt & Manning , 2019 ; Liu et al. , 2019a ; Clark et al. , 2019 ; Tenney et al. , 2019a ) . In this work , we study the effect of explicitly providing entity type abstraction in addition to the original input to pre-trained Transformers . We explore and evaluate different ways to incorporate entity type abstraction and observe that some methods are more efficient than others . To construct the abstract representation incorporated into TLMs , we leverage symbolic NLP representations such as entity type information given by popular NLP libraries . This allows for automatic and reproducible data processing . In general , our approach is the following : given an input sequence , we build a simplified version of it by replacing entity names by their corresponding entity types . This simplified sequence can then be used as extra input ( Sections 3.1 & 3.2 ) or as extra training signal to the model ( Section 3.3 ) . In particular , we explore three different ways to augment pre-trained Transformers with this abstract knowledge : 1. by combining token embeddings from both the original and the abstract sequence before encoding ( Section 3.1 ) ( Figure 1a & 1b ) . 2. 
by encoding both the original and the abstract sequence and combining them before decoding the target output ( Section 3.2 ) ( Figure 1c & 1d ) . 3. by adding a second language model head on top of the Transformer decoder to predict the abstract sequence ( Section 3.3 ) ( Figure 1e ) . A series of controlled experiments on two synthetic datasets show that models having access to abstract knowledge about entity types yield better performance at inference time both when interpolating and extrapolating to unseen lengths of reasoning chains . Synthetic data is used in order to control for the degree of compositional generalization required . Furthermore , in order to understand if the benefits observed in the synthetic cases are applicable in more realistic settings , we ran a series of experiments on two question answering datasets requiring various degrees of multi-hop reasoning . Unfortunately , results on these more natural language datasets show that abstraction-aware models are not significantly better than baseline models . Overall , our contributions are the following : 1. we introduce and compare empirically different ways to incorporate abstraction into pre-trained TLMs . 2. we show that incorporating abstract knowledge can significantly improve reasoning and compositional generalization in both interpolation and extrapolation when the environment is formally defined in a logical reasoning setting . 3. we show that abstraction-aware models may not benefit much when language is more natural and less procedural . We hope that our work will inspire future research in the field to look for simple inductive biases that can complement pre-trained models in their quest to achieve logical reasoning at scale . 2 RELATED WORK . Augmenting neural language models with knowledge about entities has been a popular method to improve their functionality . Ji et al . 
( 2017 ) trained an entity neural language model to predict sequences of entities with an LSTM ( Hochreiter & Schmidhuber , 1997 ) . At each sampling step , they predict the next word alongside a categorical variable indicating the current token ’ s entity ID . They obtained lower perplexity and better results on co-reference resolution and entity prediction tasks than a variety of baselines . Similarly , Rosset et al . ( 2020 ) trained a GPT2 model ( Radford et al. , 2019 ) by giving it access to entity knowledge at the input level and as an additional pre-training loss . Their model achieved better factual correctness on benchmarks such as LAMA ( Petroni et al. , 2019 ) , and performed better than a baseline GPT2 model in various question answering tasks . Inspired by this work and motivated by the goal of building better reasoning language models , we instead focus on the prediction of entity types rather than entity identifiers taken from a fixed list of entities . This allows our solution to be robust to new entities . In addition , we explore and compare different ways to incorporate the entity knowledge in an encoder-decoder architecture . Besides entity knowledge , other types of explicit information have also been given to Transformer models . Levine et al . ( 2019 ) trained a BERT-like model to learn word senses . They gave their model access to WordNet supersenses at the input level and as an additional training loss . They achieved better performance than other baselines on the SemEval Word Sense Disambiguation task ( Raganato et al. , 2017 ) . In addition , Moosavi et al . ( 2020 ) propose to improve robustness to data biases by augmenting the training data with predicate-argument structures . They train a BERT-base model ( Devlin et al. , 2019 ) with PropBank-style semantic role labeling ( Shi & Lin , 2019 ) on the MultiNLI ( Williams et al. , 2017 ) and SWAG ( Zellers et al. , 2018 ) datasets . 
Their results show that incorporating predicate-argument structure in the input sequence ( only during training ) makes the model more robust to adversarial examples in MultiNLI . Furthermore , Sachan et al . ( 2020 ) ask if syntax trees can help Transformers to better extract information . They augmented a pre-trained BERT model with a syntax graph neural network in order to encode syntax trees and measured the performance of their model against a BERT model on various tasks . Their results showed that the quality of the trees is closely tied to the performance boost observed . More recently , Porada et al . ( 2021 ) extended a RoBERTa model ( Liu et al. , 2019b ) with hypernym abstraction based on WordNet to evaluate the plausibility of events . Their model is able to better predict human plausibility judgements than other RoBERTa baselines . Although different in application , all these prior works leverage the general idea of explicitly giving more abstract knowledge to Transformer models , hence showing how flexible and generic this strategy can be . We take a similar approach with entity types , but in the hope of improving the reasoning skills of our baseline model . A flurry of recent work has also examined ways to augment TLMs with entities from external knowledge bases ( Zhang et al. , 2019 ; Peters et al. , 2019 ; Févry et al. , 2020 ; Verga et al. , 2020 ) . However , most of the time , these solutions rely on external components such as knowledge graphs with pre-trained entity embeddings , and/or an additional memory . While they often use entity linking as a way to perform co-reference resolution , they do not incorporate higher levels of abstraction such as entity types as we do here . 3 INTRODUCING ABSTRACTION INDUCTIVE BIASES . In this section we describe five different ways to incorporate abstraction into a pre-trained encoder-decoder model . 
Given an input sequence X , we use existing NLP tools such as the spaCy named entity tagger1 to make a simplified copy Xs of the input . This is a more generic copy of X . We run the spaCy recognizer on X to extract entity tags such as PERSON , ORG , GPE , etc . For each entity type , we create n additional vocabulary entries ( with randomly initialized embeddings ) such as [ PERSON 1 , . . . , PERSON n , ORG 1 , . . . , ORG n , . . . ] . Every entity in X is then replaced by its ( randomly numbered ) entity tag to make the simplified sequence Xs . If the same entity is present multiple times in X , each occurrence will be replaced by the same entity tag in Xs . If no entity is found for a token in X , the original token ’ s text is kept in Xs ( e.g . “ Bob Smith has a cat that he loves . Bob also loves Alexandra. ” would be transformed into “ PERSON 11 has a cat that he loves . PERSON 11 also loves PERSON 23. ” ) . The hyper-parameter n is set such that each distinct entity within the same sequence gets a different ID . If the same entity appears more than once in a single example , it will get the same ID every time within that sequence . However , we re-use the same set of entity tags across different examples . The value of n depends on each dataset ; details can be seen in Appendix A . Although this may look similar to attributing different entity IDs to each entity regardless of their type , we believe that after seeing many examples , all IDs of the same type will have a similar embedding since we randomly select such IDs across examples and across epochs . The number of overlapping entity tags across the entire dataset over many epochs will be large , pushing the embeddings of same-type entity tokens closer together . To test the effect of having multiple tags for the same entity type , we also ran experiments in which we set n = 1 , thus forcing overlapping entity tags within each example . 
However , we did not notice any conclusive difference , so the rest of this work will focus on the n > 1 setting described above . 1https://spacy.io/models/en In the following subsections , we describe different strategies to incorporate Xs into an encoder-decoder Transformer model . | The paper investigates the effect of incorporating entity type abstraction into pre-trained Transformers. To achieve that, the authors have tried five different architectures to build the abstraction-aware model. The proposed model is tested on NLP datasets for reasoning. Empirical results show that entity type abstraction is beneficial in formally defined logical reasoning environments with simple language. For QA datasets with more natural language, however, the baseline is already very strong and the improvement from incorporating abstraction is minor. | SP:944ebd7c4c81d2266fe950501917c948f6925bb2 |
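As a rough illustration of the first family of strategies from the paper above (Section 3.1), the token embeddings of X and its abstract copy Xs can be combined position by position before encoding. The sketch below is an assumption about the mechanism, not the paper's implementation: it supposes the two sequences are aligned to the same length, and shows two plausible combination modes (element-wise sum, or concatenation that the encoder would then project back down).

```python
def combine_embeddings(orig_ids, abs_ids, embed, mode="sum"):
    """Combine per-token embeddings of the original sequence X and its
    abstract copy Xs before feeding the result to the encoder.

    `embed` maps a token id to its embedding vector (a list of floats).
    The sequences must be aligned token by token, so the abstraction
    step is assumed to emit exactly one tag token per replaced token.
    """
    assert len(orig_ids) == len(abs_ids), "X and Xs must be aligned"
    combined = []
    for o, a in zip(orig_ids, abs_ids):
        eo, ea = embed[o], embed[a]
        if mode == "sum":      # add the two embeddings element-wise
            combined.append([x + y for x, y in zip(eo, ea)])
        else:                  # "concat": a double-width vector per token
            combined.append(list(eo) + list(ea))
    return combined
```

For instance, with a toy table `embed = {0: [1.0, 2.0], 2: [0.0, 1.0]}`, summing token 0 with its tag token 2 yields `[1.0, 3.0]`.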
Logic Pre-Training of Language Models | 1 INTRODUCTION . Machine reasoning in natural language understanding ( NLU ) aims to teach machines to understand human languages by building and analyzing the connections between facts , events , and observations using logical analysis techniques like deduction and induction , which is one of the ultimate goals towards human-parity intelligence . Although pre-trained language models ( PrLMs ) , such as BERT ( Devlin et al. , 2018 ) , GPT ( Radford et al. , 2018 ) , XLNet ( Yang et al. , 2019 ) and RoBERTa ( Liu et al. , 2019 ) , have established state-of-the-art performance on various aspects of NLU , they still fall short on complex language understanding tasks that involve reasoning ( Helwe et al. , 2021 ) . The major reason behind this is that they are insufficiently capable of capturing logic relations such as negation ( Kassner & Schütze , 2019 ) , factual knowledge ( Poerner et al. , 2019 ) , events ( Rogers et al. , 2020 ) , and so on . Many previous studies ( Sun et al. , 2021 ; Xiong et al. , 2019 ; Wang et al. , 2020 ) are then motivated to inject knowledge into pre-trained models like BERT and RoBERTa . However , they rely too heavily on massive external knowledge sources and ignore that language itself is a natural knowledge carrier and the basis of acquiring logic reasoning ability ( Ouyang et al. , 2021 ) . Taking the context in Figure 1 as an example , previous approaches tend to focus on entities such as the definition of `` government '' and the concepts related to it like `` governor '' , but overlook the exact relations inherent in this example , thus failing to model the complex reasoning process . Given the fact that PrLMs are the key supporting components in natural language understanding , in this work we propose a fundamental solution by empowering PrLMs with the capacity to capture logic relations , which is necessary for logical reasoning . 
However , logical reasoning can only be implemented on the basis of clear , accurate , and generalized knowledge . Therefore , we leverage the fact as the conceptual knowledge unit to serve as the basis for logic relation extraction . A fact is organized as a triplet , i.e. , in the form of predicate-argument structures , to represent meanings such as `` who-did-what-to-whom '' and `` who-is-what '' . Compared with existing studies that inject complex knowledge like knowledge graphs , the knowledge structure based on facts is far less complicated and more general in representing events and relations in languages . On top of the fact-based knowledge structure , we present PROPHET , a logic-aware pre-trained language model that learns logic-aware relations in a universal way from very large texts . In detail , we introduce three novel pre-training objectives based on the newly introduced knowledge structure : 1 ) logical connectives masking for learning sentence-level logic connections ; 2 ) a logical structure completion task on top of facts for regularization , aligning extracted facts with the original context ; 3 ) logical path prediction to capture the logic relationships between facts . PROPHET is evaluated on a broad range of language understanding tasks : natural language inference , semantic similarity , machine reading comprehension , etc . Experimental results show that the fact is useful as a carrier for knowledge modeling , and the newly introduced pre-training tasks can improve PROPHET and achieve significant improvements on downstream tasks.1 2 RELATED WORK . 2.1 PRE-TRAINED LANGUAGE MODELS IN NLP . Large pre-trained language models ( Devlin et al. , 2018 ; Liu et al. , 2019 ; Radford et al. , 2018 ) have brought dramatic empirical improvements on almost every NLP task in the past few years . A classical norm of pre-training is to train neural models on a large corpus with self-supervised pre-training objectives . 
`` Self-supervised '' means that the supervision provided in the training process is automatically generated from the raw text rather than manually created . Designing effective criteria for language modeling is one of the major topics in training pre-trained models , which decides how the model captures knowledge from large-scale unlabeled data . The most popular pre-training objective used today is masked language modeling ( MLM ) , initially used in BERT ( Devlin et al. , 2018 ) , which randomly masks out tokens that the model is then asked to uncover given the surrounding context . Recent studies have investigated diverse variants of denoising strategies ( Raffel et al. , 2020 ; Lewis et al. , 2020 ) , model architectures ( Yang et al. , 2019 ) , and auxiliary objectives ( Lan et al. , 2019 ; Joshi et al. , 2020 ) to improve model strength during pre-training . Although existing techniques have shown effectiveness in capturing syntactic and semantic information after large-scale pre-training , they are sensitive to role reversal and struggle with pragmatic inference and role-based event knowledge ( Rogers et al. , 2020 ) , which are critical to the ultimate goal of complex reasoning that requires uncovering logical structures . However , it is difficult for pre-trained language models to capture the logical structure inherent in texts since logical supervision is rarely available during pre-training . Therefore , we are motivated to explicitly guide the model to capture such clues via our newly introduced self-supervised tasks . 2.2 REASONING ABILITY FOR PRE-TRAINED LANGUAGE MODELS . There is a lot of work in the research line of enhancing reasoning abilities in pre-trained language models via injecting knowledge . The existing approaches mainly design novel pre-training objectives and leverage abundant knowledge sources such as WordNet ( Miller , 1995 ) . Notably , ERNIE 3.0 ( Sun et al. 
, 2021 ) uses a broad range of pre-training objectives from word-aware , structure-aware to knowledge-aware tasks , based on a 4TB corpus consisting of plain texts and a large-scale knowledge graph . WKLM ( Xiong et al. , 2019 ) replaces entity mentions in the document with other entities of the same type , and the objective is to distinguish the replaced entities from the original ones . KEPLER ( Wang et al. , 2021b ) encodes textual entity descriptions using embeddings from a PrLM to take full advantage of the abundant textual information . K-Adapter ( Wang et al. , 2020 ) designs neural adapters to distinguish the type of knowledge sources to capture various knowledge . Our proposed method differs from previous studies in three aspects . Firstly , our model does not require any external knowledge resources like previous methods that use WordNet , WikiData , etc . We only use small-scale textual sources following standard PrLMs like BERT ( Devlin et al. , 2018 ) , along with an off-the-shelf dependency parser to extract facts . Secondly , previous works only consider triplet-level pre-training objectives . We propose a multi-granularity pre-training strategy , considering not only triplet-level information but also sentence-level and global knowledge to enhance logic reasoning . Finally , we propose a new training mechanism apart from masked language modeling ( MLM ) , hoping to shed light on more logic pre-training strategies in this research line . 1Our codes have been uploaded as supplemental material , and will be open-sourced after the double-blind review period . 3 PRELIMINARIES . In this section , we introduce the concepts of fact and logical graph , which are the basis of PROPHET . We also describe how facts are extracted for logical graph construction , as in the example shown in Figure 1 . 3.1 FACT . Following Nakashole & Mitchell ( 2014 ) and Ouyang et al . 
( 2021 ) , we extract facts , which are triplets represented as T = { A1 , P , A2 } , where A1 and A2 are the arguments and P is the predicate between them . This can represent a broad range of facts , reflecting the notions of `` who-did-what-to-whom '' and `` who-is-what '' , etc . We extract such facts in a syntactic way , which makes our approach generic and easy to apply . Given a document , we first split the document into multiple sentences . For each sentence , we conduct dependency parsing using StanfordCoreNLP ( Manning et al. , 2014 ) .2 For the analyzed dependencies , basically , we consider verb phrases and some prepositions in the sentences as `` predicates '' , and then we search for their corresponding actors and actees as the `` arguments '' . 3.2 LOGICAL GRAPH . A logical graph is an undirected ( though not necessarily connected ) graph that represents logical dependency relations between components of facts . In logical graphs , nodes represent the arguments/predicates in a fact , and edges indicate whether two nodes are related in a fact . Such a structure can unveil and organize the semantic information captured by facts . Besides , a logical graph supports reasoning over long-range dependencies by connecting arguments and their relations in different facts across different spans . We further show how to construct such graphs based on facts . In addition to the relations given in facts , we design two further types of edges based on identical mentions and coreference information . ( 1 ) There can be identical mentions in different sentences , resulting in repeated nodes in facts . We connect nodes corresponding to the same non-pronoun arguments by edges with edge type same . ( 2 ) We conduct coreference resolution on the context using an off-the-shelf model to identify arguments in facts that refer to the same one.3 We add edges with type coref between them . The final logical graph is denoted as $S = ( V , E )$ , where $V = A_i \cup P$ with $i \in \{ 1 , 2 \}$ . 
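The graph construction above can be made concrete with a short sketch. This is an illustrative reading of Section 3.2, not the released code: the pronoun list and the choice to draw fact edges between each argument and its predicate are assumptions, and coreference pairs are taken as given (e.g. from an off-the-shelf coreference model such as neuralcoref).

```python
from collections import defaultdict

PRONOUNS = {"he", "she", "it", "they", "him", "her", "them"}  # assumed list

def build_logical_graph(facts, coref_pairs=()):
    """Build an undirected logical graph from (A1, P, A2) fact triplets.

    A node is a pair (fact_index, slot) with slot 0/1/2 = A1/P/A2, so
    repeated surface forms remain distinct nodes. Edges link each
    argument to its predicate within a fact, identical non-pronoun
    arguments across facts ('same'), and coreferent arguments ('coref').
    Returns (adjacency, label): adjacency maps node -> set of neighbours.
    """
    label, adj = {}, defaultdict(set)
    for i, triplet in enumerate(facts):
        for slot, text in enumerate(triplet):
            label[(i, slot)] = text
            adj[(i, slot)]  # touch the entry so isolated nodes exist too
        for u, v in (((i, 0), (i, 1)), ((i, 1), (i, 2))):  # fact edges
            adj[u].add(v); adj[v].add(u)
    args = [node for node in label if node[1] != 1]  # argument slots only
    for x, u in enumerate(args):
        for v in args[x + 1:]:
            lu, lv = label[u], label[v]
            same = lu == lv and lu.lower() not in PRONOUNS
            coref = (lu, lv) in coref_pairs or (lv, lu) in coref_pairs
            if same or coref:
                adj[u].add(v); adj[v].add(u)
    return adj, label

def has_path(adj, src, dst):
    """BFS path existence -- the label later used by logical path prediction."""
    seen, frontier = {src}, [src]
    while frontier:
        node = frontier.pop()
        if node == dst:
            return True
        for nxt in adj[node] - seen:
            seen.add(nxt)
            frontier.append(nxt)
    return False
```

With facts ("the government", "collects", "taxes") and ("taxes", "fund", "schools"), the repeated argument "taxes" yields a same edge, so a path exists from "the government" all the way to "schools".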
2https://stanfordnlp.github.io/CoreNLP/ ; we also tried to use OpenIE directly ; however , the performance was not satisfactory . 3https://github.com/huggingface/neuralcoref . 4 PROPHET . 4.1 MODEL ARCHITECTURE . We follow BERT ( Devlin et al. , 2018 ) and use a multi-layer bidirectional Transformer ( Vaswani et al. , 2017 ) as the model architecture of PROPHET . To keep the focus on the newly introduced techniques , we will not review the ubiquitous Transformer architecture in detail . We develop PROPHET using exactly the same model architecture as BERT-base , where the model consists of 12 transformer layers , with 768 hidden size , 12 attention heads , and 110M model parameters in total . 4.2 LOGIC-AWARE PRE-TRAINING TASKS . We describe the three pre-training tasks used for pre-training PROPHET in this section . Figure 2 is an illustration of PROPHET pre-training . The first task is logical connectives masking ( LCM ) , generalized from masked language modeling ( Devlin et al. , 2018 ) to logical connectives to learn sentence-level representation . The second task is logical structure completion ( LSC ) for learning the logic relationships inside a fact , where we first randomly mask arguments in facts and then predict those items . Finally , a logical path prediction ( LPP ) task is proposed for recognizing the logical relations of randomly selected node pairs . Logical Connective Masking Logical connective masking is an extension of the masked language modeling ( MLM ) pre-training objective in Devlin et al . ( 2018 ) , with a particular focus on connective indication tokens . We use the Penn Discourse TreeBank 2.0 ( PDTB ) ( Prasad et al. , 2008 ) to draw the logical relations among sentences . Specifically , PDTB 2.0 contains relations that are manually annotated on the 1-million-word Wall Street Journal ( WSJ ) corpus and are broadly characterized into `` Explicit '' and `` Implicit '' connectives . 
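A toy sketch of the LCM corruption step follows. The connective list here is a tiny stand-in for PDTB 2.0's explicit connectives, the replacement probabilities follow BERT's standard 80/10/10 MLM scheme (Devlin et al., 2018), and the selection order (connectives first, then random tokens up to a 15% budget) is our reading rather than the authors' code.

```python
import random

# A tiny stand-in for PDTB 2.0's 100 explicit connectives.
CONNECTIVES = {"because", "instead", "however", "therefore", "but", "so"}

def lcm_corrupt(tokens, vocab, mask_token="[MASK]", budget=0.15, seed=0):
    """Select every explicit connective, then top up with random tokens
    until ~15% of the sequence is selected; corrupt each selected token
    with BERT's 80/10/10 scheme (mask / random token / keep).

    Returns the corrupted tokens plus per-position labels (None where no
    prediction is required, the original token where it is).
    """
    rng = random.Random(seed)
    n_select = max(1, round(len(tokens) * budget))
    selected = [i for i, t in enumerate(tokens) if t.lower() in CONNECTIVES]
    others = [i for i in range(len(tokens)) if i not in selected]
    rng.shuffle(others)
    selected += others[:max(0, n_select - len(selected))]
    corrupted, labels = list(tokens), [None] * len(tokens)
    for i in selected:
        labels[i] = tokens[i]          # the MLM prediction target
        r = rng.random()
        if r < 0.8:
            corrupted[i] = mask_token  # 80%: mask
        elif r < 0.9:
            corrupted[i] = rng.choice(vocab)  # 10%: random token
        # else 10%: keep the token unchanged
    return corrupted, labels
```

On "He stayed home because it rained", the connective "because" is always among the prediction targets, regardless of the random top-up.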
We use the `` Explicit '' type ( in total 100 such connectives ) , which appear explicitly in sentences , such as the discourse adverbial `` instead '' or the subordinating conjunction `` because '' . Taking all the identified connectives and some randomly sampled other tokens ( for a total of 15 % of the tokens of the original context ) , we replace them with a [ MASK ] token 80 % of the time , with a random token 10 % of the time , and leave them unchanged 10 % of the time . The MLM objective is to predict the original tokens of these sampled tokens , which has proven effective in previous works ( Devlin et al. , 2018 ; Liu et al. , 2019 ) . In this way , the model learns to recover the logical relations between two given sentences , which helps language understanding . The objective of this task is denoted as $\mathcal{L}_{conn}$ . Logical Structure Completion To align representations between the context and the extracted facts , we introduce a pre-training task of logical structure completion . The motivation here is to encourage the model to learn a structure-aware representation that encodes `` Who-did-What-to-Whom ''-like meanings for better language understanding . In detail , we randomly select a specific proportion λ of the total facts ( λ = 20 % in this work ) from a given context . For each chosen fact , we either ask the model to complete `` Argument-Predicate- ? '' or `` Argument- ? -Argument '' ( the templates are selected with equal probability ) . We denote the blanks that need to be completed as $m_a$ and $m_p$ , denoting arguments and predicates , respectively . In our implementation , this objective is the same as masked language modeling to keep simplicity , using the original loss following Devlin et al . ( 2018 ) : $$\mathcal{L}_{align} = - \sum_{i \in a \cup p} \log D ( x_i \mid m_a , m_p ) , \quad ( 1 )$$ where D is the discriminator that predicts a token from a large vocabulary . 
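The LSC templates can be illustrated with a short sketch. The 20% proportion and the equal-probability template choice follow the text; the instance format (a fact with one blanked slot plus its target token) is an assumption about how the training data would be laid out.

```python
import random

def lsc_instances(facts, mask_token="[MASK]", proportion=0.2, seed=0):
    """Build logical structure completion training instances.

    For a randomly chosen `proportion` of the facts, blank either the
    second argument ("Argument-Predicate-?") or the predicate
    ("Argument-?-Argument") with equal probability; the blanked item
    becomes the prediction target, exactly as in masked language
    modeling.
    """
    rng = random.Random(seed)
    k = max(1, round(len(facts) * proportion))
    instances = []
    for i in rng.sample(range(len(facts)), k):
        a1, p, a2 = facts[i]
        if rng.random() < 0.5:
            instances.append(((a1, p, mask_token), a2))  # complete the argument
        else:
            instances.append(((a1, mask_token, a2), p))  # complete the predicate
    return instances
```

Given five facts, this yields one masked instance (20% of five, rounded), with exactly one blanked slot whose target comes from the original triplet.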
Logical Path Prediction To learn representations from the constructed logical graph , thus endowing the model with global logical reasoning ability , we propose the pre-training task of predicting whether there exists a path between two selected nodes in the logical graph . In this way , the model learns to look at logical relations across long distances of arguments and predicates in different facts . We randomly sample 20 % of the nodes from the logical graph to form a set $V'$ , giving $\binom{|V'|}{2}$ node pairs in total . We have a maximum number $max_p$ of node pairs to predict . To avoid bias in the training process , we try to make sure that $max_p/2$ are positive samples and the rest are negative samples , thus balancing the positive-negative ratio . If the number of positive/negative samples is less than $max_p/2$ , we just keep the original pairs . Formally , the pre-training objective of this task is calculated as below , following Guo et al . ( 2020 ) : $$\mathcal{L}_{Path} = - \sum_{v_i , v_j \in V'} \left [ \delta \log \sigma ( [ v_i , v_j ] ) + ( 1 - \delta ) \log ( 1 - \sigma ( [ v_i , v_j ] ) ) \right ] , \quad ( 2 )$$ where δ is 1 when $v_i$ and $v_j$ have a connected path and 0 otherwise , and $[ v_i , v_j ]$ denotes the concatenation of the representations of $v_i$ and $v_j$ . The final training objective is the sum of the three losses above : $$\mathcal{L} = \mathcal{L}_{conn} + \mathcal{L}_{align} + \mathcal{L}_{Path} . \quad ( 3 )$$ | The goal of the paper is to incorporate logical relations into pre-training of language models to remove the reliance of existing reasoning-enabled language models on external knowledge bases. This is done in a self-supervised way - facts (tuples of 2 arguments and a predicate) are parsed using dependency parsing, and then a logical graph is created to denote relationships between coreferents, and between predicates and arguments. Three pre-training objectives are presented over the facts and logical graph: logical connective masking, logical structure completion and logical path prediction. 
The authors' claimed contributions are the following: - 3 new pre-training objectives: logical connective masking, logical structure completion, logical path prediction - The model Prophet which "achieves significant improvement over various logic reasoning involved NLP and NLU downstream tasks" - An analysis that verifies how Prophet uses the context for logical reasoning | SP:2f65e47bd1d28c29907b5061ab6d3e11f9ff1c4b |
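Section 4.2's balanced pair sampling for logical path prediction can be sketched as below. The adjacency representation (node -> set of neighbours) and the BFS connectivity check are our assumptions about how path existence would be computed; the 20% node ratio and the cap of at most half positives, half negatives follow the paper's description.

```python
import itertools
import random

def sample_lpp_pairs(adj, max_pairs=8, node_ratio=0.2, seed=0):
    """Sample labelled node pairs for logical path prediction.

    Takes `node_ratio` of the graph's nodes, labels every pair 1/0 by
    path existence, then keeps at most max_pairs/2 pairs of each label
    to balance the positive-negative ratio (keeping all pairs of a
    label when fewer than max_pairs/2 exist).
    """
    rng = random.Random(seed)

    def connected(u, v):  # BFS path existence over the undirected graph
        seen, frontier = {u}, [u]
        while frontier:
            node = frontier.pop()
            if node == v:
                return True
            for nxt in adj[node] - seen:
                seen.add(nxt)
                frontier.append(nxt)
        return False

    k = max(2, round(len(adj) * node_ratio))
    nodes = rng.sample(sorted(adj), k)
    pos, neg = [], []
    for u, v in itertools.combinations(nodes, 2):
        (pos if connected(u, v) else neg).append((u, v))
    half = max_pairs // 2
    return ([(u, v, 1) for u, v in pos[:half]] +
            [(u, v, 0) for u, v in neg[:half]])
```

On a toy graph where only nodes "a" and "b" are linked, the sampler emits the single positive pair and caps the negatives at max_pairs/2.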
Logic Pre-Training of Language Models | 1 INTRODUCTION . Machine reasoning in natural language understanding ( NLU ) aims to teach machines to understand human languages by building and analyzing the connections between the facts , events , and observations using logical analysis techniques like deduction and induction , which is one of the ultimate goals towards human-parity intelligence . Although pre-trained language models ( PrLMs ) , such as BERT ( Devlin et al. , 2018 ) , GPT ( Radford et al. , 2018 ) , XLNet ( Yang et al. , 2019 ) and RoBERTa ( Liu et al. , 2019 ) , have established state-of-the-art performance on various aspects in NLU , they are still short in complex language understanding tasks that involve reasoning ( Helwe et al. , 2021 ) . The major reason behind this is that they are insufficiently capable of capturing logic relations such as negation ( Kassner & Schütze , 2019 ) , factual knowledge ( Poerner et al. , 2019 ) , events ( Rogers et al. , 2020 ) , and so on . Many previous studies ( Sun et al. , 2021 ; Xiong et al. , 2019 ; Wang et al. , 2020 ) are then motivated to inject knowledge into pre-trained models like BERT and RoBERTa . However , they too much rely on massive external knowledge sources and ignore that language itself is a natural knowledge carrier as the basis of acquiring logic reasoning ability ( Ouyang et al. , 2021 ) . Taking the context in Figure 1 as an example , previous approaches tend to focus on entities such as the definition of `` government '' and the concepts related to it like `` governor '' , but overlook the exact relations inherent in this example , thus failing to model the complex reasoning process . Given the fact that PrLMs are the key supporting components in natural language understanding , in this work , we propose a fundamental solution by empowering the PrLMs with the capacity of capturing logic relations , which is necessary for logical reasoning . 
However , logical reasoning can only be implemented on the basis of clear , accurate , and generalized knowledge . Therefore , we leverage fact as the conceptual knowledge unit to serve the basis for logic relation extraction . Fact is organized as a triplet , i.e. , in the form of predicate-argument structures , to represent the meaning such as `` who-did-what-to-whom '' and `` who-is-what '' . Compared with existing studies that inject complex knowledge like knowledge graphs , the knowledge structure based on fact is far less complicated and more general in representing events and relations in languages . On top of the fact-based knowledge structure , we present PROPHET , a logic-aware pre-trained language model to learn the logic-aware relations in a universal way from very large texts . In detail , we introduce three novel pre-training objectives based on the newly introduced knowledge structure basis fact : 1 ) logical connectives masking for learning sentence-level logic connection . 2 ) logical structure completion task on top of facts for regularization , aligning extracted fact with the original context . 3 ) logical path prediction to capture the logic relationship between facts . PROPHET is evaluated on a broad range of language understanding tasks : natural language inference , semantic similarity , machine reading comprehension , etc . Experimental results show that the fact is useful as the carrier for knowledge modeling , and the newly introduced pre-training tasks can improve PROPHET and achieves significant performance on downstream tasks.1 2 RELATED WORK . 2.1 PRE-TRAINED LANGUAGE MODELS IN NLP . Large pre-trained language models ( Devlin et al. , 2018 ; Liu et al. , 2019 ; Radford et al. , 2018 ) have brought dramatic empirical improvements on almost every NLP task in the past few years . A classical norm of pre-training is to train neural models on a large corpus with self-supervised pre-training objectives . 
`` Self-supervised '' means that the supervision provided in the training process is automatically generated from the raw text instead of being manually created . Designing effective criteria for language modeling is one of the major topics in training pre-trained models , since it decides how the model captures knowledge from large-scale unlabeled data . The most popular pre-training objective used today is masked language modeling ( MLM ) , initially used in BERT ( Devlin et al. , 2018 ) , which randomly masks out tokens , and the model is asked to recover them given the surrounding context . Recent studies have investigated diverse variants of denoising strategies ( Raffel et al. , 2020 ; Lewis et al. , 2020 ) , model architectures ( Yang et al. , 2019 ) , and auxiliary objectives ( Lan et al. , 2019 ; Joshi et al. , 2020 ) to improve model strength during pre-training . Although the existing techniques have shown effectiveness in capturing syntactic and semantic information after large-scale pre-training , they are sensitive to role reversal and struggle with pragmatic inference and role-based event knowledge ( Rogers et al. , 2020 ) , which are critical to the ultimate goal of complex reasoning that requires uncovering logical structures . However , it is difficult for pre-trained language models to capture the logical structure inherent in texts since logical supervision is rarely available during pre-training . Therefore , we are motivated to explicitly guide the model to capture such clues via our newly introduced self-supervised tasks . 2.2 REASONING ABILITY FOR PRE-TRAINED LANGUAGE MODELS . There is a lot of work in the research line of enhancing reasoning abilities in pre-trained language models via injecting knowledge . The existing approaches mainly design novel pre-training objectives and leverage abundant knowledge sources such as WordNet ( Miller , 1995 ) . Notably , ERNIE 3.0 ( Sun et al. 
, 2021 ) uses a broad range of pre-training objectives , from word-aware and structure-aware to knowledge-aware tasks , based on a 4TB corpus consisting of plain texts and a large-scale knowledge graph . WKLM ( Xiong et al. , 2019 ) replaces entity mentions in the document with other entities of the same type , and the objective is to distinguish the replaced entities from the original ones . KEPLER ( Wang et al. , 2021b ) encodes textual entity descriptions using embeddings from a PrLM to take full advantage of the abundant textual information . K-Adapter ( Wang et al. , 2020 ) designs neural adapters to distinguish the type of knowledge sources to capture various knowledge . Our proposed method differs from previous studies in three aspects . Firstly , our model does not require any external knowledge resources , unlike previous methods that use WordNet , WikiData , etc . We only use small-scale textual sources following standard PrLMs like BERT ( Devlin et al. , 2018 ) , along with an off-the-shelf dependency parser to extract facts . Secondly , previous works only consider triplet-level pre-training objectives . We propose a multi-granularity pre-training strategy , considering not only triplet-level information but also sentence-level and global knowledge to enhance logical reasoning . Finally , we propose a new training mechanism apart from masked language modeling ( MLM ) , hoping to shed light on more logic pre-training strategies in this research line . 1Our codes have been uploaded as supplemental material , and will be made public after the double-blind review period . 3 PRELIMINARIES . In this section , we introduce the concepts of fact and logical graph , which are the basis of PROPHET . We also describe how to extract facts for logical graph construction , with an example shown in Figure 1 . 3.1 FACT . Following Nakashole & Mitchell ( 2014 ) and Ouyang et al . 
( 2021 ) , we extract facts as triplets represented by T = { A1 , P , A2 } , where A1 and A2 are the arguments and P is the predicate between them . This can well represent a broad range of facts , reflecting the notions of `` who-did-what-to-whom '' , `` who-is-what '' , etc . We extract such facts in a syntactic way , which makes our approach generic and easy to apply . Given a document , we first split the document into multiple sentences . For each sentence , we conduct dependency parsing using StanfordCoreNLP ( Manning et al. , 2014 ) .2 From the analyzed dependencies , we basically consider verb phrases and some prepositions in the sentences as `` predicates '' , and then search for their corresponding actors and actees as the `` arguments '' . 3.2 LOGICAL GRAPH . A logical graph is an undirected ( but not necessarily connected ) graph that represents the logical dependency relations between components of facts . In logical graphs , nodes represent the arguments/predicates in the facts , and edges indicate whether two nodes are related in a fact . Such a structure can well unveil and organize the semantic information captured by facts . Besides , a logical graph supports reasoning over long-range dependencies by connecting arguments and their relations in different facts across different spans . We further show how to construct such graphs based on facts . In addition to the relations given in facts , we design another two types of edges based on identical mentions and coreference information . ( 1 ) There can be identical mentions in different sentences , resulting in repeated nodes in facts . We connect nodes corresponding to the same non-pronoun arguments by edges with edge type same . ( 2 ) We conduct coreference resolution on the context using an off-the-shelf model to identify arguments in facts that refer to the same entity.3 We add edges with type coref between them . The final logical graph is denoted as S = ( V , E ) , where V = Ai ∪ P and i ∈ { 1 , 2 } . 
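A minimal sketch of the logical-graph construction described above, in plain Python. The example fact triplets, pronoun list, and `(sentence_id, phrase)` node naming are illustrative assumptions of this sketch; in the paper's pipeline the triplets come from the dependency parser and the coref pairs from the neuralcoref model:

```python
from collections import defaultdict

def build_logical_graph(facts, coref_pairs=()):
    """Build an undirected logical graph from (A1, P, A2) fact triplets.

    Nodes are (sentence_id, phrase) pairs so that repeated mentions in
    different sentences stay distinct; edges carry a label that is one of
    'fact', 'same', or 'coref'.
    """
    edges = defaultdict(set)

    def connect(u, v, label):
        edges[u].add((v, label))
        edges[v].add((u, label))

    # 1) within-fact edges: each argument is linked to its predicate
    for sid, (a1, p, a2) in facts:
        connect((sid, a1), (sid, p), "fact")
        connect((sid, p), (sid, a2), "fact")

    # 2) 'same' edges between identical non-pronoun arguments across sentences
    pronouns = {"he", "she", "it", "they", "we", "i", "you"}
    mentions = defaultdict(list)
    for sid, (a1, _, a2) in facts:
        for arg in (a1, a2):
            if arg.lower() not in pronouns:
                mentions[arg].append((sid, arg))
    for nodes in mentions.values():
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                if nodes[i][0] != nodes[j][0]:
                    connect(nodes[i], nodes[j], "same")

    # 3) 'coref' edges come from an external coreference resolver
    for u, v in coref_pairs:
        connect(u, v, "coref")

    return edges
```

With two toy facts sharing the argument "the bill", the graph contains the two within-fact paths plus one cross-sentence `same` edge.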
2https : //stanfordnlp.github.io/CoreNLP/ , we also tried to use OpenIE directly ; however , the performance was not satisfactory . 3https : //github.com/huggingface/neuralcoref . 4 PROPHET . 4.1 MODEL ARCHITECTURE . We follow BERT ( Devlin et al. , 2018 ) and use a multi-layer bidirectional Transformer ( Vaswani et al. , 2017 ) as the model architecture of PROPHET . To keep the focus on the newly introduced techniques , we do not review the ubiquitous Transformer architecture in detail . We develop PROPHET using exactly the same model architecture as BERT-base , where the model consists of 12 transformer layers , with 768 hidden size , 12 attention heads , and 110M model parameters in total . 4.2 LOGIC-AWARE PRE-TRAINING TASKS . We describe the three pre-training tasks used for pre-training PROPHET in this section . Figure 2 is an illustration of PROPHET pre-training . The first task is logical connectives masking ( LCM ) , generalized from masked language modeling ( Devlin et al. , 2018 ) to logical connectives in order to learn sentence-level representations . The second task is logical structure completion ( LSC ) for learning the logical relationships inside a fact , where we first randomly mask arguments in facts and then predict those items . Finally , a logical path prediction ( LPP ) task is proposed for recognizing the logical relations of randomly selected node pairs . Logical Connective Masking Logical connective masking is an extension of the masked language modeling ( MLM ) pre-training objective of Devlin et al . ( 2018 ) , with a particular focus on connective indication tokens . We use the Penn Discourse TreeBank 2.0 ( PDTB ) ( Prasad et al. , 2008 ) to draw the logical relations among sentences . Specifically , PDTB 2.0 contains relations that are manually annotated on the 1-million-word Wall Street Journal ( WSJ ) corpus and are broadly characterized into `` Explicit '' and `` Implicit '' connectives . 
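As a quick sanity check of the "110M parameters" figure, the count can be reproduced from the stated sizes (12 layers, 768 hidden, 12 heads). The vocabulary size (30522), maximum position (512), segment types (2), FFN width (3072), and the pooler head are BERT-base defaults not stated in the text, so they are assumptions of this sketch:

```python
def bert_base_param_count(vocab=30522, max_pos=512, types=2,
                          hidden=768, layers=12, ffn=3072):
    """Approximate parameter count of a BERT-base style encoder."""
    ln = 2 * hidden  # a layernorm has one gain and one bias vector
    embeddings = (vocab + max_pos + types) * hidden + ln
    attention = 4 * (hidden * hidden + hidden)          # Q, K, V, output proj
    feed_forward = (hidden * ffn + ffn) + (ffn * hidden + hidden)
    per_layer = attention + feed_forward + 2 * ln       # two layernorms per layer
    pooler = hidden * hidden + hidden
    return embeddings + layers * per_layer + pooler

total = bert_base_param_count()
print(f"{total / 1e6:.1f}M parameters")  # about 109.5M, i.e. the ~110M in the text
```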
We use the `` Explicit '' type ( in total 100 such connectives ) , which appears explicitly in sentences , such as the discourse adverbial `` instead '' or the subordinating conjunction `` because '' . Taking all the identified connectives and some randomly sampled other tokens ( for a total of 15 % of the tokens of the original context ) , we replace them with a [ MASK ] token 80 % of the time , with a random token 10 % of the time , and leave them unchanged 10 % of the time . The MLM objective is to predict the original tokens of these sampled tokens , which has proven effective in previous works ( Devlin et al. , 2018 ; Liu et al. , 2019 ) . In this way , the model learns to recover the logical relations between two given sentences , which helps language understanding . The objective of this task is denoted as Lconn . Logical Structure Completion To align representations between the context and the extracted facts , we introduce a pre-training task of logical structure completion . The motivation here is to encourage the model to learn structure-aware representations that encode `` Who-did-What-to-Whom ''-like meanings for better language understanding . Specifically , we randomly select a proportion λ of the total facts ( λ = 20 % in this work ) from a given context . For each chosen fact , we either ask the model to complete `` Argument-Predicate- ? '' or `` Argument- ? -Argument '' ( the templates are selected with equal probability ) . We denote the blanks that need to be completed as ma and mp , for arguments and predicates respectively . In our implementation , this objective is kept the same as masked language modeling for simplicity , using the original loss following Devlin et al . ( 2018 ) : Lalign = − ∑i∈a∪p logD ( xi | ma , mp ) , ( 1 ) where D is the discriminator that predicts a token from a large vocabulary . 
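The LCM corruption step described above (all explicit connectives plus randomly sampled fillers up to 15% of the tokens, then BERT-style 80/10/10 replacement) can be sketched as follows. The connective set, stand-in vocabulary, and whitespace tokenization are toy assumptions; the real task uses the 100 explicit PDTB connectives and a subword tokenizer:

```python
import random

# tiny stand-in for the 100 explicit PDTB connectives
CONNECTIVES = {"because", "instead", "however", "therefore", "although"}
MASK, VOCAB = "[MASK]", ["the", "a", "cat", "ran", "government"]

def lcm_corrupt(tokens, mask_ratio=0.15, rng=random):
    """Select all connectives plus random fillers up to mask_ratio of the
    tokens, then apply 80/10/10 corruption. Returns the corrupted sequence
    and the prediction targets {position: original token}."""
    n_target = max(1, round(mask_ratio * len(tokens)))
    picked = [i for i, t in enumerate(tokens) if t.lower() in CONNECTIVES]
    others = [i for i in range(len(tokens)) if i not in picked]
    rng.shuffle(others)
    # keep every connective even if they already exceed the 15% budget
    picked = (picked + others)[:max(n_target, len(picked))]

    corrupted, targets = list(tokens), {}
    for i in picked:
        targets[i] = tokens[i]
        r = rng.random()
        if r < 0.8:
            corrupted[i] = MASK            # 80%: [MASK]
        elif r < 0.9:
            corrupted[i] = rng.choice(VOCAB)  # 10%: random token
        # else: 10% of the time the token is left unchanged
    return corrupted, targets
```

The model is then trained to predict `targets[i]` at each selected position `i`, exactly as in ordinary MLM.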
Logical Path Prediction To learn representations from the constructed logical graph , thus endowing the model with global logical reasoning ability , we propose the pre-training task of predicting whether there exists a path between two selected nodes in the logical graph . In this way , the model learns to look at logical relations across long distances of arguments and predicates in different facts . We randomly sample 20 % of the nodes from the logical graph to form a set V ′ , yielding C2|V ′| node pairs in total . We set a maximum number maxp of node pairs to predict . To avoid bias in the training process , we try to make sure that maxp/2 are positive samples and the rest are negative samples , thus balancing the positive-negative ratio . If the number of positive/negative samples is less than maxp/2 , we just keep the original pairs . Formally , the pre-training objective of this task is calculated as below , following Guo et al . ( 2020 ) : LPath = − ∑ ( vi , vj ) ∈V ′ [ δ log σ ( [ vi , vj ] ) + ( 1− δ ) log ( 1− σ ( [ vi , vj ] ) ) ] , ( 2 ) where δ is 1 when vi and vj have a connected path and 0 otherwise , and [ vi , vj ] denotes the concatenation of the representations of vi and vj . The final training objective is the sum of the three losses above : L = Lconn + Lalign + LPath . ( 3 ) | The paper proposes a new pre-training technique to induce a logical prior in the language model representation. Concretely, they propose pre-training on facts, represented as knowledge base triples (source, sink, relation) (knowledge-base completion) and link prediction, alongside traditional masked language modeling objective. Their proposed method achieves some improvement over downstream tasks, including a subset of GLUE benchmark and a couple of relation prediction datasets. | SP:2f65e47bd1d28c29907b5061ab6d3e11f9ff1c4b
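The LPP task above can be sketched without a neural model: a BFS over the logical graph decides the label δ, pairs are balanced toward maxp/2 positives, and loss (2) is a binary cross-entropy over per-pair scores. The scalar scores below stand in for the model's output on the concatenated representations [vi, vj], which is an assumption of this sketch:

```python
import math
import random
from collections import deque

def has_path(adj, u, v):
    """BFS check for a path between two nodes of the logical graph."""
    seen, frontier = {u}, deque([u])
    while frontier:
        node = frontier.popleft()
        if node == v:
            return True
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def sample_balanced_pairs(adj, nodes, max_pairs, rng=random):
    """Sample up to max_pairs labeled node pairs, aiming for max_pairs/2
    positives (pairs with a connecting path) and max_pairs/2 negatives."""
    pairs = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]]
    pos = [p for p in pairs if has_path(adj, *p)]
    neg = [p for p in pairs if not has_path(adj, *p)]
    rng.shuffle(pos)
    rng.shuffle(neg)
    half = max_pairs // 2
    chosen = pos[:half] + neg[:half]
    return [(u, v, 1.0 if has_path(adj, u, v) else 0.0) for u, v in chosen]

def lpp_loss(scored_pairs):
    """Binary cross-entropy of loss (2); each item is (score, delta), and
    sigma(score) is the predicted probability that a path exists."""
    sigma = lambda s: 1.0 / (1.0 + math.exp(-s))
    return -sum(d * math.log(sigma(s)) + (1 - d) * math.log(1 - sigma(s))
                for s, d in scored_pairs)
```

Confident scores with the right sign drive the loss toward zero; confidently wrong scores are penalized heavily.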
Distributionally Robust Recourse Action | Recourse actions aim to explain a particular algorithmic decision by showing one or multiple ways in which the instance could be modified to receive an alternate outcome . Existing recourse recommendations often assume that the machine learning models do not change over time . However , this assumption does not always hold in practice because of data distribution shifts , and in this case , the recourse actions may become invalid . To redress this shortcoming , we propose the Distributionally Robust Recourse Action framework , which generates a recourse action that has a high probability of being valid under a mixture of model shifts . We show that the robust recourse can be found efficiently using a projected gradient descent algorithm and we discuss several extensions of our framework . Numerical experiments with both synthetic and real-world datasets demonstrate the benefits of our proposed framework . 1 INTRODUCTION . Post-hoc explanations of machine learning models are useful for understanding and making reliable predictions in consequential domains such as loan approvals , college admission and healthcare . Recently , recourse is rising as an attractive tool to diagnose why a machine learning model has made a particular decision for a given instance . Recourses work by providing possible actions to modify a given instance to receive an alternate decision ( Ustun et al. , 2019 ) . Consider , for example , the case of loan approvals in which a credit application is rejected . The counterfactual will offer the reasons for rejection by showing what the application package should have been to get approved . A concrete example of a counterfactual in this case may be “ the monthly salary should be higher by $ 500 ” or “ 20 % of the current debt should be reduced ” . 
Recourses have a positive , forward-looking meaning : they list out the recourse actions that a person should implement so that they can get a more favorable outcome in the future . If a specific application can provide the negative outcomes with recourse actions , it can improve user engagement and boost interpretability at the same time ( Ustun et al. , 2019 ; Karimi et al. , 2021 ) . Explanations thus play a central role in the future development of human-centric machine learning . Despite its attractiveness , providing recourse for the negative instances is not a trivial task . For real-world implementation , designing a recourse needs to strike an intricate balance between conflicting criteria . First and foremost , a recourse action should be feasible : if the prescribed action is taken , then the prediction of the machine learning model should be flipped . At the same time , a framework for generating recourse should minimize the cost of taking recourse actions to avoid making a drastic change to the characteristics of the input instance . An algorithm for finding recourse must make changes only to features that are actionable , and should leave immutable features ( relatively ) unchanged . For example , we must consider date of birth as an immutable feature ; in contrast , we can consider salary or debt amount as actionable features . Various solutions have been proposed to provide recourses for a model prediction ( Karimi et al. , 2021 ; Stepin et al. , 2021 ; Artelt & Hammer , 2019 ) . For instance , Ustun et al . ( 2019 ) used an integer programming approach to obtain actionable recourses , and also provide a feasibility guarantee for linear models . Karimi et al . ( 2020 ) proposed a model-agnostic approach to generate nearest counterfactual explanations , focusing on structured data . Dandl et al . ( 2020 ) proposed a method which finds counterfactuals by solving a multi-objective optimization problem . Recently , Russell ( 2019 ) and Mothilal et al . 
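The three criteria above (flipping the prediction, bounded action cost, untouched immutable features) can be phrased as a single validity check. The Euclidean cost and the linear decision rule `theta^T x >= 0` used here are illustrative assumptions of this sketch, not the only choices the framework admits:

```python
import math

def is_valid_recourse(x0, x, theta, immutable_idx, delta):
    """Check the three recourse criteria for a linear model: the prediction
    flips to the favorable class, the (here: Euclidean) cost of moving from
    x0 to x stays below delta, and immutable features are unchanged."""
    flipped = sum(t * xi for t, xi in zip(theta, x)) >= 0
    cost = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, x0)))
    immutable_ok = all(x[i] == x0[i] for i in immutable_idx)
    return flipped and cost <= delta and immutable_ok
```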
( 2020 ) focus on finding a set of multiple diverse recourse actions , where the diversity is imposed by a rule-based approach or by internalizing a determinantal point process cost in the objective function . These aforementioned approaches make a fundamental assumption that the machine learning model does not change over time . However , the dire reality suggests that this assumption rarely holds . In fact , data shifts are so common nowadays in machine learning that they have sparked the emerging fields of domain generalization and domain adaptation . Organizations usually retrain models as a response to data shifts , and this induces corresponding shifts in the machine learning models ' parameters , which in turn causes serious concerns for the feasibility of the recourse action in the future ( Rawal et al. , 2021 ) . In fact , all of the aforementioned approaches design the action to be feasible only with the current model parameters , and they provide no feasibility guarantee for the future parameters . If a recourse action fails to generate a favorable outcome in the future , then the recourse action may become less beneficial ( Venkatasubramanian & Alfano , 2020 ) , the pledge of a brighter outcome is shattered , and the trust in the machine learning system is lost ( Rudin , 2019 ; Ribeiro et al. , 2016 ) . To tackle this challenge , Upadhyay et al . ( 2021 ) proposed ROAR , a framework for generating instance-level recourses ( counterfactual explanations ) that are robust to shifts in the underlying predictive model . ROAR uses a robust optimization approach that hedges against an uncertainty set containing plausible values of the future model parameters . However , it is well-known that robust optimization solutions can be overly conservative because they may hedge against a pathological parameter in the uncertainty set . 
A promising approach that can promote robustness , while at the same time preventing over-conservatism , is the distributionally robust optimization framework ( El Ghaoui et al. , 2003 ; Delage & Ye , 2010 ; Rahimian & Mehrotra , 2019 ; Bertsimas et al. , 2018 ) . This framework models the future model parameters as random variables whose underlying distribution is unknown , but is likely to be contained in an ambiguity set . The solution is designed to counter the worst-case distribution in the ambiguity set in a min-max sense . Distributionally robust optimization is also gaining popularity in many estimation and prediction tasks in machine learning ( Namkoong & Duchi , 2017 ; Kuhn et al. , 2019 ) . Contributions . This paper combines ideas and techniques from two principal branches of explainable artificial intelligence , counterfactual explanations and robustness , in order to resolve the recourse problem under uncertainty . Concretely , our main contributions are the following : 1 . We propose the framework of Distributionally Robust Recourse Action ( DiRRAc ) for designing a recourse action that is robust to mixture shifts of the model parameters . Our DiRRAc maximizes the probability that the action is feasible with respect to a mixture shift of model parameters , while at the same time capping the action in the neighborhood of the input instance . Moreover , the DiRRAc model also hedges against the misspecification of the nominal distribution using a min-max form with a mixture ambiguity set prescribed by moment information . 2 . We reformulate the DiRRAc problem into a finite-dimensional optimization problem with an explicit objective function . We also provide a projected gradient descent algorithm to solve the resulting reformulation with convergence guarantees . 3 . 
We extend our DiRRAc framework along several axes to handle mixture weight uncertainty , to minimize the worst-case component probability of receiving an unfavorable outcome , and also to inject Gaussian parametric information . We first describe the recourse action problem with mixture shifts in Section 2 . In Section 3 , we present our proposed DiRRAc framework , its reformulation and the numerical routine for solving it . The extension to the parametric Gaussian setting is subsequently discussed in Section 4 . Section 5 reports the numerical experiments showing the benefits of the DiRRAc framework and its extensions . Notations . For each integer K , we have [ K ] = { 1 , . . . , K } . We use Sd+ ( Sd++ ) to denote the space of symmetric positive semidefinite ( definite , respectively ) matrices . For any A ∈ Rd×d , the trace operator is defined as Tr [ A ] = ∑di=1 Aii . We write Qk ∼ ( µk , Σk ) to denote that the distribution Qk has mean vector µk and covariance matrix Σk . If additionally Qk is Gaussian , we write Qk ∼ N ( µk , Σk ) . With a slight abuse of notation , Q ∼ ( Qk , pk ) k∈ [ K ] means Q is a mixture of K component distributions , where the k-th component has weight pk and distribution Qk . 2 RECOURSE ACTION UNDER MIXTURE SHIFTS . We consider a binary classification setting with labels Y = { 0 , 1 } , where 0 represents the unfavorable outcome while 1 denotes the favorable one . The covariate space is Rd , and any linear classifier Cθ : Rd → Y characterized by the d-dimensional parameter θ is of the form Cθ ( x ) = 1 if θ⊤x ≥ 0 , and 0 otherwise . Note that the bias term can be internalized into θ by adding an extra dimension , and thus it is omitted . Suppose that at this moment ( t = 0 ) , the current classifier is parametrized by θ0 , and we are given an input instance x0 ∈ Rd with unfavorable outcome , that is , Cθ0 ( x0 ) = 0 . 
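The classifier Cθ and the bias-internalization trick above can be written out directly; a minimal sketch:

```python
def classify(theta, x):
    """C_theta(x) = 1 if theta^T x >= 0 else 0 (favorable vs unfavorable)."""
    return 1 if sum(t * xi for t, xi in zip(theta, x)) >= 0 else 0

def internalize_bias(theta, bias):
    """Fold a bias term into theta by appending it as an extra dimension;
    inputs must then be augmented with a constant 1 feature."""
    return list(theta) + [bias]

def augment(x):
    """Append the constant 1 feature that pairs with the internalized bias."""
    return list(x) + [1.0]
```

With `theta = [2, -1]` and bias `-1`, the augmented classifier reproduces the affine rule `2*x1 - x2 - 1 >= 0`.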
One period of time from now ( t = 1 ) , the parameters of the predictive model will change stochastically , and are represented by a d-dimensional random vector θ̃ . This paper focuses on finding a recourse action x which is reasonably close to the instance x0 , and at the same time , has a high probability of receiving a favorable outcome in the future . Figure 1 gives a bird 's eye view of the setup . To measure the closeness between the action x and the input x0 , we assume that the covariate space is endowed with a non-negative , continuous cost function c. In addition , suppose temporarily that θ̃ follows a distribution P̂ . Because maximizing the probability of the favorable outcome is equivalent to minimizing the probability of the unfavorable outcome , the recourse can be found by solving minx P̂ ( Cθ̃ ( x ) = 0 ) s.t . x ∈ X , c ( x , x0 ) ≤ δ . ( 1 ) The parameter δ ≥ 0 in ( 1 ) governs how far a recourse action can be from the input instance x0 . Note that we constrain x in a set X which captures operational constraints , for example , the highest education of a credit applicant should not be decreasing over time . In this paper , we model the random vector θ̃ using a finite mixture of distributions with K components , where the mixture weights p̂ satisfy ∑k∈ [ K ] p̂k = 1 . Each component in the mixture represents one specific type of data shift : the weights p̂ reflect the proportions of the shift types while the component distribution P̂k represents the ( conditional ) distribution of the future model parameters under the k-th shift . Further information on mixture distributions and their applications in machine learning can be found in Murphy ( 2012 , §3.5 ) . If each P̂k is a Gaussian distribution N ( θ̂k , Σ̂k ) , then P̂ is a mixture of Gaussian distributions . 
The objective of problem ( 1 ) can be expressed as P̂ ( Cθ̃ ( x ) = 0 ) = ∑k∈ [ K ] p̂k P̂k ( Cθ̃ ( x ) = 0 ) = ∑k∈ [ K ] p̂k Φ ( −x⊤θ̂k / √ ( x⊤Σ̂kx ) ) , where the first equality follows from the law of conditional probability , and Φ is the cumulative distribution function of a standard Gaussian distribution . Under the Gaussian assumption , we can solve ( 1 ) using a projected gradient descent type of algorithm ( Boyd & Vandenberghe , 2004 ) . Remark 2.1 ( Nonlinear models ) . Our analysis focuses on linear classifiers , which is a common setup in the literature ( Upadhyay et al. , 2021 ; Ustun et al. , 2019 ; Rawal et al. , 2021 ; Karimi et al. , 2020 ; Wachter et al. , 2018 ; Ribeiro et al. , 2016 ) . To extend to nonlinear classifiers , we can follow a similar approach as in Rawal & Lakkaraju ( 2020 ) and Upadhyay et al . ( 2021 ) by first using LIME ( Ribeiro et al. , 2016 ) to approximate the nonlinear classifier locally with an interpretable linear model , and then applying our framework . | The paper provides a framework for recourse (i.e. counterfactual explanations) that is robust to shifts in the model. They formulate the robustified recourse setup as a min-max optimization problem, where the max is over a neighborhood around the distribution over model parameters. The model parameters are drawn from a mixture of K distributions, so that the neighborhood is specified by Gelbrich distance on each component. They propose a finite-dimensional version of the robustified optimization problem, which can be optimized using projected gradient descent. They evaluate their approach on the German credit dataset, the Small Business Administration dataset, and the Student performance dataset, each of which demonstrates a different type of data distribution shift. | SP:a08c91391aee4c0a73b034f1e56c2852f0babd14
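Under the Gaussian-mixture assumption, the objective sum_k p_k Φ(-x^T θ_k / sqrt(x^T Σ_k x)) and a projected-gradient solver for problem (1) can be sketched in dependency-free Python. The Euclidean cost ball, finite-difference gradient, and step size are illustrative choices of this sketch, not the paper's exact algorithm:

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def quad(x, S):
    """x^T S x for a covariance matrix S given as a list of rows."""
    n = len(x)
    return sum(x[i] * S[i][j] * x[j] for i in range(n) for j in range(n))

def unfavorable_prob(x, weights, means, covs):
    """sum_k p_k * Phi(-x^T theta_k / sqrt(x^T Sigma_k x))."""
    return sum(p * phi(-dot(x, m) / math.sqrt(quad(x, S)))
               for p, m, S in zip(weights, means, covs))

def project_ball(x, x0, delta):
    """Euclidean projection of x onto the ball ||x - x0|| <= delta."""
    d = [a - b for a, b in zip(x, x0)]
    n = math.sqrt(dot(d, d))
    if n <= delta:
        return list(x)
    return [b + delta * di / n for b, di in zip(x0, d)]

def projected_gradient_descent(x0, weights, means, covs, delta,
                               lr=0.1, steps=200, eps=1e-5):
    """Minimize the unfavorable probability over the cost ball around x0."""
    f = lambda z: unfavorable_prob(z, weights, means, covs)
    x = list(x0)
    for _ in range(steps):
        grad = []
        for i in range(len(x)):  # central finite differences, coordinate-wise
            xp, xm = list(x), list(x)
            xp[i] += eps
            xm[i] -= eps
            grad.append((f(xp) - f(xm)) / (2 * eps))
        x = project_ball([xi - lr * gi for xi, gi in zip(x, grad)], x0, delta)
    return x
```

Running the solver from an unfavorable instance moves the action toward the boundary of the delta-ball in the direction that raises the probability of a favorable outcome.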
Distributionally Robust Recourse Action | Recourse actions aim to explain a particular algorithmic decision by showing one or multiple ways in which the instance could be modified to receive an alternate outcome . Existing recourse recommendations often assume that the machine learning models do not change over time . However , this assumption does not always hold in practice because of data distribution shifts , and in this case , the recourse actions may become invalid . To redress this shortcoming , we propose the Distributionally Robust Recourse Action framework , which generates a recourse action that has high probability of being valid under a mixture of model shifts . We show that the robust recourse can be found efficiently using a projected gradient descent algorithm and we discuss several extensions of our framework . Numerical experiments with both synthetic and real-world datasets demonstrate the benefits of our proposed framework . 1 INTRODUCTION . Post-hoc explanations of machine learning models are useful for understanding and making reliable predictions in consequential domains such as loan approvals , college admission and healthcare . Recently , recourse is rising as an attractive tool do diagnose why the machine learning models have made a particular decision for a given instance . Recourse work by providing possible actions to modify a given instance to receive an alternate decision ( Ustun et al. , 2019 ) . Consider , for example , the case of loan approvals in which a credit application is rejected . The counterfactual will offer the reasons for rejection by showing what the application package should have been to get approved . A concrete example of a counterfactual in this case may be “ the monthly salary should be higher by $ 500 ” or “ 20 % of the current debt should be reduced ” . 
Recourses have a positive , forward-looking meaning : they list out the recourse actions that a person should implement so that they can get a more favorable outcome in the future . If a specific application can provide the negative outcomes with recourse actions , it can improve the user engagement and boost the interpretability at the same time ( Ustun et al. , 2019 ; Karimi et al. , 2021 ) . Explanations thus play a central role in the future development of human-centric machine learning . Despite its attractiveness , providing recourse for the negative instances is not a trivial task . For realworld implementation , designing a recourse needs to strike an intricate balance between conflicting criteria . First and foremost , a recourse action should be feasible : if the prescribed action is taken , then the prediction of a machine learning model should be flipped . At the same time , a framework for generating recourse should minimize the cost to take recourse actions to avoid making a drastic change to the characteristics of the input instance . An algorithm for finding recourse must make change to only features that are actionable , and should leave immutable features ( relatively ) unchanged . For example , we must consider date of birth as an immutable feature ; in contrast , we can consider salary or debt amount as actionable features . Various solutions has been proposed to provide recourses for a model prediction ( Karimi et al. , 2021 ; Stepin et al. , 2021 ; Artelt & Hammer , 2019 ) . For instance , Ustun et al . ( 2019 ) used an integer programming approach to obtain actionable recourses , and also provide a feasibility guarantee for linear models . Karimi et al . ( 2020 ) proposed a model-agnostic approach to generate nearest counterfactual explanations and focus on structured data . Dandl et al . ( 2020 ) proposed a method which finds counterfactual by solving a multi-objective optimization problem . Recently , Russell ( 2019 ) and Mothilal et al . 
( 2020 ) focus on finding a set of multiple diverse recourse actions , where the diversity is imposed by a rule-based approach or by internalize a determinant point process cost in the objective function . These aforementioned approaches make a fundamental assumption that the machine learning model does not change over time . However , the dire reality suggests that this assumption rarely holds . In fact , data shifts are so common nowadays in machine learning that they have sparkled the emerging field of domain generalization and domain adaptation . Organizations usually retrain models as a response to data shifts and this induces corresponding shifts in the machine learning models parameters , which in turns cause serious concerns for the feasibility of the recourse action in the future ( Rawal et al. , 2021 ) . In fact , all of the aforementioned approaches design the action which is feasible only with the current model parameters , and they provide no feasibility guarantee for the future parameters . If a recourse action fails to generate a favorable outcome in the future , then the recourse action may become less beneficial ( Venkatasubramanian & Alfano , 2020 ) , the pledge of a brighter outcome is shattered , and the trust on the machine learning system is lost ( Rudin , 2019 ; Ribeiro et al. , 2016 ) . To tackle this challenge , Upadhyay et al . ( 2021 ) proposed ROAR , a framework for generating instance level recourses ( counterfactual explanations ) that are robust to shifts in the underlying predictive model . ROAR used a robust optimization approach that hedges against an uncertainty set containing plausible values of the future model parameters . However , it is well-known that robust optimization solutions can be overly conservative because they may hedge against a pathological parameter in the uncertainty set . 
A promising approach that can promote robustness , while at the same time prevent from over-conservatism is the distributionally robust optimization framework ( El Ghaoui et al. , 2003 ; Delage & Ye , 2010 ; Rahimian & Mehrotra , 2019 ; Bertsimas et al. , 2018 ) . This framework models the future model parameters as random variables whose underlying distribution is unknown , but is likely to be contained in an ambiguity set . The solution is designed to counter the worst-case distribution in the ambiguity set in a min-max sense . Distributionally robust optimization is also gaining popularity in many estimation and prediction tasks in machine learning ( Namkoong & Duchi , 2017 ; Kuhn et al. , 2019 ) . Contributions . This paper combines ideas and techniques from two principal branches of explainable artificial intelligence : counterfactual explanations and robustness , in order to resolve the recourse problem under uncertainty . Concretely , our main contributions are the following : 1 . We propose the framework of Distributionally Robust Recourse Action ( DiRRAc ) for designing a recourse action that is robust to mixture shifts of the model parameters . Our DiRRAc maximizes the probability that the action is feasible with respect to a mixture shift of model parameters , while at the same time cap the action in the neighborhood of the input instance . Moreover , the DiRRAc model also hedges against the misspecification of the nominal distribution using a min-max form with a mixture ambiguity set prescribed by moment information . 2 . We reformulate the DiRRAc problem into a finite-dimensional optimization problem with an explicit objective function . We also provide a projected gradient descent to solve the resulting reformulation with convergence guarantees . 3 . 
We extend our DiRRAc framework along several axes to handle mixture weight uncertainty , to minimize the worst-case component probability of receiving an unfavorable outcome , and also to inject the Gaussian parametric information . We first describe the recourse action problem with mixture shift in Section 2 . In Section 3 , we present our proposed DiRRAc framework , its reformulation and the numerical routine for solving it . The extension to the parametric Gaussian setting will be subsequently discussed in Section 4 . Section 5 reports the numerical experiments showing the benefits of the DiRRAc framework and its extensions . Notations . For each integer K , we have [ K ] = { 1 , . . . , K } . We use S^d_+ ( S^d_{++} ) to denote the space of symmetric positive semidefinite ( definite , respectively ) matrices . For any A ∈ R^{d×d} , the trace operator is defined as Tr [ A ] = ∑_{i=1}^d A_ii . We write Q_k ∼ ( µ_k , Σ_k ) to denote that the distribution Q_k has mean vector µ_k and covariance matrix Σ_k . If additionally Q_k is Gaussian , we write Q_k ∼ N ( µ_k , Σ_k ) . With a slight abuse of notation , Q ∼ ( Q_k , p_k )_{k ∈ [ K ]} means Q is a mixture of K component distributions , where the k-th component has weight p_k and distribution Q_k . 2 RECOURSE ACTION UNDER MIXTURE SHIFTS . We consider a binary classification setting with label space Y = { 0 , 1 } , where 0 represents the unfavorable outcome while 1 denotes the favorable one . The covariate space is R^d , and any linear classifier C_θ : R^d → Y characterized by the d-dimensional parameter θ is of the form C_θ ( x ) = 1 if θ^⊤ x ≥ 0 , and 0 otherwise . Note that the bias term can be internalized into θ by adding an extra dimension , and thus it is omitted . Suppose that at this moment ( t = 0 ) , the current classifier is parametrized by θ_0 , and we are given an input instance x_0 ∈ R^d with unfavorable outcome , that is , C_{θ_0} ( x_0 ) = 0 .
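As a concrete illustration , the linear classifier C_θ and the bias-internalization trick above can be sketched in a few lines of numpy ; the specific vectors and the bias value below are made up for illustration :

```python
import numpy as np

def classify(theta, x):
    """Linear classifier C_theta(x): favorable outcome (1) iff theta^T x >= 0."""
    return 1 if float(theta @ x) >= 0.0 else 0

def internalize_bias(theta, b):
    """Fold a bias term b into theta by appending it as an extra coordinate;
    inputs x must then be augmented with a constant-1 feature."""
    return np.append(theta, b)

def augment(x):
    # append the constant feature that matches the internalized bias
    return np.append(x, 1.0)

# A hypothetical 2-d classifier with bias b = -1.
theta, b = np.array([1.0, 2.0]), -1.0
theta_aug = internalize_bias(theta, b)
x0 = np.array([0.2, 0.3])  # theta^T x0 + b = 0.8 - 1 = -0.2 < 0
print(classify(theta_aug, augment(x0)))  # unfavorable outcome: 0
```

With the bias internalized , the decision rule is exactly the homogeneous form C_θ ( x ) = 1{ θ^⊤ x ≥ 0 } used in the text .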
One period of time from now ( t = 1 ) , the parameters of the predictive model will change stochastically , and are represented by a d-dimensional random vector θ̃ . This paper focuses on finding a recourse action x which is reasonably close to the instance x_0 and , at the same time , has a high probability of receiving a favorable outcome in the future . Figure 1 gives a bird ’ s eye view of the setup . To measure the closeness between the action x and the input x_0 , we assume that the covariate space is endowed with a non-negative , continuous cost function c . In addition , suppose temporarily that θ̃ follows a distribution P̂ . Because maximizing the probability of the favorable outcome is equivalent to minimizing the probability of the unfavorable outcome , the recourse can be found by solving min_x P̂ ( C_θ̃ ( x ) = 0 ) s.t. x ∈ X , c ( x , x_0 ) ≤ δ . ( 1 ) The parameter δ ≥ 0 in ( 1 ) governs how far a recourse action can be from the input instance x_0 . Note that we constrain x in a set X which captures operational constraints , for example , the highest education of a credit applicant should not decrease over time . In this paper , we model the random vector θ̃ using a finite mixture of distributions with K components , where the mixture weights p̂ satisfy ∑_{k ∈ [ K ]} p̂_k = 1 . Each component in the mixture represents one specific type of data shift : the weights p̂ reflect the proportions of the shift types , while the component distribution P̂_k represents the ( conditional ) distribution of the future model parameters under the k-th shift . Further information on mixture distributions and their applications in machine learning can be found in Murphy ( 2012 , §3.5 ) . If each P̂_k is a Gaussian distribution N ( θ̂_k , Σ̂_k ) , then P̂ is a mixture of Gaussian distributions .
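A minimal numpy sketch of solving problem ( 1 ) under the Gaussian-mixture model by projected gradient descent is given below . It relies on the closed-form expression ∑_k p̂_k Φ ( −x^⊤ θ̂_k / √( x^⊤ Σ̂_k x ) ) for the objective , and it assumes for illustration that c is the Euclidean distance , that X = R^d , and that a finite-difference gradient is accurate enough ; all numerical values are made up :

```python
import numpy as np
from math import erf, sqrt

def norm_cdf(t):
    # standard Gaussian CDF, Phi(t)
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

def unfavorable_prob(x, weights, thetas, Sigmas):
    # P(C(x) = 0) under a K-component Gaussian mixture of future parameters
    return sum(p * norm_cdf(-(x @ th) / np.sqrt(x @ S @ x))
               for p, th, S in zip(weights, thetas, Sigmas))

def grad_fd(f, x, eps=1e-6):
    # finite-difference gradient (a cheap stand-in for the analytic gradient)
    f0, g = f(x), np.zeros_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        g[i] = (f(xp) - f0) / eps
    return g

def project_ball(x, x0, delta):
    # projection onto {x : ||x - x0||_2 <= delta}, i.e. c(x, x0) <= delta
    d = x - x0
    nrm = np.linalg.norm(d)
    return x if nrm <= delta else x0 + delta * d / nrm

def recourse_pgd(x0, delta, weights, thetas, Sigmas, lr=0.05, steps=500):
    f = lambda x: unfavorable_prob(x, weights, thetas, Sigmas)
    x = x0.copy()
    for _ in range(steps):
        x = project_ball(x - lr * grad_fd(f, x), x0, delta)
    return x

# Illustrative 2-d instance currently receiving the unfavorable outcome.
weights = [0.6, 0.4]
thetas = [np.array([1.0, -0.5]), np.array([0.8, -0.7])]
Sigmas = [np.eye(2), np.eye(2)]
x0, delta = np.array([-0.1, 0.4]), 2.0
x_star = recourse_pgd(x0, delta, weights, thetas, Sigmas)
f0 = unfavorable_prob(x0, weights, thetas, Sigmas)
f1 = unfavorable_prob(x_star, weights, thetas, Sigmas)
```

Running this , the probability of the unfavorable outcome at x_star drops well below its value at x_0 while the action stays inside the δ-ball , which is the behavior problem ( 1 ) asks for .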
The objective of problem ( 1 ) can be expressed as P̂ ( C_θ̃ ( x ) = 0 ) = ∑_{k ∈ [ K ]} p̂_k P̂_k ( C_θ̃ ( x ) = 0 ) = ∑_{k ∈ [ K ]} p̂_k Φ ( −x^⊤ θ̂_k / √( x^⊤ Σ̂_k x ) ) , where the first equality follows from the law of total probability , and Φ is the cumulative distribution function of a standard Gaussian distribution . Under the Gaussian assumption , we can solve ( 1 ) using a projected gradient descent type of algorithm ( Boyd & Vandenberghe , 2004 ) . Remark 2.1 ( Nonlinear models ) . Our analysis focuses on linear classifiers , which is a common setup in the literature ( Upadhyay et al. , 2021 ; Ustun et al. , 2019 ; Rawal et al. , 2021 ; Karimi et al. , 2020 ; Wachter et al. , 2018 ; Ribeiro et al. , 2016 ) . To extend to nonlinear classifiers , we can follow a similar approach as in Rawal & Lakkaraju ( 2020 ) and Upadhyay et al . ( 2021 ) by first using LIME ( Ribeiro et al. , 2016 ) to approximate the nonlinear classifier locally with an interpretable linear model , and then applying our framework . | This paper studies the problem of recourse actions (a.k.a. counterfactual explanations) while considering data distribution shifts or model shifts. The proposed Distributionally Robust Recourse Action (DiRRAc) framework has the ability to generate valid recourse actions when model parameters shift over time. DiRRAc adopts the distributionally robust optimization technique and the paper proposes a projected gradient descent method to solve the optimization problem. Experiments are conducted with both synthetic and real-world data, and the results show that DiRRAc can generate recourse actions with higher validity than two existing methods. | SP:a08c91391aee4c0a73b034f1e56c2852f0babd14 |
Revisiting Layer-wise Sampling in Fast Training for Graph Convolutional Networks | To accelerate the training of graph convolutional networks ( GCNs ) , many sampling-based methods have been developed for approximating the embedding aggregation . Among them , a layer-wise approach recursively performs importance sampling to select neighbors jointly for existing nodes in each layer . This paper revisits the approach from a matrix approximation perspective . We identify two issues in the existing layer-wise sampling methods : sub-optimal sampling probabilities and the approximation bias induced by sampling without replacement . We thus propose remedies to address these issues . The improvements are demonstrated by extensive analyses and experiments on common benchmarks . 1 INTRODUCTION . Graph Convolutional Networks ( Kipf & Welling , 2017 ) are popular methods for learning the representation of nodes . However , it is computationally challenging to train a GCN over large-scale graphs due to the inter-dependence of nodes in a graph . In the mini-batch training of an L-layer GCN , the computation of embeddings involves not only the batch nodes but also the batch nodes ’ L-hop neighbors , which is known as the phenomenon of “ neighbor explosion ” ( Zeng et al. , 2019 ) or “ neighbor expansion ” ( Chen et al. , 2018a ; Huang et al. , 2018 ) . To alleviate such a computation issue for large graphs , sampling-based methods are proposed to accelerate the training and reduce the memory cost . These approaches can be categorized as node-wise sampling approaches ( Hamilton et al. , 2017 ; Chen et al. , 2018a ) , subgraph sampling approaches ( Zeng et al. , 2019 ; Chiang et al. , 2019 ; Cong et al. , 2020 ) , and layer-wise sampling approaches ( Chen et al. , 2018b ; Huang et al. , 2018 ; Zou et al. , 2019 ) . We focus on layer-wise sampling in this work , which enjoys efficiency and variance reduction by sampling columns of the renormalized Laplacian matrix in each layer .
This paper is a study of the existing sampling schemes in layer-wise sampling methods . We identify two potential drawbacks in the common practice for layer-wise sampling ( Chen et al. , 2018b ; Zou et al. , 2019 ) . First , the sampling probabilities currently used are sub-optimal , since a core assumption in FastGCN and LADIES does not hold in many common graph benchmarks , such as Reddit ( Hamilton et al. , 2017 ) and OGB ( Hu et al. , 2020 ) . Second , the previous implementations of the layer-wise sampling methods slightly deviate from their theoretical results and introduce bias in the estimation due to the usage of sampling without replacement . Realizing these two issues , we accordingly propose remedies : new sampling probabilities and a debiasing algorithm . The improvements of the proposed methods are demonstrated by extensive experiments evaluating the matrix approximation error and the prediction accuracy on large-scale benchmarks , along with some theoretical analyses . To the best of our knowledge , our result is the first to recognize and resolve the issues with the default assumption and the practical implementation of layer-wise sampling methods for GCN . Once these sub-optimal practices are addressed , we observe that the GCN models consistently converge faster in training and usually enjoy a higher prediction accuracy . We believe the proposed methods can more generally improve the training of GCNs as well ; e.g. , the same strategy can allow node-wise sampling methods to adopt sampling without replacement and further improve the approximation accuracy . Moreover , our discussion on the bias induced by sampling without replacement is not limited to GCN , and the debiasing algorithm we develop can contribute to other sampling-based machine learning models beyond layer-wise sampling . 1.1 BACKGROUND AND RELATED WORK .
GCN Graph Convolutional Networks ( GCNs , Kipf & Welling ( 2017 ) ) effectively incorporate the technique of convolution filters into the graph domain ( Wu et al. , 2020 ; Bronstein et al. , 2017 ) . Viewed as an approximation of the spectral graph convolutions ( Bruna et al. , 2014 ; Defferrard et al. , 2016 ) , GCN has achieved great success in learning tasks such as node classification and link prediction , with applications ranging from recommender systems ( Ying et al. , 2018 ) and traffic prediction ( Cui et al. , 2019 ; Rahimi et al. , 2018 ) to knowledge graphs ( Schlichtkrull et al. , 2018 ) . Sampling Based GCN Training To name a few sampling schemes , GraphSAGE ( Hamilton et al. , 2017 ) first introduces the “ node-wise ” neighbor sampling scheme , where a fixed number of neighbors are uniformly and independently sampled for each node in every layer . To reduce the variance in node-wise sampling training , VR-GCN ( Chen et al. , 2018a ) applies a control variate approach with historical activations . Instead of sampling for each node , “ layer-wise ” sampling is a more efficient approach : a joint sampling scheme for all the existing nodes in each layer , so that these nodes can share the sampled neighboring nodes . FastGCN ( Chen et al. , 2018b ) first introduces this scheme with importance sampling . AS-GCN ( Huang et al. , 2018 ) proposes an alternative sampling probability for layer-wise sampling by approximating the hidden layer in the sampling procedure . Then Zou et al . ( 2019 ) propose a layer-dependent importance sampling scheme ( LADIES ) to further reduce the variance in training . This alleviates the issue of empty rows in the sampled adjacency matrix for FastGCN . For the “ subgraph ” approach , ClusterGCN ( Chiang et al. , 2019 ) samples a dense subgraph associated with the batch nodes by a graph clustering algorithm ; GraphSAINT ( Zeng et al. , 2019 ) introduces normalization and variance reduction in subgraph sampling .
2 NOTATIONS AND PRELIMINARIES . 2.1 GRAPH CONVOLUTIONAL NETWORKS . The GCN architecture for semi-supervised node classification is introduced by Kipf & Welling ( 2017 ) . Suppose we have an undirected graph G = ( V , E ) , where V is the set of n nodes and E is the set of E edges . Denote node i in V as v_i , where i ∈ [ n ] is the index of nodes in the graph and [ n ] denotes the set { 1 , 2 , ... , n } . Each node v_i ∈ V is associated with a feature vector x_i ∈ R^p and a label vector y_i ∈ R^q . Though we can observe the feature of every node in V and every edge in E , i.e . the n × n adjacency matrix A , we are only able to observe the labels of a subset of nodes Vtrain ⊂ V . Thus , we need to predict the labels for the remaining nodes in V \ Vtrain , and it becomes a semi-supervised learning task . A graph convolution layer is defined as : Z^{ ( l+1 ) } = P H^{ ( l ) } W^{ ( l ) } , H^{ ( l ) } = σ ( Z^{ ( l ) } ) , ( 1 ) where σ is an activation function and P is obtained from applying normalization to the graph adjacency matrix A ; H^{ ( l ) } is the embedding matrix of the graph nodes in the l-th layer , and W^{ ( l ) } is the parameter matrix of the same layer . In particular , H^{ ( 0 ) } is the n × p feature matrix . For mini-batch training , the training loss for an L-layer GCN is defined as ( 1 / |Vbatch| ) ∑_{ v_i ∈ Vbatch } ℓ ( y_i , z_i^{ ( L ) } ) , where ℓ is the loss function and the batch Vbatch is a subset of Vtrain at each iteration ; z_i^{ ( L ) } is the i-th row of Z^{ ( L ) } , and | · | denotes the cardinality of a set . In this paper , we set P = D̃^{ −1/2 } ( A + I ) D̃^{ −1/2 } , where D̃ is a diagonal matrix with D̃_ii = 1 + ∑_j A_ij . The matrix P is constructed as a renormalized Laplacian matrix to help alleviate the overfitting and exploding/vanishing gradient issues ( Kipf & Welling , 2017 ) , and is used by Kipf & Welling ( 2017 ) ; Chen et al . ( 2018a ) ; Cong et al . ( 2020 ) . 2.2 LAYER-WISE SAMPLING .
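Before turning to the sampling schemes , the renormalized matrix P = D̃^{ −1/2 } ( A + I ) D̃^{ −1/2 } and the layer update in equation ( 1 ) above can be sketched as follows ; this is a toy dense-numpy illustration on a made-up 3-node path graph , not an efficient sparse implementation :

```python
import numpy as np

def renormalized_laplacian(A):
    """P = D~^{-1/2} (A + I) D~^{-1/2}, where D~_ii = 1 + sum_j A_ij."""
    A_hat = A + np.eye(A.shape[0])          # A + I (self-loops)
    d = A_hat.sum(axis=1)                   # diagonal of D~
    d_inv_sqrt = 1.0 / np.sqrt(d)
    return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

def gcn_layer(P, H, W):
    """One graph convolution layer: Z = P H W, followed by a ReLU activation."""
    Z = P @ H @ W
    return np.maximum(Z, 0.0)

# Toy graph: a path on 3 nodes, with 2-d features and 2 hidden units.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
P = renormalized_laplacian(A)
rng = np.random.default_rng(0)
H0 = rng.normal(size=(3, 2))   # feature matrix H^(0)
W0 = rng.normal(size=(2, 2))   # layer parameters W^(0)
H1 = gcn_layer(P, H0, W0)      # embeddings after one layer
```

For the end nodes of the path , D̃_ii = 2 , so e.g . P_{00} = 1/2 , matching the formula above .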
To address the “ neighbor explosion ” issue for graph neural networks , sampling methods are integrated into the stochastic training . Motivated by the idea of approximating the matrix P H^{ ( l ) } in ( 1 ) , FastGCN ( Chen et al. , 2018b ) applies an importance-sampling-based strategy . Instead of individually sampling neighbors for each node in the l-th layer , they sample a set of s neighbors S^{ ( l ) } from V with importance sampling probabilities p_i ∝ ∑_{ j=1 }^n P_ji^2 and ∑_i p_i = 1 . For the ( l − 1 ) -th layer , they naturally set V^{ ( l−1 ) } = S^{ ( l ) } . LADIES ( Zou et al. , 2019 ) improves the importance sampling probability p_i as p_i^{ ( l ) } ∝ ∑_{ v_j ∈ N^{ ( l ) } } P_ji^2 , ∀ i ∈ [ n ] , ( 2 ) where N^{ ( l ) } = ∪_{ v_i ∈ V^{ ( l ) } } N ( v_i ) and ∑_j p_j^{ ( l ) } = 1 . In this case , the set S^{ ( l ) } of nodes sampled for the l-th layer is guaranteed to be within the neighborhood of V^{ ( l ) } . The whole procedure can be summarized by a diagonal matrix S^{ ( l ) } ∈ R^{ n × n } and a row selection matrix Q^{ ( l ) } ∈ R^{ s_l × n } , defined as Q^{ ( l ) }_{ k , j } = 1 if j = i_k^{ ( l ) } and 0 otherwise , and S^{ ( l ) }_{ j , j } = ( s_l p^{ ( l ) }_{ i_k^{ ( l ) } } )^{ −1 } if j = i_k^{ ( l ) } and 0 otherwise , where { i_k^{ ( l ) } }_{ k=1 }^{ s_l } are the indices of the rows selected in the l-th layer . The forward propagation with layer-wise sampling can thus be equivalently represented as Z̃^{ ( l+1 ) } = Q^{ ( l+1 ) } P S^{ ( l ) } H^{ ( l ) } W^{ ( l ) } , H^{ ( l ) } = ( Q^{ ( l ) } )^⊤ σ ( Z̃^{ ( l ) } ) , where Z̃^{ ( l+1 ) } is the approximation of the embedding matrix for layer l . 3 EXPERIMENTAL SETUP . In advance of the formal introduction to the issues and the corresponding remedies in Section 4 and Section 5 , we state the basic setups of the main experiments and datasets , as they appear multiple times across the paper . Details about GCN model training are deferred to the corresponding sections . Main experiments . To study the influence of the aforementioned issues , we evaluate the matrix approximation error ( c.f .
Figure 1 ) of different methods and consider it as a new metric to reflect the performance of the sampling strategy in approximating the original mini-batch training in one-step propagation . Since the updates of the parameters during training are not involved in the simple metric above , in Section 6 we further evaluate the prediction accuracy on the testing sets of both intermediate models during training and the final outputs , using the metrics in Table 2 . Benchmarks . Empirical experiments are conducted on 5 datasets ( see details in Table 2 in Appendix B ) : Reddit ( Hamilton et al. , 2017 ) , ogbn-arxiv , ogbn-proteins , ogbn-mag and ogbn-products ( Hu et al. , 2020 ) . Reddit is a traditional large graph dataset used by Chen et al . ( 2018b ) ; Zou et al . ( 2019 ) ; Chen et al . ( 2018a ) ; Cong et al . ( 2020 ) ; Zeng et al . ( 2019 ) . Ogbn-arxiv , ogbn-proteins and ogbn-products are Open Graph Benchmarks ( OGB ) proposed by Hu et al . ( 2020 ) . Compared to traditional datasets , our selected OGB data have larger volume ( up to million-node scale ) with more challenging data splits . The metrics in Table 2 follow the choices of recent works and the recommendation by Hu et al . ( 2020 ) . | This paper studies existing sampling schemes employed in training graph neural network architectures (e.g., FastGCN [Chen et al.] and LADIES [Zou et al.]) that are improvements to graph convolutional networks (GCN) [Kipf & Welling 2017] from a matrix approximation perspective (looking at the Frobenius norm). The paper focuses on layer-wise sampling (cf. node and subgraph sampling) and observes that there are two drawbacks in common-practice layer-wise sampling. The first drawback is that the probability distributions currently used for sampling are sub-optimal. The reason the paper gives is that a core assumption made by certain GCN schemes such as FastGCN and LADIES does not hold in datasets such as Reddit and OGB.
The second drawback is that the implementations of these schemes slightly deviate from their respective theoretical results: the implementations use sampling without replacement and thus introduce bias. To address this, the paper presents new sampling probabilities (a uniform proposal distribution) and an algorithm to reduce the bias. With these adjustments, training of the GCN converges faster and potentially leads to higher prediction accuracy (for node prediction). Theoretical analysis is presented and experiments are conducted on common benchmarks (Reddit, ogbn-{arxiv,proteins,products,mag}). | SP:f4db6a77d6afe5ccbefcdd0f7550dc7669843f58 |
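The FastGCN/LADIES-style column sampling described in this paper can be sketched as follows ; this is a toy numpy illustration , using small random dense matrices as stand-ins for P and H , of why importance sampling *with* replacement yields an unbiased estimate of P H ( the paper's debiasing concern arises precisely because implementations sample without replacement instead ) :

```python
import numpy as np

def fastgcn_probs(P):
    """FastGCN-style importance sampling probabilities p_i ∝ sum_j P_ji^2."""
    col_norms = (P ** 2).sum(axis=0)
    return col_norms / col_norms.sum()

def sampled_propagation(P, H, s, p, rng):
    """Estimate P @ H from s columns sampled WITH replacement:
       (1/s) * sum over draws of outer(P[:, i], H[i, :]) / p_i,
    which is unbiased since E[P[:,I] H[I,:] / p_I] = P @ H."""
    idx = rng.choice(P.shape[1], size=s, replace=True, p=p)
    est = np.zeros((P.shape[0], H.shape[1]))
    for i in idx:
        est += np.outer(P[:, i], H[i]) / p[i]
    return est / s

rng = np.random.default_rng(0)
n, d, s = 6, 3, 3
P = rng.random((n, n))
P = (P + P.T) / 2          # a symmetric stand-in for the renormalized Laplacian
H = rng.normal(size=(n, d))
p = fastgcn_probs(P)
# Averaging many independent estimates recovers P @ H up to Monte Carlo noise.
avg = np.mean([sampled_propagation(P, H, s, p, rng) for _ in range(20000)],
              axis=0)
print(np.abs(avg - P @ H).max())  # small
```

Replacing `replace=True` with `replace=False` while keeping the same 1/(s p_i) weights is exactly the deviation the paper flags: the resulting estimator is no longer unbiased in general.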
Revisiting Layer-wise Sampling in Fast Training for Graph Convolutional Networks | To accelerate the training of graph convolutional networks ( GCNs ) , many sampling-based methods have been developed for approximating the embedding aggregation . Among them , a layer-wise approach recursively performs importance sampling to select neighbors jointly for existing nodes in each layer . This paper revisits the approach from a matrix approximation perspective . We identify two issues in the existing layer-wise sampling methods : sub-optimal sampling probabilities and the approximation bias induced by sampling without replacement . We thus propose remedies to address these issues . The improvements are demonstrated by extensive analyses and experiments on common benchmarks . 1 INTRODUCTION . Graph Convolutional Networks ( Kipf & Welling , 2017 ) are popular methods for learning the representation of nodes . However , it is computationally challenging to train a GCN over large-scale graphs due to the inter-dependence of nodes in a graph . In the mini-batch training of an L-layer GCN , the computation of embeddings involves not only the batch nodes but also the batch nodes ’ L-hop neighbors , which is known as the phenomenon of “ neighbor explosion ” ( Zeng et al. , 2019 ) or “ neighbor expansion ” ( Chen et al. , 2018a ; Huang et al. , 2018 ) . To alleviate such a computation issue for large graphs , sampling-based methods are proposed to accelerate the training and reduce the memory cost . These approaches can be categorized as node-wise sampling approaches ( Hamilton et al. , 2017 ; Chen et al. , 2018a ) , subgraph sampling approaches ( Zeng et al. , 2019 ; Chiang et al. , 2019 ; Cong et al. , 2020 ) , and layer-wise sampling approaches ( Chen et al. , 2018b ; Huang et al. , 2018 ; Zou et al. , 2019 ) . We focus on layer-wise sampling in this work , which enjoys efficiency and variance reduction by sampling columns of the renormalized Laplacian matrix in each layer .
This paper is a study of the existing sampling schemes in layer-wise sampling methods . We identify two potential drawbacks in the common practice for layer-wise sampling ( Chen et al. , 2018b ; Zou et al. , 2019 ) . First , the sampling probabilities currently used are sub-optimal , since a core assumption in FastGCN and LADIES does not hold in many common graph benchmarks , such as Reddit ( Hamilton et al. , 2017 ) and OGB ( Hu et al. , 2020 ) . Second , the previous implementations of the layer-wise sampling methods slightly deviate from their theoretical results and introduce bias in the estimation due to the usage of sampling without replacement . Realizing these two issues , we accordingly propose remedies : new sampling probabilities and a debiasing algorithm . The improvements of the proposed methods are demonstrated by extensive experiments evaluating the matrix approximation error and the prediction accuracy on large-scale benchmarks , along with some theoretical analyses . To the best of our knowledge , our result is the first to recognize and resolve the issues with the default assumption and the practical implementation of layer-wise sampling methods for GCN . Once these sub-optimal practices are addressed , we observe that the GCN models consistently converge faster in training and usually enjoy a higher prediction accuracy . We believe the proposed methods can more generally improve the training of GCNs as well ; e.g. , the same strategy can allow node-wise sampling methods to adopt sampling without replacement and further improve the approximation accuracy . Moreover , our discussion on the bias induced by sampling without replacement is not limited to GCN , and the debiasing algorithm we develop can contribute to other sampling-based machine learning models beyond layer-wise sampling . 1.1 BACKGROUND AND RELATED WORK .
GCN Graph Convolutional Networks ( GCNs , Kipf & Welling ( 2017 ) ) effectively incorporate the technique of convolution filters into the graph domain ( Wu et al. , 2020 ; Bronstein et al. , 2017 ) . Viewed as an approximation of the spectral graph convolutions ( Bruna et al. , 2014 ; Defferrard et al. , 2016 ) , GCN has achieved great success in learning tasks such as node classification and link prediction , with applications ranging from recommender systems ( Ying et al. , 2018 ) and traffic prediction ( Cui et al. , 2019 ; Rahimi et al. , 2018 ) to knowledge graphs ( Schlichtkrull et al. , 2018 ) . Sampling Based GCN Training To name a few sampling schemes , GraphSAGE ( Hamilton et al. , 2017 ) first introduces the “ node-wise ” neighbor sampling scheme , where a fixed number of neighbors are uniformly and independently sampled for each node in every layer . To reduce the variance in node-wise sampling training , VR-GCN ( Chen et al. , 2018a ) applies a control variate approach with historical activations . Instead of sampling for each node , “ layer-wise ” sampling is a more efficient approach : a joint sampling scheme for all the existing nodes in each layer , so that these nodes can share the sampled neighboring nodes . FastGCN ( Chen et al. , 2018b ) first introduces this scheme with importance sampling . AS-GCN ( Huang et al. , 2018 ) proposes an alternative sampling probability for layer-wise sampling by approximating the hidden layer in the sampling procedure . Then Zou et al . ( 2019 ) propose a layer-dependent importance sampling scheme ( LADIES ) to further reduce the variance in training . This alleviates the issue of empty rows in the sampled adjacency matrix for FastGCN . For the “ subgraph ” approach , ClusterGCN ( Chiang et al. , 2019 ) samples a dense subgraph associated with the batch nodes by a graph clustering algorithm ; GraphSAINT ( Zeng et al. , 2019 ) introduces normalization and variance reduction in subgraph sampling .
2 NOTATIONS AND PRELIMINARIES . 2.1 GRAPH CONVOLUTIONAL NETWORKS . The GCN architecture for semi-supervised node classification is introduced by Kipf & Welling ( 2017 ) . Suppose we have an undirected graph G = ( V , E ) , where V is the set of n nodes and E is the set of E edges . Denote node i in V as v_i , where i ∈ [ n ] is the index of nodes in the graph and [ n ] denotes the set { 1 , 2 , ... , n } . Each node v_i ∈ V is associated with a feature vector x_i ∈ R^p and a label vector y_i ∈ R^q . Though we can observe the feature of every node in V and every edge in E , i.e . the n × n adjacency matrix A , we are only able to observe the labels of a subset of nodes Vtrain ⊂ V . Thus , we need to predict the labels for the remaining nodes in V \ Vtrain , and it becomes a semi-supervised learning task . A graph convolution layer is defined as : Z^{ ( l+1 ) } = P H^{ ( l ) } W^{ ( l ) } , H^{ ( l ) } = σ ( Z^{ ( l ) } ) , ( 1 ) where σ is an activation function and P is obtained from applying normalization to the graph adjacency matrix A ; H^{ ( l ) } is the embedding matrix of the graph nodes in the l-th layer , and W^{ ( l ) } is the parameter matrix of the same layer . In particular , H^{ ( 0 ) } is the n × p feature matrix . For mini-batch training , the training loss for an L-layer GCN is defined as ( 1 / |Vbatch| ) ∑_{ v_i ∈ Vbatch } ℓ ( y_i , z_i^{ ( L ) } ) , where ℓ is the loss function and the batch Vbatch is a subset of Vtrain at each iteration ; z_i^{ ( L ) } is the i-th row of Z^{ ( L ) } , and | · | denotes the cardinality of a set . In this paper , we set P = D̃^{ −1/2 } ( A + I ) D̃^{ −1/2 } , where D̃ is a diagonal matrix with D̃_ii = 1 + ∑_j A_ij . The matrix P is constructed as a renormalized Laplacian matrix to help alleviate the overfitting and exploding/vanishing gradient issues ( Kipf & Welling , 2017 ) , and is used by Kipf & Welling ( 2017 ) ; Chen et al . ( 2018a ) ; Cong et al . ( 2020 ) . 2.2 LAYER-WISE SAMPLING .
To address the “ neighbor explosion ” issue for graph neural networks , sampling methods are integrated into the stochastic training . Motivated by the idea of approximating the matrix P H^{ ( l ) } in ( 1 ) , FastGCN ( Chen et al. , 2018b ) applies an importance-sampling-based strategy . Instead of individually sampling neighbors for each node in the l-th layer , they sample a set of s neighbors S^{ ( l ) } from V with importance sampling probabilities p_i ∝ ∑_{ j=1 }^n P_ji^2 and ∑_i p_i = 1 . For the ( l − 1 ) -th layer , they naturally set V^{ ( l−1 ) } = S^{ ( l ) } . LADIES ( Zou et al. , 2019 ) improves the importance sampling probability p_i as p_i^{ ( l ) } ∝ ∑_{ v_j ∈ N^{ ( l ) } } P_ji^2 , ∀ i ∈ [ n ] , ( 2 ) where N^{ ( l ) } = ∪_{ v_i ∈ V^{ ( l ) } } N ( v_i ) and ∑_j p_j^{ ( l ) } = 1 . In this case , the set S^{ ( l ) } of nodes sampled for the l-th layer is guaranteed to be within the neighborhood of V^{ ( l ) } . The whole procedure can be summarized by a diagonal matrix S^{ ( l ) } ∈ R^{ n × n } and a row selection matrix Q^{ ( l ) } ∈ R^{ s_l × n } , defined as Q^{ ( l ) }_{ k , j } = 1 if j = i_k^{ ( l ) } and 0 otherwise , and S^{ ( l ) }_{ j , j } = ( s_l p^{ ( l ) }_{ i_k^{ ( l ) } } )^{ −1 } if j = i_k^{ ( l ) } and 0 otherwise , where { i_k^{ ( l ) } }_{ k=1 }^{ s_l } are the indices of the rows selected in the l-th layer . The forward propagation with layer-wise sampling can thus be equivalently represented as Z̃^{ ( l+1 ) } = Q^{ ( l+1 ) } P S^{ ( l ) } H^{ ( l ) } W^{ ( l ) } , H^{ ( l ) } = ( Q^{ ( l ) } )^⊤ σ ( Z̃^{ ( l ) } ) , where Z̃^{ ( l+1 ) } is the approximation of the embedding matrix for layer l . 3 EXPERIMENTAL SETUP . In advance of the formal introduction to the issues and the corresponding remedies in Section 4 and Section 5 , we state the basic setups of the main experiments and datasets , as they appear multiple times across the paper . Details about GCN model training are deferred to the corresponding sections . Main experiments . To study the influence of the aforementioned issues , we evaluate the matrix approximation error ( c.f .
Figure 1 ) of different methods and consider it as a new metric to reflect the performance of the sampling strategy in approximating the original mini-batch training in one-step propagation . Since the updates of the parameters during training are not involved in the simple metric above , in Section 6 we further evaluate the prediction accuracy on the testing sets of both intermediate models during training and the final outputs , using the metrics in Table 2 . Benchmarks . Empirical experiments are conducted on 5 datasets ( see details in Table 2 in Appendix B ) : Reddit ( Hamilton et al. , 2017 ) , ogbn-arxiv , ogbn-proteins , ogbn-mag and ogbn-products ( Hu et al. , 2020 ) . Reddit is a traditional large graph dataset used by Chen et al . ( 2018b ) ; Zou et al . ( 2019 ) ; Chen et al . ( 2018a ) ; Cong et al . ( 2020 ) ; Zeng et al . ( 2019 ) . Ogbn-arxiv , ogbn-proteins and ogbn-products are Open Graph Benchmarks ( OGB ) proposed by Hu et al . ( 2020 ) . Compared to traditional datasets , our selected OGB data have larger volume ( up to million-node scale ) with more challenging data splits . The metrics in Table 2 follow the choices of recent works and the recommendation by Hu et al . ( 2020 ) . | This paper revisits layer-wise sampling methods for graph neural networks by addressing issues such as sub-optimal sampling probabilities and the approximation bias due to the usage of sampling without replacement. It also presents a new metric to evaluate the performance of the sampling strategy. Through comprehensive experiments, the authors validate the effectiveness of their proposed methods. | SP:f4db6a77d6afe5ccbefcdd0f7550dc7669843f58 |
On the interventional consistency of autoencoders | 1 INTRODUCTION . Representation learning is the problem of finding a low-dimensional description of the data . The characteristics of a ’ good representation ’ have long since been a matter of debate , often depending on the context in which the representation has to be employed . Multiple works ( cf . Bengio et al . ( 2013 ) , Schölkopf et al . ( 2021 ) ) identify modularity , robustness to distribution shifts and interpretability as crucial features of a ’ good representation ’ , motivating the quest toward disentanglement ( Bachman et al . ( 2019 ) , Locatello et al . ( 2020 ) , Ridgeway & Mozer ( 2018 ) ) and causal representations ( Suter et al . ( 2019 ) , Träuble et al . ( 2021 ) ) . The insight shared by both these fields is that complex world phenomena arise from the ’ rich interaction of many sources ’ , and thus enable compact descriptions in terms of the basic components participating in their generative process . More formally , the hypothesis is that there exists a set of semantically meaningful variables S and an arbitrarily non-linear function G such that X = G ( S_1 , ... , S_n ) , where X are the observations . Autoencoders have played a crucial role in the field of representation learning since its inception . Their success is largely due to their simplicity and surprising effectiveness . Mathematically , an autoencoder can be represented as the tuple ( E : R^d → R^n , D : R^n → R^d ) . Capacity constraints ( e.g . n ≪ d ) force the latent space to prioritize certain information in the input , thus yielding a useful representation . Additional requirements may be imposed either structurally or through regularisation . Intuitively , consistency means that an input generated by the decoder can be mapped back to the point in the latent space that produced it . The central idea of Cemgil et al .
( 2020 ) is making the encoder and the decoder consistent both on the training data P ( X ) and on the auxiliary observations generated by the decoder P ( X′ ) . Our contribution . Unsupervised causal representation learning is largely unsolved today . In a recent paper , Locatello et al . ( 2019 ) prove that ’ the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data ’ . Motivated by this result , in this work we investigate the structure and in particular the self-consistency of the learned autoencoder , rather than focusing on the representation ’ s relation to the ” ground-truth ” factors . In this context , we formulate an inductive bias for autoencoders , which we call interventional consistency , and we link it to the solution space of the disentanglement problem . Additionally , we introduce a new architectural module for the latent space of autoencoders which unifies the disentanglement objective and the more structured causal perspective . 2 METHOD . Consider the ’ artificial ’ generative process implemented by the decoder as Z ∼ P ( Z ) , X ∼ P ( X | Z ; θ ) . We express the latent space dynamics in terms of a Structural Causal Model ( SCM ) ( Pearl , 2009 ) on Z . An SCM is defined by a set of so-called noise variables N = ( N_1 , ... , N_n ) , with a distribution P ( N ) that factorises , and a set of structural assignments f_1 , ... , f_n of the form : Z_i := f_i ( PA_i , N_i ) for i = 1 , ... , n , ( 1 ) where PA_i refers to the set of direct causes of the variable Z_i . The set of directed interactions between causal variables identifies a graph , which is usually assumed to be acyclic ( i.e . a DAG ) . This formulation naturally entails a distribution P ( Z ) and a corresponding causal factorisation : P ( Z ) = ∏_i P ( Z_i | PA_i ) . ( 2 ) For a thorough introduction to SCMs we refer the reader to Peters et al . ( 2017 ) .
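A minimal numerical sketch of an SCM with structural assignments of the form ( 1 ) — a hypothetical two-variable chain Z1 → Z2 with Gaussian noise , chosen purely for illustration — shows how an intervention replaces one mechanism while leaving the others untouched :

```python
import numpy as np

def sample_scm(n_samples, rng, intervene_z1=None):
    """Two-variable SCM: Z1 := N1,  Z2 := 0.8 * Z1 + N2.
    Passing intervene_z1 replaces the structural assignment for Z1
    (a hard intervention), leaving the mechanism P(Z2 | Z1) untouched."""
    N1 = rng.normal(size=n_samples)
    N2 = rng.normal(size=n_samples)
    Z1 = np.full(n_samples, intervene_z1) if intervene_z1 is not None else N1
    Z2 = 0.8 * Z1 + N2
    return Z1, Z2

rng = np.random.default_rng(0)
# Observational samples: Var(Z2) = 0.8^2 * Var(Z1) + Var(N2) = 1.64.
Z1, Z2 = sample_scm(100_000, rng)
# Under the intervention do(Z1 = 2): E[Z2] = 1.6, Var(Z2) = Var(N2) = 1,
# because only the Z1-mechanism was replaced.
Z1i, Z2i = sample_scm(100_000, rng, intervene_z1=2.0)
```

The entailed distribution factorises as P ( Z ) = P ( Z1 ) P ( Z2 | Z1 ) , and the intervention changes only the first factor , in line with equation ( 2 ) .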
We can now apply results from causality to the artificial generative process. In particular, we turn to the ICM principle (Peters et al., 2017), which states that the modules participating in the generative process are independent and can thus be separately manipulated. Let us refer to the composition R = E ∘ D as the response map, as done in Leeb et al. (2021), mapping the latent space Z ⊆ R^n back to itself, i.e. R : Z → Z. Let R(Z′|Z) be the posterior distribution entailed by the map, with Z, Z′ being distinct random variables. Furthermore, consider the response aggregate posterior distribution and any factorisation thereof: P(Z′) = ∫ P(Z) R(Z′|Z) dZ = ∏_i P(Z′_i|Z′_{j≠i}). The implications of the ICM principle are trivial if applied to the prior P(Z): modifying P(Z_i|PA_i) will not affect P(Z_j|PA_j) for j ≠ i. However, the consequences are far from trivial if we require the response map to preserve the validity of the principle. Let P̃(Z) denote the intervention distribution: P̃(Z) = P̃(Z_m|P̃A_m) ∏_{i≠m} P(Z_i|PA_i). (3) Then we say that the ICM principle is preserved by the response map if: P̃(Z′) = ∫ P̃(Z) R(Z′|Z) dZ = P̃(Z′_k|Z′_{j≠k}) ∏_{i≠k} P(Z′_i|Z′_{j≠i}), (4) where the indices m and k are not necessarily equivalent, although we will from here on assume their equivalence.¹ Intuitively, R preserves the ICM principle if any localised change in P(Z) produces a corresponding localised change in P(Z′). The idea is illustrated in Figure 1. Crucially, this property mirrors the definition of disentanglement in the context of the artificial generative process: each factor in the representation (P(Z′_k|Z′_{j≠k})) is sensitive to changes in a single generative factor (P(Z_m|PA_m)), while being relatively invariant to changes elsewhere in the generative process.
This analogy suggests that the condition expressed in Equation 4 has to be satisfied by a representation that disentangles the true generative process. We call this condition interventional consistency. (¹Allowing m ≠ k essentially takes into account the possibility of a permutation of the original space dimensions in the representation, which is the norm in disentanglement metrics. By assuming m = k we remove one degree of freedom from the problem.) Interventional consistency can be formulated in terms of invariance and equivariance of the response map. If the autoencoder satisfies interventional consistency, then P(Z′_i|Z′_{j≠i}) is invariant to the action of the group of interventions I_m localised on P(Z_m|Z_{j≠m}) with i ≠ m. Moreover, assuming we are able to perform an intervention I′ on the response variables equivalent to the intervention I on the prior, the response map is equivariant to the intervention, meaning that the order in which the response and the intervention are applied is not important. From this symmetry viewpoint, interventional consistency belongs to causal representations: 'the right causal order is the one invariant to the right kind of interventions' (Herb Simon, cf. Hoover (2008)). Importantly, a representation that satisfies the interventional consistency condition does not necessarily solve the identifiability problem (Gresele et al., 2021), as any bijective transformation of the true causal factors would equally satisfy the condition for the correspondingly transformed interventions. Instead, it is manipulable by construction. Without this feature, there is no guarantee that the decoder can extrapolate to latent samples not seen during training. As an example, consider the common diagnostic tool of latent traversals: it consists of atomic interventions applied to the different dimensions of the representation.
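The response map R = E ∘ D central to these definitions can be sketched concretely. The linear encoder/decoder pair below is an illustrative stand-in for trained networks, constructed so that the response is exactly self-consistent:

```python
import numpy as np

# Sketch of the response map R = E . D : Z -> Z, i.e. decode a latent sample
# into observation space and re-encode it. The linear decoder and its
# pseudoinverse encoder are illustrative choices; a trained autoencoder's
# response is generally not the identity.
rng = np.random.default_rng(0)
d, n = 16, 3                                  # observation and latent dims
W_d = rng.normal(size=(d, n)) / np.sqrt(n)    # decoder D: R^n -> R^d
W_e = np.linalg.pinv(W_d)                     # encoder E: left inverse of D

def R(z):
    x_prime = W_d @ z     # D(z): generate an auxiliary observation
    return W_e @ x_prime  # E(D(z)): map it back to the latent space

z = rng.normal(size=n)
z_prime = R(z)
# for this linear pair the response is (numerically) the identity
assert np.allclose(z_prime, z)
```

For a generic autoencoder, R(z) differs from z, and the errors registered through R under traversals or interventions are exactly what the scores in the next section quantify.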
Often a good representation is associated with semantically meaningful traversals in the image space. However, the observed result is only half of the story: if the encoder fails to interpret the generated samples, the representation cannot be called disentangled. To illustrate this example, in Figure 2 we show the error recorded on the response to traversals on a standard autoencoder. 2.1 CONSISTENCY TRAINING . It is possible to use the invariance or equivariance formulation to incorporate interventional consistency into the solution space of an autoencoding problem. Let L_INV(R) and L_EQV(R) be real-valued functions measuring the violation of the invariance and equivariance property by the response map R, respectively. Then the space of interventionally consistent autoencoders is defined as: { (E, D) : L_INV(E ∘ D) = 0 or L_EQV(E ∘ D) = 0 }. This hard constraint is replaced with a differentiable regularization by augmenting the optimization objective with L_INV or L_EQV. Next, we propose evaluation metrics that make the consistency penalties easier to interpret. Algorithms 1 and 2 in the Appendix demonstrate how to compute L_INV and L_EQV. Importantly, L_INV is independent of the network architecture of choice, therefore it can be applied to any existing autoencoder method with latent space N. Meanwhile, L_EQV only requires the explicit distinction between noises N and causes Z (of the artificial generative process) outlined in the previous section, which, in our experiments, is implemented by the explicit causal latent block (see 4). Note that some additional mild approximations are necessary to make the penalty terms tractable. Firstly, we assume that the response aggregate posterior P(Z′) factorises like P(Z), i.e. that the statistical dependencies in the prior are preserved by the response map.
When using an autoencoder with explicit structural mappings, this assumption is equivalent to assuming that the response aggregate noise posterior P(N′) factorises. Secondly, we employ Monte Carlo sampling to estimate the consistency by sampling hard interventions on the noise variable distribution of the form P̃(N_m) ← δ(v), with v drawn from the aggregate marginal posterior Q(N_m) = E_X Q(N_m|X). Finally, in order to assess equivariance we treat I′ = I, which relies on the equivalence between Z and Z′. This assumption is satisfied as long as the reconstructions are sufficiently similar to the corresponding training samples (i.e. the autoencoder fidelity is high). Below, N^k denotes a sample from the prior P(N) and Ñ^k its intervened version. Moreover, we let N̄′^k and Ñ̄′^k be the expectation of the response to N^k and Ñ^k, respectively. Let N̂̄′^k be the result of the intervention I applied to N̄′^k, and we denote the corresponding causal variables by Ẑ̄′^k and Z̃̄′^k. Invariance score. Let U_{I_m} be a measure of the magnitude of the intervention I_m and σ_i the standard deviation of each dimension in the response space. Then we define the invariance error in response to the intervention I_m recorded on dimension i as e_{I_m,i} = (1 / (U_{I_m} σ_i)) E_N[ ||(N̄′ − Ñ̄′_{I_m})_i||_2 ]. More precisely, we define U_{I_m} := E_N[ ||(N − Ñ_{I_m})_i||_2 ]. Consequently, we score the invariance of the i-th dimension with respect to interventions on the m-th dimension as: INV[m, i] = 1 − E_{I_m ∼ I_m}[ e_{I_m,i} ]. (5) We approximate all the expectations with their Monte Carlo estimates. According to this formulation, a perfectly invariant response map would score 1 on the off-diagonal elements of the matrix. Notice that invariance is not concerned with the diagonal entries of the matrix.
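A simplified Monte Carlo sketch of the invariance score in Equation 5 follows. The identity response is an illustrative stand-in for a trained autoencoder's mean response, chosen so the expected result is easy to verify; hard interventions set dimension m to a constant v, as in the approximation described above:

```python
import numpy as np

# Monte Carlo sketch of INV[m, i] (Equation 5) for a toy response map.
# Real scores would replace `response` with the trained model's mean
# response; all sample sizes and the constant v are illustrative.
rng = np.random.default_rng(0)
n = 3

def response(N):                       # stand-in for the mean response N -> N'
    return N.copy()

def invariance_matrix(num_samples=2000, v=2.0):
    N = rng.normal(size=(num_samples, n))
    sigma = N.std(axis=0)              # per-dimension response-space std
    INV = np.zeros((n, n))
    for m in range(n):
        N_tilde = N.copy()
        N_tilde[:, m] = v              # hard intervention P~(N_m) <- delta(v)
        U = np.mean(np.abs(N[:, m] - N_tilde[:, m]))          # magnitude U_Im
        err = np.mean(np.abs(response(N) - response(N_tilde)), axis=0)
        INV[m] = 1.0 - err / (U * sigma)                      # 1 - e_{Im, i}
    return INV

INV = invariance_matrix()
off_diag = INV[~np.eye(n, dtype=bool)]
assert np.allclose(off_diag, 1.0)      # identity response: perfectly invariant
```

The same violation measure, before being turned into a score, is the kind of quantity that can serve as the differentiable penalty L_INV of Section 2.1.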
However, we can interpret a high invariance score on the i-th diagonal element as a sign of unresponsiveness of the dimension, as it implies that interventions on i are not registered in the response. We henceforth refer to these unresponsive latent dimensions as 'collapsed', in reference to the well-known effect of posterior collapse (rather common in βVAE models). Equivariance score. Similarly, we define the equivariance error and equivariance score as: r_{I_m,i} = (1 / U) E_N[ ||(Ẑ̄′_{I_m} − Z̃̄′_{I_m})_i||_2 ], EQV[m, i] = 1 − E_{I_m ∼ I_m}[ r_{I_m,i} ]. In the case of equivariance, perfect consistency corresponds to all the entries being 1. Notice, however, that high equivariance is also obtained by the trivial solution (i.e. a constant encoding map). This is due to the fact that the intervention values are sampled from the aggregate posterior. Self-consistency score. Finally, the self-consistency score evaluates the inner consistency of the response under no intervention. More specifically, we measure the amount of error introduced by the response on average over the prior. In formula: SC_N[i] = 1 − (1/σ_i) E_N[ ||(N̄′ − N)_i||_2 ]. Importantly, self-consistency is the only score relating the prior and the response space and is closely related to the regularization used by AVAEs (Cemgil et al., 2020). 2.2 EXPLICIT CAUSAL LATENT BLOCK Inspired by the distinction between noise and structure in the SCM, we insert a similar bias into the latent space of autoencoders by directly joining the representation layer with a structural mixing block, as shown in Figure 4. We henceforth refer to the bottleneck layer as the noise terms N, and to the units produced by the subsequent mixing layer as the causes Z. Each Z_i is obtained from its corresponding noise term N_i and its predecessors Z_{<i} through a learned nonlinear function f_i, much like in an SCM (implemented as an MLP).
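The self-consistency score defined above admits a direct Monte Carlo sketch. The identity and constant-shifted responses below are illustrative stand-ins for a trained autoencoder's mean response, not the paper's models:

```python
import numpy as np

# Sketch of SC_N[i] = 1 - (1/sigma_i) E_N[ ||(N_bar' - N)_i|| ] under no
# intervention: the average per-dimension error introduced by the response,
# normalised by the prior standard deviation.
rng = np.random.default_rng(0)
n = 3
N = rng.normal(size=(5000, n))
sigma = N.std(axis=0)

def self_consistency(response):
    err = np.mean(np.abs(response(N) - N), axis=0)   # mean response error
    return 1.0 - err / sigma

perfect = self_consistency(lambda x: x)              # identity response
shifted = self_consistency(lambda x: x + 0.5)        # biased response

assert np.allclose(perfect, 1.0)                     # no error introduced
assert np.all(shifted < perfect)                     # bias lowers the score
```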
To increase the sparsity of the structure we apply a learnable mask M_i ∈ R^{i−1} to the parents, such that Z_i = f_i(M_i ⊙ Z_{<i}, N_i). The masks are parametrised through the Gumbel trick (Jang et al., 2016) to induce weight sharpening, and thus act as a gating mechanism on the information provided by the predecessors. We name this extension of the representation layer the 'explicit causal latent block' (XBlock), and we call XNet (or prefix 'X') any network using this module. Most disentanglement methods attempt to learn a representation consisting of causal variables or disentangled 'factors' which are statistically independent (Higgins et al., 2017a). However, except in the trivial case, it is not the Z_i that should be treated as statistically independent, but the N_i. Consequently, the explicit separation between noise and structure reconciles with the conventional disentanglement perspective while also providing the model with additional power to learn identifiable relationships between the latent variables. The causal links revealed by M_i can then be used to help identify how interventions will affect the resulting samples for downstream tasks such as controllable generation. Moreover, differentiating between noise and structure allows expanding the dimensionality of the causal variables Z without any need to reparametrise the noise distribution. We partition the causes into units, each one consisting of multiple dimensions and representing a single statistical variable. | The paper proposes to use interventional consistency to regularise representation learning in VAEs. The idea is well-motivated by the ICM principle and theoretically justified. The paper suggests to use interventional consistency for both training and evaluation of the representation learnt by VAEs. Results show that the proposed idea can give more modular and interpretable representation. | SP:5ee804c8b2609b7a2adea1b39361b0b55b49b7bd |
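The forward pass of the structural mixing block described in Section 2.2 can be sketched as follows. Fixed tanh maps stand in for the learned functions f_i, and hard 0/1 gates stand in for the Gumbel-parametrised masks M_i; only the data flow Z_i = f_i(M_i ⊙ Z_{<i}, N_i) is illustrated:

```python
import numpy as np

# Illustrative XBlock forward pass: each cause Z_i is computed from its gated
# predecessors M_i * Z_{<i} and its own noise term N_i, in topological order.
def xblock_forward(N, masks):
    n = N.shape[0]
    Z = np.zeros(n)
    for i in range(n):
        parents = masks[i] * Z[:i]            # gated predecessors M_i * Z_{<i}
        Z[i] = np.tanh(parents.sum() + N[i])  # stand-in for the learned f_i
    return Z

N = np.array([0.5, -0.2, 1.0])
# hard gates encoding the chain Z1 -> Z2 -> Z3 (Z3 ignores Z1)
masks = [np.zeros(0), np.array([1.0]), np.array([0.0, 1.0])]
Z = xblock_forward(N, masks)
assert Z.shape == (3,)
```

With all noises zero, every cause is zero as well, which reflects that the structure only propagates information injected through the noise terms.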
On the interventional consistency of autoencoders | This work proposes the notion of "interventional consistency" as a beneficial property learned representations should have and introduces a regularization term to enforce it in autoencoders. Moreover, the authors introduce an "explicit latent causal block" that allows learning a structural causal model (SCM) over the latent factors. The experimental section argues that both the regularization and the "causal layer" improve interventional consistency.
| SP:5ee804c8b2609b7a2adea1b39361b0b55b49b7bd |
Experience Replay More When It's a Key Transition in Deep Reinforcement Learning | 1 Introduction . Deep reinforcement learning (RL) has shown promise for decision-making in various computer games, such as Atari (Mnih et al., 2013; 2015), Go (Schrittwieser et al., 2020) and StarCraft (Vinyals et al., 2019). However, most successes have been exclusively in simulation, due to the poor sample efficiency of typical deep RL algorithms and other challenges. Reinforcement learning can be divided into model-based RL and model-free RL in light of data efficiency; model-free RL is usually subdivided into off-policy RL and on-policy RL. Although model-based RL requires less sampled data, it needs to build a world model and predict the next state, which relies on a lot of prior work, as demonstrated in Hafner et al. (2019). Although on-policy RL algorithms do without a world model and have good stability, their performance improves slowly because each policy update step is limited, so a large number of sampled trajectories are required in the training process (Schulman et al., 2015; 2017). Off-policy RL sits between model-based RL and on-policy RL in terms of sampling efficiency, and research in this field is enduring (Watkins & Dayan, 1992; Hessel et al., 2018; Barth-Maron et al., 2018) on account of its relatively high data efficiency and freedom from a world model. Experience Replay (Lin, 1992) is an important part of improving the data efficiency of off-policy RL: it stores experience in a replay buffer and breaks temporal correlations by mixing data, so that experience can be used multiple times to update the networks. However, most current work samples transitions uniformly from the buffer, as in Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015), Soft Actor-Critic (SAC) (Haarnoja et al.
, 2018), Twin Delayed Deep Deterministic policy gradient (TD3) (Fujimoto et al., 2018) and many other algorithms (Van Hasselt et al., 2016; Mnih et al., 2016; Andrychowicz et al., 2017; Dabney et al., 2018; Liu et al., 2020). These approaches replay experience transitions at the same frequency, regardless of their significance. Schaul et al. (2016) develop a framework for prioritizing experience so as to replay important transitions more frequently, letting the agent learn more efficiently. Prioritized experience replay (PER) samples transitions according to the magnitude of their temporal-difference (TD) error. However, prioritization introduces bias, which needs to be corrected with importance sampling. Meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience (Rothfuss et al., 2018; Mishra et al., 2018); Rakelly et al. (2019) develop an off-policy meta-RL algorithm that disentangles the inference and control of the task, but its performance still lags behind mainstream off-policy RL algorithms. In this paper, we first analyze how the state distribution changes from the beginning of the agent's interaction with the environment until the policy converges, and find that the states the agent visits differ across stages: once the policy has converged, the agent rarely transfers to those states it visited when the policy was far from converged. If a large number of initial states¹ are used to train the Q-value and policy networks, network weights that have already been learned well will be gradually updated away from their good values.
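The two replay schemes contrasted above can be sketched side by side. The uniform buffer matches what DDPG/SAC/TD3 use; for PER, the sum-tree used in practice is replaced here by plain normalisation, and the hyperparameters alpha and beta are illustrative:

```python
import random
import numpy as np

# Uniform experience replay: store transitions, evict the oldest, and sample
# each stored transition with equal probability.
class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.data = []
        self.capacity = capacity

    def push(self, transition):
        if len(self.data) >= self.capacity:
            self.data.pop(0)                     # evict the oldest transition
        self.data.append(transition)

    def sample_uniform(self, batch_size):
        return random.sample(self.data, batch_size)

# PER-style proportional prioritisation: sample with probability proportional
# to |TD-error|^alpha and correct the induced bias with importance weights.
def per_probabilities(td_errors, alpha=0.6):
    p = (np.abs(td_errors) + 1e-6) ** alpha
    return p / p.sum()

def importance_weights(probs, idx, beta=0.4):
    w = (len(probs) * probs[idx]) ** (-beta)     # bias correction
    return w / w.max()                           # normalise for stability

buf = ReplayBuffer()
for t in range(100):
    buf.push((t, 0, 1.0, t + 1, False))          # (s, a, r, s', done)
batch = buf.sample_uniform(8)

probs = per_probabilities(np.array([0.1, 2.0, 0.05, 1.5]))
idx = np.random.default_rng(0).choice(4, size=2, p=probs)
w = importance_weights(probs, idx)
assert len(batch) == 8 and np.all(w <= 1.0)
```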
Therefore, the sampling proportion of the most recently generated transitions should be appropriately increased, so that the agent pays more attention to learning from recent transitions. In this way, the agent's state distribution gradually transfers from a poor state to a better one, similar to stably updating from a poor policy to a better policy in Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) or Proximal Policy Optimization (PPO) (Schulman et al., 2017). Inspired by the Add Noise to Noise (AN2N) algorithm (Guo & Gao, 2021), we divide states into two categories: states explored with added noise are called key states², and the rest are called non-key states. For the sake of improving performance in key states in time, we increase the probability of sampling new key states, making them more likely to participate in updating the policy. We call this process ERM (Experience Replay More) and combine it with commonly used off-policy RL algorithms for continuous control tasks, such as SAC, obtaining faster convergence and state-of-the-art performance. 2 Preliminaries . We consider a reinforcement learning setup consisting of an agent learning policies to maximize the expected reward when interacting with the environment (Sutton & Barto, 2018). At each timestep t, the agent receives an observation o_t ∈ O and selects an action a_t ∈ A with respect to its policy π : O → A. After taking the action a_t in environment E, the agent receives a reward r_t and the next observation o_{t+1}. Practical problems are usually partially observable Markov decision processes (POMDPs), where only part of the observation information can be obtained. To simplify the problem, we assume the environment is fully observed, so s_t = o_t and S = O.
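The interaction loop just described can be sketched in a few lines. The toy environment and the random policy below are placeholders standing in for environment E and policy π, not part of the paper's setup:

```python
import random

# At each timestep t the agent observes s_t (fully observed, s_t = o_t),
# selects a_t ~ pi(.|s_t), and receives r_t and s_{t+1} from the environment.
def step(state, action):                 # stand-in environment E
    reward = 1.0 if action == state % 2 else 0.0
    return reward, state + 1             # (r_t, s_{t+1})

def policy(state):                       # stand-in policy pi : O -> A
    return random.choice([0, 1])

trajectory = []
s = 0
for t in range(5):
    a = policy(s)
    r, s_next = step(s, a)
    trajectory.append((s, a, r, s_next))  # the transition stored in the buffer
    s = s_next

assert len(trajectory) == 5 and trajectory[-1][3] == 5
```

Each tuple (s_t, a_t, r_t, s_{t+1}) collected this way is exactly what the replay buffer stores for later off-policy updates.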
In reinforcement learning, the action-value function $Q^\pi(s, a)$ is used to approximate the expected sum of rewards after taking action $a$ in state $s$, defined as follows:
$$Q^\pi(s, a) = \mathbb{E}_{s_t \sim p_\pi, a_t \sim \pi}\left[\sum_{t=0}^{+\infty} \gamma^t R(s_t, a_t)\right] \quad (1)$$
where $\gamma \in [0, 1]$ is the discount factor and $\mathbb{E}_{s_t \sim p_\pi, a_t \sim \pi}$ is the expectation over the distribution of trajectories $(s_0, a_0, s_1, a_1, \ldots)$. The mean value of $Q^\pi$ in the same state $s$ is called the value function $V^\pi$, defined as $V^\pi(s) = \mathbb{E}_{a \sim \pi(\cdot|s)}[Q^\pi(s, a)]$. We express the action-value function $Q^\pi$ in the form of the Bellman equation (Bellman & Kalaba, 1965):
$$Q^\pi(s_t, a_t) = \mathbb{E}_{s_{t+1} \sim p_\pi}\left[r(s_t, a_t) + \gamma\, \mathbb{E}_{a_{t+1} \sim \pi}[Q^\pi(s_{t+1}, a_{t+1})]\right] \quad (2)$$
¹An initial state is a state in which the agent does not perform well before the policy has converged; the agent almost never transitions to such states after the policy has converged. ²A key state is a state in which the agent has not performed well in the past. In this paper, familiarity with the DDPG, TD3, and SAC algorithms is assumed; here we mainly introduce DDPG as the basis. DDPG uses two separate fully connected neural networks to approximate the action-value function $Q(s, a|\theta^Q)$ and the policy function $\mu(s|\theta^\mu)$, and introduces an action-value target network $\theta^{Q'}$ and a policy target network $\theta^{\mu'}$ to stabilize policy updates. Gradient descent is used to optimize the network weights by minimizing the loss:
$$L(\theta^Q) = \mathbb{E}_{s_t \sim p^\mu(s_t|\theta^\mu),\, a_t \sim \mu(s_t|\theta^\mu)}\left[(Q(s_t, a_t|\theta^Q) - y_t)^2\right] \quad (3)$$
where
$$y_t = r(s_t, a_t) + \gamma Q'(s_{t+1}, \mu'(s_{t+1}|\theta^{\mu'})|\theta^{Q'}) \quad (4)$$
$$\nabla_{\theta^\mu} J \approx \mathbb{E}_{s \sim p(s_t|\theta^\mu)}\left[\nabla_{\theta^\mu} Q(s, a|\theta^Q)|_{s=s_t, a=\mu(s_t|\theta^\mu)}\right] = \mathbb{E}_{s \sim p(s_t|\theta^\mu)}\left[\nabla_a Q(s, a|\theta^Q)|_{s=s_t, a=\mu(s_t)}\, \nabla_{\theta^\mu} \mu(s_t|\theta^\mu)|_{s=s_t}\right] \quad (5)$$
Equation 4 is derived from equation 2. The weights of the target networks are updated periodically to slowly track the learned networks: $\theta' \leftarrow \tau\theta + (1-\tau)\theta'$ with $\tau \ll 1$, which alleviates fluctuations in the agent's learning process.
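To make equations (1) and (4) concrete, the following toy sketch computes a discounted return and a DDPG-style TD target numerically. The numbers and function names are our own illustration, not part of the paper.

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Monte-Carlo estimate of equation (1): sum_t gamma^t * r_t."""
    return float(sum(g * r for g, r in zip(gamma ** np.arange(len(rewards)), rewards)))

def td_target(r, q_next, gamma=0.99, done=False):
    """DDPG bootstrap target y_t from equation (4):
    r + gamma * Q'(s_{t+1}, mu'(s_{t+1})), with no bootstrap at terminals."""
    return r + (0.0 if done else gamma * q_next)

ret = discounted_return([1.0, 1.0, 1.0], gamma=0.5)  # 1 + 0.5 + 0.25 = 1.75
y = td_target(1.0, q_next=2.0, gamma=0.5)            # 1 + 0.5 * 2 = 2.0
```

The `done` flag is a standard practical detail: at episode boundaries the target reduces to the immediate reward.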
The policy is updated by equation 5, following the chain rule applied to the expected return $Q(s, a|\theta^Q)$ with respect to the parameters $\theta^\mu$. TD3 addresses the problem that DDPG is prone to overestimating the Q-function: an additional Q-function is added, and the smaller of the two predicted Q-values is used when computing the TD-error. TD3 also reduces the update frequency of the policy function. The most significant difference between SAC and DDPG is the introduction of the policy entropy $\mathcal{H}(\pi(\cdot|s_t))$, so the objective function in equation 1 is rewritten as:
$$Q^\pi(s, a) = \mathbb{E}_{s_t \sim p_\pi, a_t \sim \pi}\left[\sum_{t=0}^{+\infty} \gamma^t \left(R(s_t, a_t) - \alpha \log \pi(a_t|s_t)\right)\right] \quad (6)$$
where $\alpha$ is a temperature parameter that adjusts the optimization target: the agent pays more attention to exploration as $\alpha$ increases. 3 Experience Replay More When It's a Key Transition. The addition of experience replay improves the sample efficiency of off-policy RL. Many off-policy algorithms sample transitions uniformly from the experience buffer, which implicitly treats transitions generated at different times as equally important; we discuss this in Section 3.1. Prioritized Experience Replay (PER) improves on uniform sampling by using the TD-error of transitions so as to learn from samples more efficiently; however, PER still does not consider whether transitions generated at different times are equally important to the agent's current policy. 3.1 States are Different at Different Stages. Taking the HalfCheetah-v2 simulation environment as an example, we recorded the state information of an agent at different stages of training with the DDPG algorithm, collecting attitude information at a specific position and the corresponding Q-value from timestep 2e4 to 5.6e5, and then uniformly sampled 100 sets of data from the collected data for plotting, as shown in Fig. 1.
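The TD3 and SAC modifications just described amount to small changes in the bootstrap target. A minimal sketch (function names and numbers are our own illustration): TD3 takes the minimum of two target-critic estimates, and SAC adds the entropy bonus from equation (6).

```python
def td3_target(r, q1_next, q2_next, gamma=0.99):
    """TD3 target: use the smaller of the two target-critic estimates
    to curb overestimation."""
    return r + gamma * min(q1_next, q2_next)

def sac_target(r, q_next, logp_next, alpha=0.2, gamma=0.99):
    """SAC soft target in the spirit of equation (6): the entropy bonus
    -alpha * log pi(a'|s') is added to the bootstrapped value."""
    return r + gamma * (q_next - alpha * logp_next)

y_td3 = td3_target(1.0, q1_next=3.0, q2_next=2.5, gamma=1.0)               # 1 + 2.5 = 3.5
y_sac = sac_target(1.0, q_next=2.0, logp_next=-1.0, alpha=0.5, gamma=1.0)  # 1 + (2 + 0.5) = 3.5
```

Note the sign in the SAC target: a low log-probability (high entropy) increases the target, which is what pushes the policy toward exploration as $\alpha$ grows.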
The legend in the upper left corner represents the Q-value, which is discretized to reasonably reduce the number of legend entries. Subgraph (a) is discretized in units of 20, and subgraph (b) in units of 100. It can be seen that as the agent interacts with the environment, not only is the policy gradually improving³, but the state distribution is also changing accordingly. After the policy converges, the agent occupies a set of states different from before, so more experience similar to the agent's recent states should be collected to train the neural networks and strengthen the current policy. If the experience sampled for training the policy and action-value networks differs greatly from the recently generated states, it will help improve the policy only to a small extent, and may even cause the newly learned network weights to be lost due to catastrophic forgetting in neural networks. Therefore, to further improve sample efficiency, experience replay should be switched from uniform sampling to targeted sampling. 3.2 Replaying More Key Transitions. The AN2N algorithm records dilemma states $s_d$ during the evaluation process; when the agent interacts with the environment, if the current state $s_c$ is considered highly similar to some $s_d$, additional exploration noise $N_a$ is added. In this paper, a state $s_c$ with additional noise is called a key state $s_k$, and a transition containing $s_k$ is called a key transition $tran_k$. The agent's policy is relatively fragile in $s_k$: if the agent has not previously learned how to get out of this dilemma, it is likely to fall into a series of bad states after $s_k$, which greatly reduces the agent's overall performance. ³Better decision-making ability usually yields a higher Q-value. Therefore, whether the agent can learn a good policy in $s_k$ is very important for improving performance.
It can be seen from equation 2 that the reward at a key state $s_k$ affects the Q-value of the preceding state through iteration of the dynamic equation. Nevertheless, if we uniformly sample experience for training the networks, the following three problems arise: • Since most of the experience generated under AN2N consists of non-key states, the probability of sampling key states $s_k$ from the experience buffer is small. • Key states $s_k$ are time-sensitive: a new key state needs to participate in training the networks as quickly as possible. Otherwise, once the agent's state distribution undergoes a relatively large change, its usefulness for improving the policy declines. • Since the agent's state distribution changes gradually, the more recent the sampled experience, the more similar its distribution is to the distribution of the latest experience being generated, and the better it satisfies the independent and identically distributed (i.i.d.) assumption. In response to the first problem, we designed two experience buffers to store key transitions and non-key transitions respectively, and used $\min(Prt_{AN2N}, K_t)$ to adjust the proportion of sampled key transitions, where $Prt_{AN2N}$ is the proportion of key transitions generated by AN2N and $K_t$ grows linearly with the agent's number of simulation steps. This ensures that key transitions are sampled from the experience buffer strictly according to this proportion. For the second and third problems, we linearly increase the probability of newer transitions being sampled. To compare the linear-distribution sampling function with the uniform distribution more vividly, we draw 100 samples in [0, 20] from each; the result is shown in Fig. 2. The pseudocode of the ERM algorithm is shown in Algorithm 1.
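The linear-distribution sampling just described can be sketched as follows, under our interpretation that a transition's sampling weight grows linearly with its position in the buffer, so the newest transitions are drawn most often; all names are hypothetical.

```python
import numpy as np

def linear_sampling_probs(n):
    """Probability of sampling index i grows linearly with i, so the
    newest transitions (largest indices) are sampled most often."""
    w = np.arange(1, n + 1, dtype=float)
    return w / w.sum()

rng = np.random.default_rng(0)
n = 20
probs = linear_sampling_probs(n)
uniform_draws = rng.integers(0, n, size=100)       # flat over [0, 20)
linear_draws = rng.choice(n, size=100, p=probs)    # skewed toward recent indices
```

Plotting histograms of `uniform_draws` and `linear_draws` reproduces the qualitative contrast of Fig. 2: a flat profile versus a ramp favoring recent indices.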
Algorithm 1: ERM
Input: sampling ratio $Prt_{AN2N}$, $K_t$; replay buffers $R_{non\text{-}key}$, $R_{key}$; batch sizes $bs_1$, $bs_2$, $bs_{sum}$; and AN2N parameters
Randomly initialize critic network $Q(s, a|\theta^Q)$ and actor $\mu(s|\theta^\mu)$ with weights $\theta^Q$ and $\theta^\mu$
Initialize target networks $Q'$ and $\mu'$ with weights $\theta^{Q'} \leftarrow \theta^Q$, $\theta^{\mu'} \leftarrow \theta^\mu$
for episode $e \in \{1, \ldots, M\}$ do
  Initialize a random process $N$ for action exploration
  Receive initial observation state $s_1$
  for $t \in \{1, \ldots, T\}$ do
    Execute AN2N action $a_t$; observe reward $r_t$ and new state $s_{t+1}$
    if (AN2N exploring more) then
      Store key transition $(s_t, a_t, r_t, s_{t+1})$ in $R_{key}$
    else
      Store transition $(s_t, a_t, r_t, s_{t+1})$ in $R_{non\text{-}key}$
    end
    $bs_1 = \min(Prt_{AN2N}, K_t)$
    $bs_2 = bs_{sum} - bs_1$
    Sample $bs_1$ and $bs_2$ transitions with the linear distribution from $R_{key}$ and $R_{non\text{-}key}$ for training
    Run the update step of the base algorithm (DDPG, SAC, TD3, etc.)
  end
end | The paper proposes to set up experience replay such that transitions are emphasized based on whether the AN2N algorithm decides to explore more, and how recently the transition was experienced. It is based on observations that the state distribution can dramatically drift over the course of policy improvement, and the intuition that it might be better to perform updates on states that the current policy is more likely to actually visit (recent transitions). They apply the proposed sampling method, *experience replay more* (ERM), to a variety of replay-based deep RL algorithms and evaluate them empirically. | SP:fff4e3a93cc5acdc256c9d12c95d23d9ed458a85 |
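Algorithm 1 can be condensed into a small storage-and-sampling sketch. This is not the authors' code: in particular, we interpret min(PrtAN2N, Kt) as the fraction of the total batch drawn from the key buffer, and all class and argument names are our own.

```python
import random

class ERMBuffer:
    """Minimal sketch of Algorithm 1's storage/sampling logic: key and
    non-key transitions live in separate buffers, the key share of a batch
    is min(prt_an2n, k_t) (read here as a fraction of the total batch), and
    within each buffer newer transitions are drawn with linearly
    increasing probability."""

    def __init__(self):
        self.key, self.non_key = [], []

    def store(self, transition, is_key):
        (self.key if is_key else self.non_key).append(transition)

    @staticmethod
    def _linear_sample(buf, k):
        if k <= 0 or not buf:
            return []
        # Weight grows linearly with recency: the newest item has weight len(buf).
        return random.choices(buf, weights=range(1, len(buf) + 1), k=k)

    def sample(self, bs_sum, prt_an2n, k_t):
        bs1 = int(min(prt_an2n, k_t) * bs_sum)  # key-transition share of the batch
        bs2 = bs_sum - bs1
        return self._linear_sample(self.key, bs1) + self._linear_sample(self.non_key, bs2)

random.seed(0)
buf = ERMBuffer()
for t in range(100):
    buf.store((t, "a", 1.0, t + 1), is_key=(t % 10 == 0))
batch = buf.sample(bs_sum=32, prt_an2n=0.25, k_t=0.5)
```

The sampled `batch` would then feed the standard DDPG/SAC/TD3 update in place of a uniformly drawn minibatch.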
Experience Replay More When It's a Key Transition in Deep Reinforcement Learning | 1 Introduction. Deep reinforcement learning (RL) has shown a promising future for decision-making in various computer games, such as Atari (Mnih et al., 2013; 2015), Go (Schrittwieser et al., 2020), and StarCraft (Vinyals et al., 2019). However, most successes have been achieved exclusively in simulation due to the poor sample efficiency of typical deep RL algorithms and other challenges. Reinforcement learning can be divided into model-based RL and model-free RL in light of data efficiency, and model-free RL is usually subdivided into off-policy RL and on-policy RL. Although model-based RL requires less sampled data, it needs to build a world model and predict the next state, which relies on a lot of prior work, as demonstrated in Hafner et al. (2019). On-policy RL algorithms do without a world model and have good stability, but performance improves slowly because policy-update steps are limited, and a large amount of sampled trajectory data is required during training (Schulman et al., 2015; 2017). Off-policy RL sits between model-based RL and on-policy RL in terms of sampling efficiency, and research in this field endures (Watkins & Dayan, 1992; Hessel et al., 2018; Barth-Maron et al., 2018) on account of its relatively high data efficiency and freedom from a world model. Experience replay (Lin, 1992) is an important part of improving the data efficiency of off-policy RL: it stores experience in a replay buffer and breaks temporal correlations by mixing data, so that experience can be used multiple times to update the networks. However, most current work samples transitions uniformly from the buffer, as in Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015), Soft Actor-Critic (SAC) (Haarnoja et al.
, 2018), Twin Delayed Deep Deterministic policy gradient (TD3) (Fujimoto et al., 2018), and many other algorithms (Van Hasselt et al., 2016; Mnih et al., 2016; Andrychowicz et al., 2017; Dabney et al., 2018; Liu et al., 2020). However, these approaches replay experience transitions at the same frequency, regardless of their significance. Schaul et al. (2016) developed a framework for prioritizing experience so that important transitions are replayed more frequently and the agent learns more efficiently. The prioritized experience replay (PER) method samples transitions with probability proportional to the magnitude of their temporal-difference (TD) error. However, prioritization introduces bias, which needs to be corrected with importance sampling. Meta-reinforcement learning (meta-RL) algorithms enable agents to learn new skills from small amounts of experience (Rothfuss et al., 2018; Mishra et al., 2018). Rakelly et al. (2019) developed an off-policy meta-RL algorithm that disentangles the inference and control of the task, but its performance still lags behind mainstream off-policy RL algorithms. In this paper, we first analyze how the state distribution changes from the beginning of the agent's interaction with the environment until the policy converges, and find that the states the agent visits differ across stages: once the policy has converged, the agent rarely transitions to the states it visited when the policy was far from convergence. If a large number of initial states¹ are used to train the Q-value and policy networks, network weights that have already been learned well will be gradually overwritten.
Therefore, the sampling proportion of the most recently generated transitions should be appropriately increased so that the agent pays more attention to learning from recent transitions. In this way, the agent's state is gradually transferred from a poor state to a better one, similar to the stable update from a poor policy to a better policy in Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) or Proximal Policy Optimization (PPO) (Schulman et al., 2017). Inspired by the Add Noise to Noise (AN2N) algorithm (Guo & Gao, 2021), we divide states into two categories: states explored with added noise are called key states², and the rest are called non-key states. To improve performance in key states promptly, we increase the probability of sampling new key states, making them more likely to participate in policy updates. We call this process ERM (Experience Replay More) and combine it with off-policy RL algorithms commonly used in continuous control tasks, such as SAC, obtaining faster convergence and state-of-the-art performance. 2 Preliminaries. We consider a reinforcement learning setup in which an agent learns a policy to maximize the expected reward while interacting with the environment (Sutton & Barto, 2018). At each timestep $t$, the agent receives an observation $o_t \in \mathcal{O}$ and selects an action $a_t \in \mathcal{A}$ according to its policy $\pi : \mathcal{O} \to \mathcal{A}$. After taking action $a_t$ in environment $E$, the agent receives a reward $r_t$ and the next observation $o_{t+1}$. Practical problems are usually partially observable Markov decision processes (POMDPs), in which only part of the observation information can be obtained. To simplify the problem, we assume the environment is fully observed, so $s_t = o_t$ and $\mathcal{S} = \mathcal{O}$.
In reinforcement learning, the action-value function $Q^\pi(s, a)$ is used to approximate the expected sum of rewards after taking action $a$ in state $s$, defined as follows:
$$Q^\pi(s, a) = \mathbb{E}_{s_t \sim p_\pi, a_t \sim \pi}\left[\sum_{t=0}^{+\infty} \gamma^t R(s_t, a_t)\right] \quad (1)$$
where $\gamma \in [0, 1]$ is the discount factor and $\mathbb{E}_{s_t \sim p_\pi, a_t \sim \pi}$ is the expectation over the distribution of trajectories $(s_0, a_0, s_1, a_1, \ldots)$. The mean value of $Q^\pi$ in the same state $s$ is called the value function $V^\pi$, defined as $V^\pi(s) = \mathbb{E}_{a \sim \pi(\cdot|s)}[Q^\pi(s, a)]$. We express the action-value function $Q^\pi$ in the form of the Bellman equation (Bellman & Kalaba, 1965):
$$Q^\pi(s_t, a_t) = \mathbb{E}_{s_{t+1} \sim p_\pi}\left[r(s_t, a_t) + \gamma\, \mathbb{E}_{a_{t+1} \sim \pi}[Q^\pi(s_{t+1}, a_{t+1})]\right] \quad (2)$$
¹An initial state is a state in which the agent does not perform well before the policy has converged; the agent almost never transitions to such states after the policy has converged. ²A key state is a state in which the agent has not performed well in the past. In this paper, familiarity with the DDPG, TD3, and SAC algorithms is assumed; here we mainly introduce DDPG as the basis. DDPG uses two separate fully connected neural networks to approximate the action-value function $Q(s, a|\theta^Q)$ and the policy function $\mu(s|\theta^\mu)$, and introduces an action-value target network $\theta^{Q'}$ and a policy target network $\theta^{\mu'}$ to stabilize policy updates. Gradient descent is used to optimize the network weights by minimizing the loss:
$$L(\theta^Q) = \mathbb{E}_{s_t \sim p^\mu(s_t|\theta^\mu),\, a_t \sim \mu(s_t|\theta^\mu)}\left[(Q(s_t, a_t|\theta^Q) - y_t)^2\right] \quad (3)$$
where
$$y_t = r(s_t, a_t) + \gamma Q'(s_{t+1}, \mu'(s_{t+1}|\theta^{\mu'})|\theta^{Q'}) \quad (4)$$
$$\nabla_{\theta^\mu} J \approx \mathbb{E}_{s \sim p(s_t|\theta^\mu)}\left[\nabla_{\theta^\mu} Q(s, a|\theta^Q)|_{s=s_t, a=\mu(s_t|\theta^\mu)}\right] = \mathbb{E}_{s \sim p(s_t|\theta^\mu)}\left[\nabla_a Q(s, a|\theta^Q)|_{s=s_t, a=\mu(s_t)}\, \nabla_{\theta^\mu} \mu(s_t|\theta^\mu)|_{s=s_t}\right] \quad (5)$$
Equation 4 is derived from equation 2. The weights of the target networks are updated periodically to slowly track the learned networks: $\theta' \leftarrow \tau\theta + (1-\tau)\theta'$ with $\tau \ll 1$, which alleviates fluctuations in the agent's learning process.
The policy is updated by equation 5, following the chain rule applied to the expected return $Q(s, a|\theta^Q)$ with respect to the parameters $\theta^\mu$. TD3 addresses the problem that DDPG is prone to overestimating the Q-function: an additional Q-function is added, and the smaller of the two predicted Q-values is used when computing the TD-error. TD3 also reduces the update frequency of the policy function. The most significant difference between SAC and DDPG is the introduction of the policy entropy $\mathcal{H}(\pi(\cdot|s_t))$, so the objective function in equation 1 is rewritten as:
$$Q^\pi(s, a) = \mathbb{E}_{s_t \sim p_\pi, a_t \sim \pi}\left[\sum_{t=0}^{+\infty} \gamma^t \left(R(s_t, a_t) - \alpha \log \pi(a_t|s_t)\right)\right] \quad (6)$$
where $\alpha$ is a temperature parameter that adjusts the optimization target: the agent pays more attention to exploration as $\alpha$ increases. 3 Experience Replay More When It's a Key Transition. The addition of experience replay improves the sample efficiency of off-policy RL. Many off-policy algorithms sample transitions uniformly from the experience buffer, which implicitly treats transitions generated at different times as equally important; we discuss this in Section 3.1. Prioritized Experience Replay (PER) improves on uniform sampling by using the TD-error of transitions so as to learn from samples more efficiently; however, PER still does not consider whether transitions generated at different times are equally important to the agent's current policy. 3.1 States are Different at Different Stages. Taking the HalfCheetah-v2 simulation environment as an example, we recorded the state information of an agent at different stages of training with the DDPG algorithm, collecting attitude information at a specific position and the corresponding Q-value from timestep 2e4 to 5.6e5, and then uniformly sampled 100 sets of data from the collected data for plotting, as shown in Fig. 1.
The legend in the upper left corner represents the Q-value, which is discretized to reasonably reduce the number of legend entries. Subgraph (a) is discretized in units of 20, and subgraph (b) in units of 100. It can be seen that as the agent interacts with the environment, not only is the policy gradually improving³, but the state distribution is also changing accordingly. After the policy converges, the agent occupies a set of states different from before, so more experience similar to the agent's recent states should be collected to train the neural networks and strengthen the current policy. If the experience sampled for training the policy and action-value networks differs greatly from the recently generated states, it will help improve the policy only to a small extent, and may even cause the newly learned network weights to be lost due to catastrophic forgetting in neural networks. Therefore, to further improve sample efficiency, experience replay should be switched from uniform sampling to targeted sampling. 3.2 Replaying More Key Transitions. The AN2N algorithm records dilemma states $s_d$ during the evaluation process; when the agent interacts with the environment, if the current state $s_c$ is considered highly similar to some $s_d$, additional exploration noise $N_a$ is added. In this paper, a state $s_c$ with additional noise is called a key state $s_k$, and a transition containing $s_k$ is called a key transition $tran_k$. The agent's policy is relatively fragile in $s_k$: if the agent has not previously learned how to get out of this dilemma, it is likely to fall into a series of bad states after $s_k$, which greatly reduces the agent's overall performance. ³Better decision-making ability usually yields a higher Q-value. Therefore, whether the agent can learn a good policy in $s_k$ is very important for improving performance.
It can be seen from equation 2 that the reward at a key state $s_k$ affects the Q-value of the preceding state through iteration of the dynamic equation. Nevertheless, if we uniformly sample experience for training the networks, the following three problems arise: • Since most of the experience generated under AN2N consists of non-key states, the probability of sampling key states $s_k$ from the experience buffer is small. • Key states $s_k$ are time-sensitive: a new key state needs to participate in training the networks as quickly as possible. Otherwise, once the agent's state distribution undergoes a relatively large change, its usefulness for improving the policy declines. • Since the agent's state distribution changes gradually, the more recent the sampled experience, the more similar its distribution is to the distribution of the latest experience being generated, and the better it satisfies the independent and identically distributed (i.i.d.) assumption. In response to the first problem, we designed two experience buffers to store key transitions and non-key transitions respectively, and used $\min(Prt_{AN2N}, K_t)$ to adjust the proportion of sampled key transitions, where $Prt_{AN2N}$ is the proportion of key transitions generated by AN2N and $K_t$ grows linearly with the agent's number of simulation steps. This ensures that key transitions are sampled from the experience buffer strictly according to this proportion. For the second and third problems, we linearly increase the probability of newer transitions being sampled. To compare the linear-distribution sampling function with the uniform distribution more vividly, we draw 100 samples in [0, 20] from each; the result is shown in Fig. 2. The pseudocode of the ERM algorithm is shown in Algorithm 1.
Algorithm 1: ERM
Input: sampling ratio $Prt_{AN2N}$, $K_t$; replay buffers $R_{non\text{-}key}$, $R_{key}$; batch sizes $bs_1$, $bs_2$, $bs_{sum}$; and AN2N parameters
Randomly initialize critic network $Q(s, a|\theta^Q)$ and actor $\mu(s|\theta^\mu)$ with weights $\theta^Q$ and $\theta^\mu$
Initialize target networks $Q'$ and $\mu'$ with weights $\theta^{Q'} \leftarrow \theta^Q$, $\theta^{\mu'} \leftarrow \theta^\mu$
for episode $e \in \{1, \ldots, M\}$ do
  Initialize a random process $N$ for action exploration
  Receive initial observation state $s_1$
  for $t \in \{1, \ldots, T\}$ do
    Execute AN2N action $a_t$; observe reward $r_t$ and new state $s_{t+1}$
    if (AN2N exploring more) then
      Store key transition $(s_t, a_t, r_t, s_{t+1})$ in $R_{key}$
    else
      Store transition $(s_t, a_t, r_t, s_{t+1})$ in $R_{non\text{-}key}$
    end
    $bs_1 = \min(Prt_{AN2N}, K_t)$
    $bs_2 = bs_{sum} - bs_1$
    Sample $bs_1$ and $bs_2$ transitions with the linear distribution from $R_{key}$ and $R_{non\text{-}key}$ for training
    Run the update step of the base algorithm (DDPG, SAC, TD3, etc.)
  end
end | The paper proposes a new method (ERM) to bias the decision over which transitions should be replayed more often. In particular, using ERM, some states are deemed as key and they are replayed more often (following also a recency bias). This new method is then tested in 4 environments on the DeepMind control suite by using 3 different RL algorithms (DDPG, SAC, TD3). | SP:fff4e3a93cc5acdc256c9d12c95d23d9ed458a85 |
Abelian Neural Networks | 1 INTRODUCTION. The vector representations of words called word2vec (Mikolov et al., 2013a;b), trained only on large unlabeled text data, are known to capture linear regularities between words. For example, vec("king") − vec("man") + vec("woman") results in the vector most similar to vec("queen"). Similar results have been observed not only in other word embeddings (Mnih & Kavukcuoglu, 2013; Pennington et al., 2014) but also in various embedding spaces, such as combined embeddings of text and images (Kiros et al., 2014), emoji embeddings (Eisner et al., 2016), the latent representations of deep convolutional generative adversarial networks (Radford et al., 2016), and the feature spaces of pretrained image models (Upchurch et al., 2017). Although these reports are interesting and attractive, such approaches have shortcomings. Since these methods learn from unlabeled data without explicitly incorporating analogical structure, there is no guarantee that the acquired embedding space has linear relations between instance pairs even if training itself works well. Even when an embedding space captures some kinds of analogies, it might not work for other kinds. Indeed, the word2vec representation works well for inflectional analogies (68.22% accuracy) but poorly for encyclopedic analogies (7.11% accuracy) in our preliminary experiments (see Table 3 in Section 4.2). In such a case, it is quite difficult to tune the training algorithm for the particular kinds of analogies one wants to use. To alleviate these issues, we propose to directly learn analogical relations on the embedding space from labeled data. One challenge in learning analogy in a supervised manner is how to model analogical functions.
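The king − man + woman example above is just a nearest-neighbor search under vector arithmetic. A toy sketch with hand-crafted 2-D "embeddings" (purely illustrative, not trained vectors):

```python
import numpy as np

def analogy(vec, a, b, c):
    """Answer a:b = c:? by the nearest cosine neighbor of
    vec[b] - vec[a] + vec[c], excluding the three query words."""
    q = vec[b] - vec[a] + vec[c]
    best, best_sim = None, -np.inf
    for w, v in vec.items():
        if w in (a, b, c):
            continue
        sim = q @ v / (np.linalg.norm(q) * np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# Hand-crafted embeddings: dimension 0 ~ "royalty", dimension 1 ~ "gender".
vec = {
    "king":  np.array([1.0,  1.0]),
    "queen": np.array([1.0, -1.0]),
    "man":   np.array([0.0,  1.0]),
    "woman": np.array([0.0, -1.0]),
}
```

Excluding the query words follows the standard evaluation convention for word-analogy benchmarks; without it, one of the query vectors is often the trivial nearest neighbor.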
A naive way to do this is to train two separate models corresponding to addition and subtraction, respectively; however, this does not reflect the analogical structure and might be inefficient. In this work, we propose an Abelian group network to incorporate an analogical inductive bias into a neural network. The proposed network is designed to satisfy the Abelian group conditions by using an invertible neural network. We also show that the Abelian group network is a universal approximator of smooth Abelian group operations. Since the inverse element in the Abelian group network and its gradient are analytically computable, we can train it for analogy tasks with common deep learning techniques such as stochastic gradient descent. As a side effect of the algebraic structure, we can construct a permutation invariant function, i.e., a function on multisets, by repeatedly composing an Abelian group operation over multiple inputs. Multiset models can handle inputs of different sizes, and it is important for such models to generalize automatically between input sizes, especially from small to large. However, existing multiset models (Zaheer et al., 2017; Qi et al., 2017) have no such theoretical guarantee. In contrast, our multiset models naturally induce this size-generalization capability because the output for larger inputs can be written as a composition over smaller ones. Further, we show that a necessary and sufficient condition for the composed function to be permutation invariant is that the binary operation forms an Abelian semigroup, and we propose an Abelian semigroup network using the characterization of associative symmetric polynomials. 2 PRELIMINARIES. 2.1 DEFINITIONS. In this section, we introduce basic notations and important definitions that play a key role in this work. 2.1.1 BASIC NOTATIONS. By $\mathbb{N}$, we represent the set of natural numbers including 0.
We denote a vector by a bold symbol, e.g., $\mathbf{x}$. Let $\mathbf{x} \in \mathbb{R}^d$ be a $d$-dimensional vector. We represent the $i$-th element ($1 \le i \le d$) of $\mathbf{x}$ by $x_i$. For $1 \le k \le d$, $\mathbf{x}_{\le k} \in \mathbb{R}^k$ is the $k$-dimensional vector $(x_1, \ldots, x_k)$ and $\mathbf{x}_{<k} \in \mathbb{R}^{k-1}$ is the $(k-1)$-dimensional vector $(x_1, \ldots, x_{k-1})$. We denote the elementwise product of two vectors $\mathbf{x}, \mathbf{y} \in \mathbb{R}^d$ by $\mathbf{x} \otimes \mathbf{y}$, such that $(\mathbf{x} \otimes \mathbf{y})_i = x_i y_i$, and the elementwise division of $\mathbf{x} \in \mathbb{R}^d$ by $\mathbf{y} \in (\mathbb{R} \setminus \{0\})^d$ by $\mathbf{x} \oslash \mathbf{y}$, such that $(\mathbf{x} \oslash \mathbf{y})_i = x_i / y_i$. By $\|\cdot\|$, we represent the $L_2$ (Euclidean) norm. 2.1.2 MULTISET AND PERMUTATION INVARIANCE. Here, we use $\mathcal{X}$ and $\mathcal{Y}$ to describe domains, which are typically Euclidean spaces, i.e., $\mathcal{X} = \mathbb{R}^{d_1}$ and $\mathcal{Y} = \mathbb{R}^{d_2}$. We denote the set of multisets over $\mathcal{X}$ by $\mathbb{N}^{\mathcal{X}}$. We use $\{\mathbf{x}_1, \ldots, \mathbf{x}_n\} \in \mathbb{N}^{\mathcal{X}}$ to describe a multiset composed of $\mathbf{x}_1, \ldots, \mathbf{x}_n \in \mathcal{X}$ (any confusion with sets is not problematic in this paper). The cardinality of a multiset is the number of elements counted with multiplicity and is expressed by $|\cdot|$, e.g., $|\{1, 2, 2, 3\}| = 4$. For $n \in \mathbb{N}$, the symmetric group $S_n$ is the set of all $n!$ bijective functions $\sigma : \{1, 2, \ldots, n\} \to \{1, 2, \ldots, n\}$. For a permutation $\sigma \in S_n$ and $X \in \mathcal{X}^n$, $\sigma \cdot X$ is defined such that $(\sigma \cdot X)_i = X_{\sigma(i)}$ for all $i \in \{1, \ldots, n\}$. A function $f : \mathcal{X}^n \to \mathcal{Y}$ is said to be permutation invariant if for any $X \in \mathcal{X}^n$ and any permutation $\sigma \in S_n$, $f(\sigma \cdot X) = f(X)$ holds. This concept can be extended to functions that take inputs of different sizes: a function $f : \bigcup_{k \in \mathbb{N}} \mathcal{X}^k \to \mathcal{Y}$ is called permutation invariant if for any $k \in \mathbb{N}$, any $X \in \mathcal{X}^k$, and any permutation $\sigma \in S_k$, $f(\sigma \cdot X) = f(X)$ holds. When $f : \bigcup_{k \in \mathbb{N}} \mathcal{X}^k \to \mathcal{Y}$ is permutation invariant, it can also be viewed as a function that takes multisets as input. For notational simplicity, we sometimes use the same symbol to express the multiset function $f : \mathbb{N}^{\mathcal{X}} \to \mathcal{Y}$. 2.1.3 UNIVERSALITY.
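Permutation invariance as defined above is easy to check numerically for a simple sum-pooled function; the particular feature map and readout below are our own toy choices, in the spirit of sum-pooling multiset models such as Zaheer et al. (2017).

```python
import itertools
import numpy as np

def multiset_f(X):
    """Sum-pooled multiset function: pool phi(x) = (x, x**2) over the
    elements, then apply a simple readout (here, the mean of the pooled
    vector). The summation makes the output order-independent."""
    pooled = sum(np.array([x, x ** 2]) for x in X)
    return float(pooled.mean())

X = [1.0, 2.0, 3.0]
# All 3! orderings of X should produce exactly one distinct output value.
outputs = {multiset_f(list(p)) for p in itertools.permutations(X)}
```

Because the same function accepts inputs of any length, it is also an example of the multiset view $f : \mathbb{N}^{\mathcal{X}} \to \mathcal{Y}$ described above.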
Universality is an important theoretical property of the expressive power of neural networks. Let $\mathcal{X}$ and $\mathcal{Y}$ be an input domain and an output domain, respectively. We consider a model $\mathcal{M}$ and a class of target functions $\mathcal{F}$, both of which are sets of functions $\mathcal{X} \to \mathcal{Y}$. The model $\mathcal{M}$ is a sup-universal approximator of $\mathcal{F}$ if for any target function $f^* \in \mathcal{F}$, any $\epsilon > 0$, and any compact subset $K \subset \mathcal{X}$, there exists a function $f \in \mathcal{M}$ such that
$$\sup_{x \in K} \|f(x) - f^*(x)\| < \epsilon. \quad (1)$$
If not noted otherwise, universality refers to the sup-universal property. 2.1.4 BASIC ALGEBRA. Here, we introduce the basic definitions of the important algebraic structures in this study. Let $G$ be a set and $\circ : G \times G \to G$ be a binary operation. Below, we review the four properties needed to define Abelian semigroups and groups. Associativity: for any $x, y, z \in G$, $(x \circ y) \circ z = x \circ (y \circ z)$. Identity element: there exists an element $e \in G$, called the identity element, such that for any $x \in G$, $x \circ e = e \circ x = x$. Inverse element: for any $x \in G$, there exists an element $x^{-1} \in G$, called the inverse element of $x$, such that $x \circ x^{-1} = x^{-1} \circ x = e$. Commutativity: for any $x, y \in G$, $x \circ y = y \circ x$. Table 1 shows which properties are required by each algebraic structure. A semigroup requires only associativity of the binary operation. A group is a semigroup with an identity element and inverse elements. An Abelian (semi)group is a (semi)group with commutativity. 2.2 INVERTIBLE NEURAL NETWORKS. Invertible neural networks are neural networks that approximate invertible functions $\mathbb{R}^d \to \mathbb{R}^d$. Here, we review some existing studies for the multi-dimensional case, i.e., $d \ge 2$, and the single-dimensional case, i.e., $d = 1$. 2.2.1 NORMALIZING FLOWS.
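One natural way to turn an invertible map φ into a binary operation satisfying all four properties above is x ∘ y = φ⁻¹(φ(x) + φ(y)); we do not claim this is the paper's exact construction, but it illustrates why invertibility yields an Abelian group. The 1-D sketch below uses the illustrative choice φ(x) = x³.

```python
def phi(x):
    """An invertible map R -> R (illustrative choice)."""
    return x ** 3

def phi_inv(y):
    # Real cube root, handling negative inputs.
    return abs(y) ** (1 / 3) * (1.0 if y >= 0 else -1.0)

def op(x, y):
    """x o y = phi^{-1}(phi(x) + phi(y)): associative and commutative
    because addition is, with identity phi^{-1}(0) and inverse element
    phi^{-1}(-phi(x))."""
    return phi_inv(phi(x) + phi(y))

identity = phi_inv(0.0)

def inverse(x):
    return phi_inv(-phi(x))
```

The group axioms are inherited from ordinary addition on the φ-transformed values, which is why a single invertible network suffices to parameterize the whole operation.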
Multi-dimensional invertible neural networks have been studied mainly in the context of normalizing flows (Tabak & Vanden-Eijnden, 2010), which iteratively apply invertible functions to a simple base probability distribution to express complex probability distributions (Kobyzev et al., 2020; Papamakarios et al., 2019). Many variants have been proposed, including residual flows (Behrmann et al., 2019), neural ODEs (Chen et al., 2018), and autoregressive flows (Kingma et al., 2017). Here we review affine coupling flows (Dinh et al., 2015), one of the most popular models, with parallelizable and efficient inverse computation. Each layer of an affine coupling flow maps $\mathbf{x} = (x_1, \ldots, x_d) \in \mathbb{R}^d$ to $\mathbf{y} = (y_1, \ldots, y_d) \in \mathbb{R}^d$ such that
$$\mathbf{y}_{\le k} = \mathbf{x}_{\le k}, \qquad \mathbf{y}_{>k} = \mathbf{x}_{>k} \otimes \exp(\alpha(\mathbf{x}_{\le k})) + \beta(\mathbf{x}_{\le k}), \quad (2)$$
where $\exp$ is applied elementwise and $\alpha, \beta : \mathbb{R}^k \to \mathbb{R}^{d-k}$ are trainable functions. The inverse is computed as follows:
$$\mathbf{x}_{\le k} = \mathbf{y}_{\le k}, \qquad \mathbf{x}_{>k} = (\mathbf{y}_{>k} - \beta(\mathbf{y}_{\le k})) \otimes \exp(-\alpha(\mathbf{y}_{\le k})). \quad (3)$$
Affine coupling layers are used in many successful applications such as NICE (Dinh et al., 2015), Real NVP (Dinh et al., 2017), and Glow (Kingma & Dhariwal, 2018). Although normalizing flows have a limited form of transform, they still admit universality on certain classes of functions (Teshima et al., 2020): affine coupling flows are $L^p$-universal (a weaker notion of universality) for $C^2$-diffeomorphisms. Some more complex models, including deep sigmoidal flows (Huang et al., 2018) and sum-of-squares polynomial flows (Jaini et al., 2019), are universal for $C^2$-diffeomorphisms. 2.2.2 SINGLE-DIMENSIONAL INVERTIBLE NEURAL NETWORKS. For single-dimensional functions, invertibility is equivalent to strict monotonicity. Monotonic networks (Sill, 1997) model strictly monotonic functions.
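Equations (2) and (3) translate directly into code; the toy conditioners α and β below stand in for trainable networks and are purely illustrative.

```python
import numpy as np

def coupling_forward(x, k, alpha, beta):
    """Affine coupling layer, equation (2): pass x_{<=k} through unchanged,
    scale-and-shift x_{>k} conditioned on x_{<=k}."""
    y = x.copy()
    y[k:] = x[k:] * np.exp(alpha(x[:k])) + beta(x[:k])
    return y

def coupling_inverse(y, k, alpha, beta):
    """Equation (3): the exact inverse. Note alpha and beta themselves are
    never inverted, which is what makes coupling layers cheap to invert."""
    x = y.copy()
    x[k:] = (y[k:] - beta(y[:k])) * np.exp(-alpha(y[:k]))
    return x

# Toy conditioners R^2 -> R^2 (stand-ins for trainable networks).
alpha = lambda h: np.array([0.1 * h.sum(), -0.2 * h.sum()])
beta = lambda h: np.array([h.sum(), 2.0 * h.sum()])

x = np.array([1.0, 2.0, 3.0, 4.0])
y = coupling_forward(x, 2, alpha, beta)
x_rec = coupling_inverse(y, 2, alpha, beta)
```

Because the conditioning half is copied unchanged, the inverse can re-evaluate α and β on exactly the same inputs as the forward pass, giving an exact (up to floating point) round trip.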
The monotonically increasing version with K groups and Jk units in the k-th group is as follows: f(x) = min_{1≤k≤K} max_{1≤j≤Jk} exp(w̃(k,j)) · x + b(k,j), (4) where w̃(k,j), b(k,j) ∈ R are trainable parameters. Since every slope exp(w̃(k,j)) is positive, each linear piece is strictly increasing, and the min and max operations preserve this property, so f is strictly increasing. Monotonic networks are a universal approximator for strictly monotonic differentiable functions. Monotonic rational-quadratic transforms (Durkan et al., 2019) are another universal model for the single-dimensional case. | The authors introduce Abelian group networks (AGN) that explicitly model the operational relation between elements in an Abelian group. The authors prove that the design has the ability to model such relations and present feasible neural network realizations. The authors also prove the ability of AGN to learn representations of multisets. The authors design experiments to test AGN's ability to model word analogy by viewing the "a:b = c:d" relation as $c - a + b$. The results seem to suggest that AGN outperforms MLP baselines always and word vectors in some cases. However, closer examinations reveal concerns about their strength and validity to support the efficacy of AGN. | SP:76143383d6721e00317c6a3f0b04929f89c8b982 |
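The min-max monotonic network of Eq. (4) above is simple enough to sketch directly. The following is a minimal NumPy illustration, assuming a scalar input and randomly initialized parameters (all names and shapes here are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# K groups, each with J units; exp(w) guarantees a positive slope for
# every linear piece, hence strict monotonicity of each unit.
K, J = 3, 4
w = rng.normal(size=(K, J))   # unconstrained; effective slope is exp(w) > 0
b = rng.normal(size=(K, J))

def monotonic_net(x):
    # Eq. (4): f(x) = min_k max_j exp(w_kj) * x + b_kj
    units = np.exp(w) * x + b          # shape (K, J)
    return np.min(np.max(units, axis=1))

xs = np.linspace(-3.0, 3.0, 201)
ys = np.array([monotonic_net(x) for x in xs])
# min and max of strictly increasing pieces remain strictly increasing.
assert np.all(np.diff(ys) > 0)
```

Because every unit is strictly increasing, the max over units and the min over groups are also strictly increasing, which is the invertibility property the single-dimensional construction relies on.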
Abelian Neural Networks | 1 INTRODUCTION. The vector representations of words called word2vec (Mikolov et al., 2013a;b), trained only on large unlabeled text data, are known to capture linear regularities between words. For example, vec("king") − vec("man") + vec("woman") results in the vector most similar to vec("queen"). Similar results have been observed not only in other word embeddings (Mnih & Kavukcuoglu, 2013; Pennington et al., 2014) but also in various embedding spaces, such as combined embeddings of text and image (Kiros et al., 2014), emoji embeddings (Eisner et al., 2016), the latent representation of deep convolutional generative adversarial networks (Radford et al., 2016), and the feature space of pretrained image models (Upchurch et al., 2017). Although these observations are interesting and attractive, such approaches have shortcomings. Since these methods of learning from unlabeled data usually do not explicitly incorporate learning of analogical structure, there is no guarantee that the acquired embedding space has linear relations between instance pairs even if training itself works well. Even when an embedding space captures some kinds of analogies, it might not work for other kinds. Indeed, the word2vec representation works well for inflectional analogies (68.22% accuracy) but poorly for encyclopedic analogies (7.11% accuracy) in our preliminary experiments (see Table 3 in Section 4.2). In such a case, it is quite difficult to tune the training algorithm for the particular kinds of analogies one wants to use. To alleviate these issues, we propose to directly learn analogical relations on the embedding space from labeled data. One challenge in learning analogy in a supervised manner is how to model analogical functions.
A naive way to do this is to train two separate models corresponding to addition and subtraction , respectively ; however , it does not reflect the analogical structure and might be inefficient . In this work , we propose an Abelian group network to incorporate an analogical inductive bias into a neural network . The proposed network is designed to satisfy the Abelian group condition by using an invertible neural network . We also show that the Abelian group network is a universal approximator of smooth Abelian group operations . Since the inverse element in the Abelian group network and its gradient are analytically computable , we can train it for analogy tasks by common techniques for deep learning , such as stochastic gradient descent . As a side effect of the algebraic structure , we can construct a permutation invariant function , i.e. , a function for multisets , by repeatedly composing an Abelian group operation for multiple inputs . Multiset models can handle inputs of different sizes , and it is important for the models to automatically generalize between different size inputs , especially from small to large ones . However , existing multiset models ( Zaheer et al. , 2017 ; Qi et al. , 2017 ) have no such theoretical guarantee . On the other hand , our multiset models naturally induce the size-generalization capability because the output for larger inputs can be written as the composition of small elements . Further , we show that a necessary and sufficient condition for the composed function being permutation invariant is that the binary operation forms an Abelian semigroup , and we propose an Abelian semigroup network , by using the characterization of associative symmetric polynomials . 2 PRELIMINARIES . 2.1 DEFINITIONS . In this section , let us introduce some basic notations and important definitions that will play a key role in this work . 2.1.1 BASIC NOTATIONS . By N , we represent the set of the natural numbers including 0 . 
We denote a vector by a bold symbol, e.g., x. Let x ∈ Rd be a d-dimensional vector. We represent the i-th element (1 ≤ i ≤ d) of x by xi. For 1 ≤ k ≤ d, x≤k ∈ Rk is the k-dimensional vector (x1, . . . , xk) and x<k ∈ Rk−1 is the (k − 1)-dimensional vector (x1, . . . , xk−1). We denote the elementwise product of two vectors x, y ∈ Rd by x ⊗ y, such that (x ⊗ y)i = xiyi, and the elementwise division of two vectors x ∈ Rd, y ∈ (R \ {0})d by x ⊘ y, such that (x ⊘ y)i = xi/yi. By ‖ · ‖, we represent the L2 (Euclidean) norm. 2.1.2 MULTISET AND PERMUTATION INVARIANCE. Here, we use X and Y to describe some domains, which are typically Euclidean spaces, i.e., X = Rd1 and Y = Rd2. We denote the set of multisets over X by NX. We use {x1, . . . , xn} ∈ NX to describe a multiset composed of x1, . . . , xn ∈ X (any confusion with sets is not problematic in this paper). The cardinality of a multiset is the number of elements with multiplicity and is expressed by | · |, e.g., |{1, 2, 2, 3}| = 4. For n ∈ N, the symmetric group Sn is the set of all n! bijective functions σ : {1, 2, . . . , n} → {1, 2, . . . , n}. For a permutation σ ∈ Sn and X ∈ Xn, σ · X is defined such that (σ · X)i = Xσ(i) for all i ∈ {1, . . . , n}. A function f : Xn → Y is said to be permutation invariant if for any X ∈ Xn and for any permutation σ ∈ Sn, f(σ · X) = f(X) holds. This concept can be extended to functions that take vectors of different dimensions. Namely, a function f : ⋃k∈N Xk → Y is called permutation invariant if for any k ∈ N, for any X ∈ Xk and for any permutation σ ∈ Sk, f(σ · X) = f(X) holds. When f : ⋃k∈N Xk → Y is permutation invariant, it can also be viewed as a function that takes multisets as input. For notational simplicity, we sometimes use the same symbol to express the multiset function: f : NX → Y. 2.1.3 UNIVERSALITY.
Universality is an important theoretical property of neural networks' expressive power. Let X and Y be an input domain and an output domain, respectively. We consider a model M and a class of target functions F, both of which are sets of functions X → Y. The model M is a sup-universal approximator of F if for any target function f∗ ∈ F, for any ε > 0, and for any compact subset K ⊂ X, there exists a function f ∈ M such that sup_{x∈K} ‖f(x) − f∗(x)‖ < ε. (1) If not noted otherwise, universality refers to the sup-universal property. 2.1.4 BASIC ALGEBRA. Here, we introduce the basic definitions of the important algebraic structures in this study. Let G be a set and ◦ : G × G → G be a binary operation. Below, we review four properties used to define Abelian semigroups and groups. Associativity: For any x, y, z ∈ G, (x ◦ y) ◦ z = x ◦ (y ◦ z). Identity Element: There exists an element e ∈ G, called the identity element, such that for any x ∈ G, x ◦ e = e ◦ x = x. Inverse Element: For any x ∈ G, there exists an element x−1 ∈ G, called the inverse element of x, such that x ◦ x−1 = x−1 ◦ x = e. Commutativity: For any x, y ∈ G, x ◦ y = y ◦ x. Table 1 shows which properties are required by each algebraic structure. A semigroup only requires associativity of the binary operation. A group is a semigroup with an identity element and inverse elements. An Abelian (semi)group is a (semi)group with commutativity. 2.2 INVERTIBLE NEURAL NETWORKS. Invertible neural networks are neural networks that approximate invertible functions Rd → Rd. Here, we review some existing studies for the multi-dimensional case, i.e., d ≥ 2, and the single-dimensional case, i.e., d = 1. 2.2.1 NORMALIZING FLOWS.
Multi-dimensional invertible neural networks have been studied mainly in the context of normalizing flows (Tabak & Vanden-Eijnden, 2010), which iteratively apply invertible functions to a simple original probability distribution to express complex probability distributions (Kobyzev et al., 2020; Papamakarios et al., 2019). Many variants have been proposed, including residual flows (Behrmann et al., 2019), neural ODEs (Chen et al., 2018), and autoregressive flows (Kingma et al., 2017). Here we review affine coupling flows (Dinh et al., 2015), one of the most popular models, with parallelizable and efficient inverse computation. Each layer of an affine coupling flow maps x = (x1, . . . , xd) ∈ Rd to y = (y1, . . . , yd) ∈ Rd such that y≤k = x≤k, y>k = x>k ⊗ exp(α(x≤k)) + β(x≤k), (2) where exp is applied elementwise and α, β : Rk → Rd−k are trainable functions. The inverse is computed as follows: x≤k = y≤k, x>k = (y>k − β(y≤k)) ⊗ exp(−α(y≤k)). (3) They are used in many successful applications such as NICE (Dinh et al., 2015), Real NVP (Dinh et al., 2017), and Glow (Kingma & Dhariwal, 2018). Although normalizing flows have a restricted form of transformation, they still admit universality on certain classes of functions (Teshima et al., 2020). The affine coupling flows are Lp-universal (a weaker notion of universality) for C2-diffeomorphisms. Some more complex models, including deep sigmoidal flows (Huang et al., 2018) and sum-of-squares polynomial flows (Jaini et al., 2019), are universal for C2-diffeomorphisms. 2.2.2 SINGLE-DIMENSIONAL INVERTIBLE NEURAL NETWORKS. For single-dimensional functions, invertibility is equivalent to strict monotonicity. Monotonic networks (Sill, 1997) model strictly monotonic functions.
The monotonically increasing version with K groups and Jk units in the k-th group is as follows: f(x) = min_{1≤k≤K} max_{1≤j≤Jk} exp(w̃(k,j)) · x + b(k,j), (4) where w̃(k,j), b(k,j) ∈ R are trainable parameters. Since every slope exp(w̃(k,j)) is positive, each linear piece is strictly increasing, and the min and max operations preserve this property, so f is strictly increasing. Monotonic networks are a universal approximator for strictly monotonic differentiable functions. Monotonic rational-quadratic transforms (Durkan et al., 2019) are another universal model for the single-dimensional case. | This paper introduces a new kind of neural network for multisets of vector inputs. Whereas DeepSets uses a function h(x, y) = g(f(x) + f(y)), the proposed model uses h(x, y) = f^{-1}(f(x) + f(y)), where f is an invertible neural network. It applies this network to the problem of learning word analogies. | SP:76143383d6721e00317c6a3f0b04929f89c8b982 |
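The affine coupling transform reviewed in the row above (Eqs. (2)–(3)) can be verified numerically: the inverse is exact and requires no iterative solve. A toy NumPy sketch with fixed, untrained stand-ins for the α and β networks (the specific matrices and activations are illustrative assumptions, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 6, 3

# Stand-ins for the trainable networks alpha, beta : R^k -> R^(d-k).
A = rng.normal(size=(d - k, k)) * 0.5
B = rng.normal(size=(d - k, k)) * 0.5
alpha = lambda u: np.tanh(A @ u)   # bounded log-scale for numerical stability
beta = lambda u: B @ u

def forward(x):
    # Eq. (2): y_<=k = x_<=k ; y_>k = x_>k * exp(alpha(x_<=k)) + beta(x_<=k)
    y = x.copy()
    y[k:] = x[k:] * np.exp(alpha(x[:k])) + beta(x[:k])
    return y

def inverse(y):
    # Eq. (3): exact analytic inverse of the coupling layer
    x = y.copy()
    x[k:] = (y[k:] - beta(y[:k])) * np.exp(-alpha(y[:k]))
    return x

x = rng.normal(size=d)
assert np.allclose(inverse(forward(x)), x)
```

The first k coordinates pass through unchanged, which is what makes the inverse computable in closed form and in parallel; stacking such layers with alternating partitions yields the full flow.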
Practical Integration via Separable Bijective Networks | 1 INTRODUCTION . Most supervised learning problems operate by training a model on a finite collection , T , of N ( typically paired ) examples , ( x , y ) . The model is updated by comparing its predicted output to the expected output and performing some flavor of stochastic gradient descent based on the comparison and various regularizers . The process is repeated by reexamining the elements of T in a random order , possibly with augmentation , until the model parameters converge or an iteration budget is exceeded . This relatively simple procedure has proven to be remarkably effective in a variety of domains and these models have begun to permeate every aspect of modern science and everyday life ( Brown et al. , 2020 ; He et al. , 2016 ; Silver et al. , 2017 ) . The deep learning revolution has also resulted in highly effective generative models such as VAEs ( Kingma & Welling , 2014 ) , GANs ( Goodfellow et al. , 2014 ) , and tractable likelihood models ( Dinh et al. , 2017 ; Grathwohl et al. , 2019 ) . These models are largely used to create novel samples of impressive quality . In addition to sampling , tractable likelihood models also provide an estimate of the probability density function of the data which can be used for additional , downstream processes . We augment the training process by constructing neural networks that allow for tractable integration over the input domain . This differs from implicit layers which utilize integration over a parameterized variable ( Chen et al. , 2018 ; Grathwohl et al. , 2019 ) . Access to fast and differentiable integrals allows us to regularize a model ’ s average behavior using metrics that may not be available otherwise . Integration over the input space also allows us to supervise how the model behaves in continuous regions that are not directly observed in T and may even be out-of-distribution ( OOD ) . 
Alternative methods attempt to supervise examples outside of T by performing random perturbations (Gutmann & Hyvärinen, 2010), interpolating along the line between known examples (Zhang et al., 2018), or via a generative process (Zenati et al., 2018; Akcay et al., 2018). However, these methods are only capable of observing a small fraction of the total space. By integrating over entire regions, it is possible to observe a large portion of the space based on statistical relevance. Main Contributions We propose a general architecture that enables tractable integration over the input space, enabling supervision and custom regularization over continuous regions. We demonstrate how to construct this network and how it reduces the computation cost of dense numeric integration from exponential in the number of dimensions to linear. We derive several useful integrated formulations over continuous regions. Finally, we explore the impact of this architecture and these regularizers on standard accuracy and on robustness to OOD examples on several standard classification datasets. Notation Throughout this work, we consider the M-dimensional input features, x = [x1, ..., xM] ∈ RM; the latent features, z ∈ Z ⊆ RM; and the K-wise classification probability, y. The input features x in the training set, Tx, are drawn from the in-distribution data, D ⊆ RM. Subscripts represent a particular dimension, e.g., Dm corresponds to the m-th dimension of the space. Parenthetical superscripts represent the subspace corresponding to a particular class, e.g., D(c) is the subset of D where the data belongs to class c. The bijective network is given as h such that h : D → Z. Probability distributions over D and Z are given by p with the corresponding subscript. Classification networks are given as f and g. Variables with a "hat," ŷ, are predictions of the true quantity, y. 2 MOTIVATION.
Neural networks are highly effective function approximators between two (typically) continuous spaces: f : X → Y. However, networks are typically trained and evaluated using a finite collection of points without any explicit assessment of the complete hypervolume spanned by the data. This omission is understandable from an implementation perspective, as the number of samples required to obtain a reasonable estimate over a volume scales exponentially with data dimensionality. However, human beings often have an understanding of how a process should behave on average. Ideally, we would like to embed this intuition into the model, but currently we cannot assess the average performance of a trained model outside of the held-out test set. Specifically, we would like to regularize the model by estimating the expected behavior of some metric, Ω, produced by the model over the training data: E_{x∼p(x)}[Ω(ŷ(x))] = ∫_X Ω(ŷ(x)) p(x) dx. (1) There are many useful choices of Ω over a variety of applications. If it is known what the model output should be on average (ȳ), we can construct Ω to encourage that behavior, e.g., Ω(ŷ) = (ȳ − ŷ)². Minimizing consistency metrics (Xie et al., 2020) is a common method to improve learning in label-starved problems. These encourage the model to produce similar outputs over neighborhoods around (labeled) examples from Tx, where neighborhoods are created by random or domain-specific augmentations. This process can be viewed as an approximation to an integral over the neighborhood, E_{ε∼p(ε)}[L(y, ŷ(x + ε))] = ∫ L(y, ŷ(x + ε)) p(ε) dε, (2) where L is a distance-like metric and ε is the neighborhood perturbation. Equation 2 can be generalized to other neighborhoods. We can recast the standard classification problem as a discrete approximation to an integral.
Typically, we minimize the cross-entropy between ŷ(x; θ) and y over the model parameters, θ, for all (x, y) ∈ T, which becomes an integral over class-conditioned distributions, p(x|c): min_θ − Σ_{(x,y)∈T} Σ_k yk log(ŷk(x; θ)) ⇒ min_θ − Σ_k ∫_{D(k)} yk log(ŷk(x; θ)) p(x|k) dx. (3) Unfortunately, integration in high dimension is difficult. Naive gridded solutions require an exponential number of points, with error decreasing as O(G−M) for G M-dimensional points. Monte Carlo (MC) methods theoretically have better performance, with error that decreases as O(G−1/2). However, the rate of convergence for MC methods depends on the variance of the samples (Veach, 1998), which may make for poor approximations in practice. Importance sampling (Bishop, 2006) can improve the performance of Monte Carlo methods by adapting to the regions with the greatest contribution. We choose to model the data using a separable function. Separable functions have the key benefit of decomposing M-dimensional integrals into a combination of M one-dimensional integrals. Each of these one-dimensional integrals can then be solved using any number of highly accurate solvers (e.g., Runge-Kutta (Shampine, 2005), Dormand-Prince (Dormand & Prince, 1980), Tsit (Tsitouras, 2011), etc.) that have error rates better than O(G−4) but are unavailable in high dimensions. The use of separable functions is a component of the VEGAS algorithm (Peter Lepage, 1978) and is utilized in conjunction with adaptive Monte Carlo sampling to approximate high-dimensional integrals. The use of a separable function over the input space may make estimation of integrals over the model more accessible; however, it imposes a strong, inappropriate inductive bias. The obvious solution of using a standard neural network feature extractor and integrating over the learned features means that we would no longer have a valid integrator over the input space.
Fortunately , we can solve this problem by including a bijective transform prior to the separable network as a feature extractor . The bijective transform allows us to decouple the data into a latent space where the data can be modeled using a separable network and guarantees equality between integrals in the latent space and integrals in the input space . The bijective model has the added benefit of allowing us to create a tractable likelihood model ( Dinh et al. , 2017 ) over the data which gives us access to p ( x ) ( or possibly p ( x|y ) ) and makes it easier to draw new samples . We discuss bijective and separable functions in Sec . 3 , how they come together in Sec . 4 , and the specific choices we use in Sec . 5 . 3 BACKGROUND . We perform practical integration over the input space by breaking neural network models down into two key components : ( 1 ) a bijective feature extractor , ( 2 ) a separable task network , see Fig . 1 . For the sake of simplicity , we only consider classification tasks in this work . This makes our total network analogous with the common network breakdown where a classifier , often a linear layer or an MLP , is constructed on a feature extractor , such as a CNN . However , unlike the typical process , we must place constraints on both networks so that we can integrate over the input domain . This network breakdown is similar to hybrid networks ( Chen et al. , 2019 ; Nalisnick et al. , 2019b ) except for the separability requirement on the classifier . 3.1 BIJECTIVE NETWORKS . Bijective networks are the key component in flow-based likelihood models . A bijective network , h : D → Z , has a known forward and inverse operation so that data can exactly be reconstructed after the transformation . 
This allows for exact likelihood estimation via the change-of-variables formula: z = h(x; θ), x = h−1(z; θ), pX(x) = pZ(h(x; θ)) |det(∂h/∂x)|, (4) where pZ is a predefined distribution over the latent space, often a standard Gaussian. These models are trained via Eq. 4 to maximize the likelihood of the examples in T. Once trained, flow-based likelihood models are commonly used as a generative process where samples are drawn from pZ and are then inverted through h to arrive at an example in D. Our goal is not generative; however, it is possible for us to use this process once training is complete. Instead, we will take advantage of the fact that we can choose pZ and then utilize it in downstream tasks. Normalizing flow bijectors provide rich feature extractors that can represent a distribution of complicated inputs with simply distributed, independent features, z. Given the independence and expressibility of these learned independent features, we build estimators using separable functions over z, which enables us to integrate over the data's domain while retaining expressibility. The requirement for bijectivity places a strong constraint on network design, eliminating many common choices due to the need to maintain dimension or invert element-wise activations. Even naive convolutional operations become unavailable since they are not generally invertible. Modern advances have demonstrated methods to work around these limitations through the use of clever partitioning and coupling tricks (Dinh et al., 2017) or the use of constraints (Chen et al., 2019). However, the field of learnable bijective functions is less advanced than its unconstrained counterparts, which often results in reduced performance on auxiliary tasks. Flow models are well studied, and we take advantage of several recent architectures to test our methods. In particular, we consider three different bijective models.
The simplest models are constructed via stacked affine-coupling layers ( Dinh et al. , 2017 ) and invertible , fully-connected layers . We primarily utilize Glow ’ s ( Kingma & Dhariwal , 2018 ) invertible 1× 1 convolutions to process image data . | This paper introduces a hybrid model architecture that makes it possible to integrate a separable loss function across a region of input space. Such integrals can be used as regularizers for robustness near the observed data points (local consistency), and out-of-distribution (OOD) detection in neighborhoods away from the observations. The paper uses these regularizers to train models that are less vulnerable to adversarial attacks and out of distribution mishaps. | SP:b2da79a6f231ac2b1aae65062f5eb22bdf97cb2d |
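The change-of-variables formula reviewed in the row above can be checked on a one-dimensional toy bijection, where the Jacobian determinant reduces to a scalar derivative. The specific transform and prior below are illustrative choices, not the paper's models:

```python
import numpy as np

# Toy bijection h(x) = 2x + 1 with latent prior p_Z = standard normal,
# so Z = h(X) ~ N(0, 1) implies X ~ N(-0.5, 0.25).
h = lambda x: 2.0 * x + 1.0
dh_dx = 2.0                                   # |det dh/dx| for this h
p_z = lambda z: np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)

# Change of variables: p_X(x) = p_Z(h(x)) * |det dh/dx|
p_x = lambda x: p_z(h(x)) * abs(dh_dx)

# Sanity check: p_X integrates to 1 over a wide grid (Riemann sum).
xs = np.linspace(-10.0, 10.0, 20001)
total = np.sum(p_x(xs)) * (xs[1] - xs[0])
assert abs(total - 1.0) < 1e-3
```

The |det| factor is what keeps the density normalized after the transform; omitting it would integrate to |det|⁻¹ instead of 1, which is why tractable Jacobians are central to flow design.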
Practical Integration via Separable Bijective Networks | 1 INTRODUCTION . Most supervised learning problems operate by training a model on a finite collection , T , of N ( typically paired ) examples , ( x , y ) . The model is updated by comparing its predicted output to the expected output and performing some flavor of stochastic gradient descent based on the comparison and various regularizers . The process is repeated by reexamining the elements of T in a random order , possibly with augmentation , until the model parameters converge or an iteration budget is exceeded . This relatively simple procedure has proven to be remarkably effective in a variety of domains and these models have begun to permeate every aspect of modern science and everyday life ( Brown et al. , 2020 ; He et al. , 2016 ; Silver et al. , 2017 ) . The deep learning revolution has also resulted in highly effective generative models such as VAEs ( Kingma & Welling , 2014 ) , GANs ( Goodfellow et al. , 2014 ) , and tractable likelihood models ( Dinh et al. , 2017 ; Grathwohl et al. , 2019 ) . These models are largely used to create novel samples of impressive quality . In addition to sampling , tractable likelihood models also provide an estimate of the probability density function of the data which can be used for additional , downstream processes . We augment the training process by constructing neural networks that allow for tractable integration over the input domain . This differs from implicit layers which utilize integration over a parameterized variable ( Chen et al. , 2018 ; Grathwohl et al. , 2019 ) . Access to fast and differentiable integrals allows us to regularize a model ’ s average behavior using metrics that may not be available otherwise . Integration over the input space also allows us to supervise how the model behaves in continuous regions that are not directly observed in T and may even be out-of-distribution ( OOD ) . 
Alternative methods attempt to supervise examples outside of T by performing random perturbations (Gutmann & Hyvärinen, 2010), interpolating along the line between known examples (Zhang et al., 2018), or via a generative process (Zenati et al., 2018; Akcay et al., 2018). However, these methods are only capable of observing a small fraction of the total space. By integrating over entire regions, it is possible to observe a large portion of the space based on statistical relevance. Main Contributions We propose a general architecture that enables tractable integration over the input space, enabling supervision and custom regularization over continuous regions. We demonstrate how to construct this network and how it reduces the computation cost of dense numeric integration from exponential in the number of dimensions to linear. We derive several useful integrated formulations over continuous regions. Finally, we explore the impact of this architecture and these regularizers on standard accuracy and on robustness to OOD examples on several standard classification datasets. Notation Throughout this work, we consider the M-dimensional input features, x = [x1, ..., xM] ∈ RM; the latent features, z ∈ Z ⊆ RM; and the K-wise classification probability, y. The input features x in the training set, Tx, are drawn from the in-distribution data, D ⊆ RM. Subscripts represent a particular dimension, e.g., Dm corresponds to the m-th dimension of the space. Parenthetical superscripts represent the subspace corresponding to a particular class, e.g., D(c) is the subset of D where the data belongs to class c. The bijective network is given as h such that h : D → Z. Probability distributions over D and Z are given by p with the corresponding subscript. Classification networks are given as f and g. Variables with a "hat," ŷ, are predictions of the true quantity, y. 2 MOTIVATION.
Neural networks are highly effective function approximators between two (typically) continuous spaces: f : X → Y. However, networks are typically trained and evaluated using a finite collection of points without any explicit assessment of the complete hypervolume spanned by the data. This omission is understandable from an implementation perspective, as the number of samples required to obtain a reasonable estimate over a volume scales exponentially with data dimensionality. However, human beings often have an understanding of how a process should behave on average. Ideally, we would like to embed this intuition into the model, but currently we cannot assess the average performance of a trained model outside of the held-out test set. Specifically, we would like to regularize the model by estimating the expected behavior of some metric, Ω, produced by the model over the training data: E_{x∼p(x)}[Ω(ŷ(x))] = ∫_X Ω(ŷ(x)) p(x) dx. (1) There are many useful choices of Ω over a variety of applications. If it is known what the model output should be on average (ȳ), we can construct Ω to encourage that behavior, e.g., Ω(ŷ) = (ȳ − ŷ)². Minimizing consistency metrics (Xie et al., 2020) is a common method to improve learning in label-starved problems. These encourage the model to produce similar outputs over neighborhoods around (labeled) examples from Tx, where neighborhoods are created by random or domain-specific augmentations. This process can be viewed as an approximation to an integral over the neighborhood, E_{ε∼p(ε)}[L(y, ŷ(x + ε))] = ∫ L(y, ŷ(x + ε)) p(ε) dε, (2) where L is a distance-like metric and ε is the neighborhood perturbation. Equation 2 can be generalized to other neighborhoods. We can recast the standard classification problem as a discrete approximation to an integral.
Typically, we minimize the cross-entropy between ŷ(x; θ) and y over the model parameters, θ, for all (x, y) ∈ T, which becomes an integral over class-conditioned distributions, p(x|c): min_θ − Σ_{(x,y)∈T} Σ_k yk log(ŷk(x; θ)) ⇒ min_θ − Σ_k ∫_{D(k)} yk log(ŷk(x; θ)) p(x|k) dx. (3) Unfortunately, integration in high dimension is difficult. Naive gridded solutions require an exponential number of points, with error decreasing as O(G−M) for G M-dimensional points. Monte Carlo (MC) methods theoretically have better performance, with error that decreases as O(G−1/2). However, the rate of convergence for MC methods depends on the variance of the samples (Veach, 1998), which may make for poor approximations in practice. Importance sampling (Bishop, 2006) can improve the performance of Monte Carlo methods by adapting to the regions with the greatest contribution. We choose to model the data using a separable function. Separable functions have the key benefit of decomposing M-dimensional integrals into a combination of M one-dimensional integrals. Each of these one-dimensional integrals can then be solved using any number of highly accurate solvers (e.g., Runge-Kutta (Shampine, 2005), Dormand-Prince (Dormand & Prince, 1980), Tsit (Tsitouras, 2011), etc.) that have error rates better than O(G−4) but are unavailable in high dimensions. The use of separable functions is a component of the VEGAS algorithm (Peter Lepage, 1978) and is utilized in conjunction with adaptive Monte Carlo sampling to approximate high-dimensional integrals. The use of a separable function over the input space may make estimation of integrals over the model more accessible; however, it imposes a strong, inappropriate inductive bias. The obvious solution of using a standard neural network feature extractor and integrating over the learned features means that we would no longer have a valid integrator over the input space.
Fortunately , we can solve this problem by including a bijective transform prior to the separable network as a feature extractor . The bijective transform allows us to decouple the data into a latent space where the data can be modeled using a separable network and guarantees equality between integrals in the latent space and integrals in the input space . The bijective model has the added benefit of allowing us to create a tractable likelihood model ( Dinh et al. , 2017 ) over the data which gives us access to p ( x ) ( or possibly p ( x|y ) ) and makes it easier to draw new samples . We discuss bijective and separable functions in Sec . 3 , how they come together in Sec . 4 , and the specific choices we use in Sec . 5 . 3 BACKGROUND . We perform practical integration over the input space by breaking neural network models down into two key components : ( 1 ) a bijective feature extractor , ( 2 ) a separable task network , see Fig . 1 . For the sake of simplicity , we only consider classification tasks in this work . This makes our total network analogous with the common network breakdown where a classifier , often a linear layer or an MLP , is constructed on a feature extractor , such as a CNN . However , unlike the typical process , we must place constraints on both networks so that we can integrate over the input domain . This network breakdown is similar to hybrid networks ( Chen et al. , 2019 ; Nalisnick et al. , 2019b ) except for the separability requirement on the classifier . 3.1 BIJECTIVE NETWORKS . Bijective networks are the key component in flow-based likelihood models . A bijective network , h : D → Z , has a known forward and inverse operation so that data can exactly be reconstructed after the transformation . 
This allows for exact likelihood estimation via the change-of-variables formula: $z = h(x; \theta)$, $x = h^{-1}(z; \theta)$, $p_X(x) = p_Z(h(x; \theta))\left|\det\frac{\partial h}{\partial x}\right|$ (4), where $p_Z$ is a predefined distribution over the latent space, often a standard Gaussian. These models are trained via Eq. 4 to maximize the likelihood of the examples in T. Once trained, flow-based likelihood models are commonly used as a generative process where samples are drawn from $p_Z$ and then inverted through h to arrive at an example in D. Our goal is not generative; however, it is possible for us to use this process once training is complete. Instead, we will take advantage of the fact that we can choose $p_Z$ and then utilize it in downstream tasks. Normalizing-flow bijectors provide rich feature extractors that can represent a distribution of complicated inputs with simply distributed, independent features z. Given the independence and expressibility of these learned features, we build estimators using separable functions over z, which enables us to integrate over the data's domain while retaining expressibility. The requirement of bijectivity places a strong constraint on network design, eliminating many common choices due to the need to maintain dimension or invert element-wise activations. Even naive convolutional operations become unavailable since they are not generally invertible. Modern advances have demonstrated methods to work around these limitations through the use of clever partitioning and coupling tricks (Dinh et al., 2017) or the use of constraints (Chen et al., 2019). However, the field of learnable bijective functions is less advanced than its unconstrained counterparts, which often results in reduced performance on auxiliary tasks. Flow models are well studied, and we take advantage of several recent architectures to test our methods. In particular, we consider three different bijective models.
The simplest models are constructed via stacked affine-coupling layers ( Dinh et al. , 2017 ) and invertible , fully-connected layers . We primarily utilize Glow ’ s ( Kingma & Dhariwal , 2018 ) invertible 1× 1 convolutions to process image data . | For input-output pair $(x,y)$, the paper undertakes the task of estimating (*) $E_{x \sim p(x)} \Omega(\hat y(x))$. When $x$ is high dimensional, integration is difficult. Standard ML/DL applies MC to estimate this integral based on an iid sample $(x_i,y_i), i=1,\ldots,n$. This paper suggests there is a better way. The main idea is to recognize that if $f$ is a separable function and $h$ is a bijective function, then (**) $E_{x \sim p(x)} f(h(x))$ is easy to evaluate. | SP:b2da79a6f231ac2b1aae65062f5eb22bdf97cb2d |
Pretraining for Language Conditioned Imitation with Transformers | 1 INTRODUCTION . Whereas most contemporary reinforcement learning ( RL ) methods require programming a reward function , we are interested in training RL agents which understand language , enabling a flexible and interpretable interface and facilitating human-AI interaction . Natural language often carries tremendous amounts of information and can thus both provide strong learning signals and describe complex scenarios . Furthermore , agents which are able to learn from a diverse set of tasks – such as the space of tasks specified by language – can take advantage of broader data for supervision , access more diversity in an environment , and perform more effective few-shot learning . One path towards more general agents is data-driven reinforcement learning , which promises to leverage diverse data for improved generalization in RL ( Levine et al. , 2020 ) , similar to approaches in natural language . Recent approaches to data-driven RL have proposed leveraging powerful sequence models ( Chen et al. , 2021 ; Janner et al. , 2021 ) that are now mainstream in language ( Devlin et al. , 2018 ; Radford et al. , 2019 ) . While initial approaches to training language models on large-scale unsupervised data focused on learning individual word embeddings ( Pennington et al. , 2014 ) , newer models , especially those based on the transformer architecture ( Vaswani et al. , 2017 ) , can also transfer sequential aspects learned during pretraining . Similar work in RL has found great benefit to transferring learned embeddings via a vision encoder ( Yarats et al. , 2021 ) , but transferring sequential behavior knowledge is relatively unstudied . However , using transformer sequence modeling approaches , we can develop methods which can finetune more effectively from offline data . 
To better study this, we propose a new benchmark and release a new dataset1, Text-Conditioned Frostbite, in which the agent must perform tasks specified by language instructions in the Atari Frostbite environment (Bellemare et al., 2013). Crucially, our dataset consists of a diverse set of tasks, ranging from easier tasks such as "try to stay on the right side" to harder tasks such as "spend as much time as possible on the fourth ice floe". Inspired by the proposal from Lake et al. (2017), we believe Frostbite is a diverse and dynamic environment requiring temporally extended strategies, whereas traditional evaluations underutilize the environment by finding agents with single, narrow strategies. We focus on Atari Frostbite as it allows for combining this naturalistic language with complex goals and behaviors in a familiar, well-studied environment. We introduce a transformer operating on text-state-action sequences, which we call Text Decision Transformer (TDT), adapting the single-stream Decision Transformer (Chen et al., 2021) to our setting of conditioning on language goals by replacing the reward tokens with text tokens. We perform empirical evaluations on our environment using the TDT, finding it serves as a reasonable baseline architecture for study, and evaluate performance with unsupervised pretraining. Our contributions are as follows: 1. We propose a new multimodal benchmark, Text-Conditioned Frostbite, and provide a dataset of 5M labelled transitions with 500 different tasks. 2. We evaluate a single-stream architecture, Text Decision Transformer, and find it outperforms MLP baselines, highlighting sequence modeling-based approaches. 3. We investigate unsupervised pretraining, both on language and on trajectories, and find trajectory-based pretraining yields improved performance in low-data settings.
Furthermore, we investigate scaling laws, finding the task is complex enough that TDT improves with more data. 2 PRELIMINARIES. 2.1 LANGUAGE-CONDITIONED IMITATION LEARNING. In the problem of Language-Conditioned Imitation Learning, we are provided with a tuple $(S, A, M, P, R)$, corresponding to the states, actions, tasks, transition dynamics, and a reward or score function. A trajectory $\tau = (s_1, a_1, \ldots, s_t, a_t)$ is a sequence of states and actions generated by the transition dynamics and the agent's action at every timestep, where t is less than or equal to the maximum episode length. The score function R is prompted with a trajectory $\tau$ and a text task m, and outputs the degree of success that the trajectory had with respect to the task; note we only use R for evaluation purposes. (Footnote 1: Dataset and code are available at: < github will be made public at publication >; a temporary link to the dataset and code for reproduction is in the supplementary material.) A policy/model $\pi_\theta(a_t \mid s_{\le t}, a_{<t}, m)$ is tasked with outputting actions which maximize the score function, i.e., $\mathbb{E}_{a \sim \pi_\theta}[R(\tau, m)]$. The model has access to a dataset of expert trajectory-language pairs $(\tau, m)$ for training. 2.2 TRANSFORMERS. Transformers were originally proposed by Vaswani et al. (2017) and have since become the frontrunner in solving sequence modelling problems. The models are built with sequential self-attention modules. Each self-attention module is given a sequence of n embeddings and outputs another n embeddings with the same dimensions. The self-attention modules work by taking the inner product of the current embedding $x_i$ with the query matrix Q to give the query $q_i$, and with the key matrix K to give the key $k_i$. The model takes the inner product between the query and each key for $j \in [1, n]$, yielding n logits, over which the model takes the softmax to yield a probability distribution.
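A minimal numeric sketch of this self-attention computation, in pure Python for a single head, with hypothetical weight matrices and without the usual $1/\sqrt{d}$ scaling (matching the unscaled inner products described here):

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over n embeddings (lists of floats).
    Returns n output embeddings; each is a convex combination of the values."""
    matvec = lambda W, x: [sum(w * xi for w, xi in zip(row, x)) for row in W]
    Q = [matvec(Wq, x) for x in X]
    K = [matvec(Wk, x) for x in X]
    V = [matvec(Wv, x) for x in X]
    Z = []
    for q in Q:
        logits = [sum(qi * ki for qi, ki in zip(q, k)) for k in K]  # <q_i, k_j>
        w = softmax(logits)                                         # n probabilities
        Z.append([sum(wj * v[d] for wj, v in zip(w, V)) for d in range(len(V[0]))])
    return Z

# Toy input: three 2-D embeddings; identity matrices as hypothetical Q/K/V weights.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
Z = self_attention(X, I, I, I)
assert len(Z) == 3 and len(Z[0]) == 2
# Each output lies inside the convex hull of the value vectors.
for z in Z:
    for d in range(2):
        assert min(v[d] for v in X) - 1e-9 <= z[d] <= max(v[d] for v in X) + 1e-9
```

Because the softmax weights are non-negative and sum to one, every output is a convex combination of the value vectors, which is the property the passage emphasizes.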
The final result is then a convex combination of the value vectors $v_i$, with the weights dictated by the probability distribution. More concisely, the i-th component of the output, $z_i$, is given by $z_i = \sum_{j=1}^{n} \mathrm{softmax}\big(\{\langle q_i, k_{j'} \rangle\}_{j'=1}^{n}\big)_j \cdot v_j$ (1). 2.3 DECISION TRANSFORMER. Our work builds on Decision Transformer (DT) (Chen et al., 2021), which uses a GPT-2 (Radford et al., 2019) transformer architecture, leveraging a causal mask where the i-th prediction depends only on input tokens up to i. DT models a trajectory sequence $(\hat R_1, s_1, a_1, \ldots, \hat R_t, s_t, a_t)$ autoregressively, where $\hat R_t$ is the returns-to-go from timestep t to the end of the episode. Test-time evaluation proceeds by initializing a behavior specification (DT uses a desired return $\hat R_1$) and alternating between receiving states from the environment and sampling actions from the model. To reduce computation, DT only uses the last K timesteps as input to the model, called the context size. A context size of 1 represents a Markovian policy. 3 ENVIRONMENT AND DATASET. The first contribution of our work is in developing a diverse yet accessible environment for evaluating the performance of text-conditioned imitation models. We build on the proposal from Lake et al. (2017), in which the authors propose the "Frostbite Challenge". The authors argue Frostbite is challenging due to the requirement of temporally extended strategies and the diverse landscape of possible tasks to complete. To construct a dataset for this task, we manually collected several hundred tasks and trajectories. These tasks range from "don't move" to "stay as long as you can on the fourth ice floe" to "die to a crab on your first life", and so on. 3.1 ENVIRONMENT DETAILS. Evaluation via RAM state. The first challenge with such an environment is building deterministic evaluation strategies.
For this reason, evaluation of the agent is performed by accessing the RAM state of the game, calculating the level, position, score, and other related quantities of the agent, and comparing them with the desired outcomes. This evaluation loop runs after a trajectory is completed and can be prompted with a number of different desired tasks. The interface is flexible enough to even generate rewards for specific tasks, though we have not yet conducted such a study. We use the underlying state space of the game to calculate specific metrics in the environment. These metrics are then normalized to the range [0, 1], so that no task gets unequal weighting. For example, in a task like "spend as much time as possible on the first ice floe", the assigned success is simply (number of timesteps on 1st ice floe) / (number of timesteps). Performance measurement. Performance of the model is measured by prompting the model with several different tasks and subsequently running the evaluation procedure on the generated trajectories. The score is normalized to be in the range [0, 1] and is designed to be a success metric. If the model receives a score of 1 on a generated trajectory, then the model has succeeded in obeying the task, while a score of 0 indicates a failure. 3.2 DATASET DETAILS. Overview. The dataset consists of more than 700 labelled missions, for a total of 5M timesteps, each consisting of the current task, the current state, and the action taken. The Atari environment supports 18 discrete actions corresponding to directional movement and interaction. Game frames are resized and converted to black and white. The environment does not explicitly perform frame-stacking, as our agent has access to context, but frame-stacking can be implemented with the interface. We list some example tasks in Table 1 and include more in the Appendix. Collection. We collected several hundred tasks and trajectories by hand.
These trajectories were generated by taking advantage of the Atari-Gym user interface and playing Frostbite with keyboard controls . At the beginning of every task , users are prompted with a task that they will try to achieve , and their states , actions , and tasks are recorded and saved . Task categorizations . For evaluation , a subset of the collected tasks are supported and separated into distinct task suites depending on the degree of control necessary to perform the task . For instance , in the easy task suite , there are tasks such as “ dont move , ” “ die on the first level , ” and “ stay on the left side. ” Such tasks are quite simple and do not require advanced visual perception or control of the agent . In the medium task suite , tasks require higher levels of perception , but are less demanding than the hard suite . The medium task suite contains “ get as close to 1000 score as you can , ” “ reach level 2 , ” and “ jump between the second and third ice floes. ” The hard suite contains tasks that require the most advanced control , such as “ reach level 5 ” and “ spend as much time as possible on the fourth ice floe. ” The task suites include only tasks that support RAM score evaluation . A list of further training tasks can be found in the Appendix ( see Table 3 ) . 4 METHOD . Here , we detail the architecture and modifications of the Text-Conditioned Decision Transformer . This model uses a causal transformer architecture in the style of GPT-2 ( Radford et al. , 2019 ) to model sequences jointly consisting of text and trajectories . | This paper presents a new multimodal benchmark for language conditioned RL settings, where an agent must complete tasks specified by text instructions in the Atari Frostbite environment. Their benchmark provides a dataset of 5M text-labeled transitions for training. Finally, the authors propose a model for language conditioned RL settings based on Transformer architecture as a baseline. 
| SP:396758010e3aaccd1f0befef2cbad6c2b2d03a06 |
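The returns-to-go bookkeeping and context truncation that Decision Transformer relies on (Sec. 2.3 above) can be sketched as follows; TDT replaces the return tokens with text tokens, but the interleaving is analogous. The helper names are illustrative, not from the paper's code.

```python
def returns_to_go(rewards):
    """R-hat_t = sum of rewards from timestep t to the end of the episode."""
    rtg, total = [], 0.0
    for r in reversed(rewards):
        total += r
        rtg.append(total)
    return rtg[::-1]

def dt_input(rtg, states, actions, K):
    """Interleave (R-hat_t, s_t, a_t) triples and keep only the last K timesteps
    (the 'context size'); K = 1 corresponds to a Markovian policy."""
    seq = [tok for triple in zip(rtg, states, actions) for tok in triple]
    return seq[-3 * K:]

rewards = [0.0, 1.0, 0.0, 2.0]
rtg = returns_to_go(rewards)
assert rtg == [3.0, 3.0, 2.0, 2.0]

states, actions = ["s1", "s2", "s3", "s4"], ["a1", "a2", "a3", "a4"]
assert dt_input(rtg, states, actions, K=2) == [2.0, "s3", "a3", 2.0, "s4", "a4"]
```

At test time the first return token is set to a desired target return rather than computed from observed rewards, and the sequence is extended one state/action pair at a time.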
| This work proposes a new instruction-following reinforcement learning environment based on the Atari Frostbite environment. They experiment with a modified decision transformer on the environment. The results largely show that performance improves with longer context, more pretraining, and more data. | SP:396758010e3aaccd1f0befef2cbad6c2b2d03a06 |
Personalized Neural Architecture Search for Federated Learning | 1 INTRODUCTION. Federated Learning (FL) (McMahan et al., 2017) is a variant of distributed learning where the objective function can be decomposed into a linear combination of M local objective functions. Each function depends on its private data hosted by a local client and a set of shared parameters w: $\arg\min_w L(w) \equiv \arg\min_w \sum_{i=1}^{M} L_i(w \mid D_i)$ (1), where $D_i$ denotes the i-th local training dataset comprising input-output tuples (x, y). In a standard supervised learning task where the predictive model is modeled as a fixed deep neural network $\psi$ with learnable weights w, let $\ell(x, y)$ denote the loss incurred by predicting $\psi(x; w)$ when the true output is y. The expected loss of $\psi(x; w)$ on $D_i$ is given as $L_i(w \mid D_i) = \mathbb{E}_{(x,y)\sim D_i}[\ell(x, y; \psi)]$ (2). This is not applicable to scenarios where local models are expected to solve different tasks which are similar in a broad sense yet diverge in finer details. For example, consider the task of recognizing the outcome of a coin flip given images collected by two clients: one captures the coin from above, the other from below. This setting implies that when the same input image is provided by both clients, the correct classifications must be the opposite of one another. However, since existing FL methods converge on a single model architecture and weights, there can only be one predictive outcome, which cannot satisfy both tasks. To relax this constraint, the recent work of Fallah et al. (2020) extends FL by incorporating ideas from meta learning (Finn et al., 2017), which results in a new framework of personalized FL. The new framework can accommodate such task heterogeneity but still requires all client models to agree on a single architecture beforehand, which is sub-optimal.
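A minimal sketch of the FL objective in Eq. 1 being optimized FedAvg-style (McMahan et al., 2017): each client takes local gradient steps on the shared parameter, and the server averages the results. The scalar quadratic per-client losses are hypothetical stand-ins for $L_i(w \mid D_i)$.

```python
def local_gradient(w, data):
    """Hypothetical client update: gradient of a quadratic loss (w - target)^2,
    averaged over the client's private data, stands in for grad L_i(w | D_i)."""
    return sum(2 * (w - target) for target in data) / len(data)

def fedavg_round(w, client_datasets, lr=0.1, local_steps=5):
    """One FedAvg round: each client takes local SGD steps starting from the
    broadcast weights, then the server averages the resulting weights."""
    updated = []
    for data in client_datasets:
        w_i = w
        for _ in range(local_steps):
            w_i -= lr * local_gradient(w_i, data)
        updated.append(w_i)
    return sum(updated) / len(updated)   # server aggregation

clients = [[1.0, 1.0], [3.0, 3.0]]       # two clients with scalar "datasets"
w = 0.0
for _ in range(50):
    w = fedavg_round(w, clients)
# The shared weight converges toward the minimizer of the summed objective (2.0).
assert abs(w - 2.0) < 1e-3
```

The example also illustrates the limitation the passage raises: the two clients end up sharing one compromise parameter, even though their individual optima (1.0 and 3.0) differ.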
To address this shortcoming, one naive idea is to adopt existing ideas in Neural Architecture Search (NAS) via Reinforcement Learning (Zoph and Le, 2016; Pham et al., 2018), acting as an outer loop around the existing FL routine. However, this simple approach does not allow client models to adapt to local tasks on an architecture level and is often not preferred due to the cost of repeated FL training. This paper proposes a novel personalized NAS algorithm for federated learning, which generalizes ideas from the respective areas of NAS (Zoph and Le, 2016; Pham et al., 2018), originally developed for single-task scenarios, and FL (Fallah et al., 2020) under a unified lens of federated personalized neural architecture search (FEDPNAS). In particular, to customize the model architecture for each task in the FL workflow, FEDPNAS first represents the model architecture for each task as a sub-network sampled from a large, over-parameterized network. The sampling distribution is (collaboratively) learned along with the parameters of the sampled network via a generalization of the recently proposed Discrete Stochastic NAS (DSNAS) method (Hu et al., 2020). Unlike DSNAS, which lacks the ability to customize the architecture for individual tasks, our generalized FEDPNAS incorporates model personalization on an architecture level. Our contributions include: 1. A novel architecture that factorizes into a base component (shared across tasks) and a personalizable component, which respectively capture the task-agnostic and task-specific information (Section 3.2). 2. A context-aware sampling distribution conditioned on the specific task instance, which captures task-specific information and naturally incorporates personalization into architecture search (Section 3.4). 3.
An FL algorithm that optimizes for a common architecture, followed by a personalization phase where each client subsequently adapts only the personalized component to fit its own task via fine-tuning with local data (Section 3.1). To ensure that the common architecture distribution converges at a vantage point that is relevant and beneficial to all clients, we generalize the vanilla FL objective in Eq. 1 such that local gradient steps directly optimize for the expected improvement resulting from future fine-tuning (Section 3.3). 4. A theoretical perspective on our FL objective (Section 3.5) and a thorough empirical analysis showing significant performance gains compared to state-of-the-art FL and NAS methods (Section 4). 2 RELATED WORKS. 2.1 TWO-STAGE NEURAL ARCHITECTURE SEARCH. Most existing NAS frameworks separately optimize for the optimal architecture and its parameters in two stages: searching and evaluation. The former stage usually employs evolutionary strategies (Floreano et al., 2008; Real et al., 2019), Bayesian optimization surrogates (Bergstra et al., 2013; Hu et al., 2018), or Reinforcement Learning controllers (Baker et al., 2016; Zoph and Le, 2016; Pham et al., 2018) to propose candidate architectures based on random mutations and/or observed experience; the latter optimizes the parameters of these architectures given task data and provides feedback to improve the search agent. Naturally, an extension of such methods to the FL setting is through distributing the evaluation workload over many clients, which does not require exposing private data.
In practice, however, two-stage federated NAS frameworks are generally not suitable for the personalized FL setting for two reasons: (a) the clients often lack the computational capacity to repeatedly optimize the parameters for many candidate architectures; and (b) the clients have to converge on a single architecture proposed by the central search agent. 2.2 DISCRETE STOCHASTIC NEURAL ARCHITECTURE SEARCH. Discrete stochastic neural architecture search (DSNAS) (Hu et al., 2020) addresses the computational issue of two-stage NAS by jointly optimizing the optimal architecture and its weights in an end-to-end fashion, which allows users to continually train a single network on demand over time, as opposed to performing full parameter optimization for every candidate until a good architecture is discovered. The main idea of DSNAS is to combine weight training for an over-parameterized master architecture with discrete computational-path sampling. DSNAS parameterizes the master architecture as a stack of modular cells: $\psi(x) = \psi_C \circ \psi_{C-1} \circ \cdots \circ \psi_1(x)$, where x is an arbitrary input, C is the number of cells, $\psi_t$ denotes the t-th cell in the stack, and $\circ$ denotes the compositional operator. (Although each DSNAS cell receives the outputs of two previous cells as inputs, we simplify the number to one for ease of notation.) The inner computation of $\psi_t$ is in turn characterized by a directed acyclic graph (DAG) with V nodes $\{v_i\}_{i=1}^{|V|}$, where each node represents some intermediate feature map. For each directed edge $(v_i, v_j)$, there is an associated list of D possible network operations $O_{ij} = [o^1_{ij}, o^2_{ij}, \ldots, o^D_{ij}]$, where each operation $o^k_{ij}$ transforms $v_i$ to $v_j$. Here, $v_1$ corresponds to the output of the previous cell $\psi_{t-1}$ (or the input x when t = 1). We recursively define intermediate nodes $v_j = \sum_{i=1}^{j-1} Z_{ij}^{\top} O_{ij}(v_i)$, where $O_{ij}(v_i) \triangleq [o^1_{ij}(v_i), o^2_{ij}(v_i), \ldots, o^D_{ij}(v_i)]$ and $Z_{ij}$ is a one-hot vector sampled from the categorical distribution $p(Z \mid \Pi)$, where the event probabilities $\Pi = \{\pi_1, \pi_2, \ldots, \pi_D \mid \sum_{i=1}^{D} \pi_i = 1\}$ are learnable. Essentially, learning the distribution p(Z) allows us to sample computational paths, or sub-graphs of the original DAG, that correspond to high-performing, compact architectures within the over-parameterized master network. Sampling discrete random variables from p(Z), however, does not result in a gradient amenable to back-propagation. To sidestep this issue, DSNAS adopts the straight-through Gumbel-softmax trick (Jang et al., 2016), which re-parameterizes the k-th index of the one-hot variable as $Z_{ij}[k] = \mathbb{I}\big(k = \arg\max_t [g_t + \log \pi_t]\big)$, where $g_t \sim \mathrm{Gumbel}(0, 1)$. While this forward computation does not have a gradient by itself, we can estimate the gradient through a proxy during the backward pass: $\nabla Z_{ij}[k] \simeq \nabla \tilde Z_{ij}[k] \triangleq \nabla\left(\frac{\exp((g_k + \log \pi_k)/\tau)}{\sum_{t=1}^{D} \exp((g_t + \log \pi_t)/\tau)}\right)$ (3), which is unbiased upon convergence as the temperature $\tau$ is steadily annealed to 0 (Jang et al., 2016). This formulation, however, is not easily extended to the FL setting, especially when local tasks are not homogeneous. The key challenges in doing so are described in Section 3, together with our proposed approaches. 3 PERSONALIZED NAS FOR FEDERATED LEARNING. 3.1 FEDERATED LEARNING OF DSNAS. Let W denote the concatenated weights of all network operations in the network architecture. The DSNAS setup above (Jang et al., 2016) is then naively extendable to an FL setting via the following objective formulation: $\arg\min_{W,\Pi} L(W, \Pi) \equiv \arg\min_{W,\Pi} \frac{1}{M} \sum_{i=1}^{M} L_i(W, \Pi \mid D_i)$ (4). McMahan et al.
( 2017 ) optimizes this objective by alternating between ( a ) the central agent broadcasting aggregated weights to local clients and ( b ) local clients sending gradient-descent-updated weights ( given local data ) to the central agent for aggregation . This , however , implies that after the last central aggregation step , all clients will follow the same architecture distribution induced by the final broadcasted copy of W and Π . As previously argued , this is not optimal in a heterogeneous task setting , which requires task-specific adaptation for local clients to achieve good performance . Furthermore , having the same sampling distribution p ( Z ) regardless of context ( i.e. , the feature maps received as cell input ) limits architecture discovery to architectures that perform reasonably on average over the entire dataset . However , we remark that restricting the architecture to be the same for every input sample is unnecessary and undermines the expressiveness of an over-parameterized search space . On the other hand , letting the architecture be determined on a per-sample basis makes better use of the search space and potentially improves predictive performance . The focus of this work is therefore to incorporate both task-wise and context-wise personalization into federated neural architecture search in multitask scenarios , which is achieved through our proposed algorithm FEDPNAS . In general , FEDPNAS functions similarly to the vanilla FEDDSNAS algorithm described above , with the addition of a fine-tuning phase at each local client after the FL phase to adapt the aggregated common model to local task data , as shown in Fig . 1 . To make this work , however , we need to address the following key challenges : C1 . First , as previously argued in Section 1 , tasks across federated clients tend to share similarities in a broad sense , and diverge in finer details .
A good federated personalization search space , therefore , needs to capture this fundamental observation through its design and an appropriate distribution of resources . We address this challenge in Section 3.2 . C2 . Second , a major advantage of having an over-parameterized architecture search space is the flexibility of having specific computation paths for different samples , which is not exploited by DSNAS , as reflected in its choice of a context-independent sampling distribution p ( Z ) . To address this , Section 3.4 proposes a novel parameterization of p ( Z ) to incorporate context information into operator sampling . C3 . Last , while the fine-tuning phase is designed to incorporate task personalization , there is no guarantee that the common model can be quickly adapted to client tasks ( Fallah et al. , 2020 ) . The common model may converge to a solution that favors one client over another , which makes it difficult for the latter to fine-tune . To address this concern , Section 3.3 proposes a new personalized federated NAS objective inspired by Finn et al . ( 2017 ) to optimize the common model in anticipation of further fine-tuning by the client models . | This paper proposes a personalized neural architecture search algorithm (FEDPNAS) for FL. FEDPNAS searches for an architecture with a base component (shared across clients) and a personalized component. It also uses a context-aware operator sampler to learn a sampling distribution for feature maps. It provides a theoretical analysis of the FL objective and empirically demonstrates that FEDPNAS outperforms FedAvg and FedDSNAS over image recognition tasks on CIFAR-10 and MNIST datasets. | SP:e16c21331906c3dad479600864c3491d89ab67a5 |
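The straight-through Gumbel-softmax estimator used by DSNAS in §2.2 can be sketched in a few lines. This is a NumPy illustration without autograd; the function name is ours, and in practice the backward pass would differentiate the soft relaxation.

```python
import numpy as np

# Sketch of the straight-through Gumbel-softmax trick (Jang et al., 2016):
# the forward pass emits the one-hot sample Z, while gradients are taken
# through the soft relaxation Z-tilde of Eq. (3).

def gumbel_softmax_st(log_pi, tau=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    g = rng.gumbel(size=log_pi.shape)        # g_t ~ Gumbel(0, 1)
    soft = np.exp((g + log_pi) / tau)
    soft = soft / soft.sum()                 # Z-tilde: differentiable proxy
    hard = np.zeros_like(soft)
    hard[np.argmax(g + log_pi)] = 1.0        # Z: one-hot forward sample
    # Straight-through: forward uses `hard`, backward uses the gradient
    # of `soft`; as tau -> 0, soft approaches hard.
    return hard, soft
```

Note that `hard` and `soft` always share the same argmax, which is what makes the proxy gradient a sensible surrogate for the discrete sample.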
Personalized Neural Architecture Search for Federated Learning | 1 INTRODUCTION . Federated Learning ( FL ) ( McMahan et al. , 2017 ) is a variant of distributed learning where the objective function can be decomposed into a linear combination of M local objective functions . Each local function depends on private data hosted by a local client and a set of shared parameters w : $\arg\min_{w} L(w) \equiv \arg\min_{w} \sum_{i=1}^{M} L_i(w \mid D_i)$ , ( 1 ) where $D_i$ denotes the i-th local training dataset comprising input-output tuples ( x , y ) . In a standard supervised learning task where the predictive model is a fixed deep neural network ψ with learnable weights w , let $\ell(x , y)$ denote the loss incurred by predicting $\psi(x ; w)$ when the true output is y . The expected loss of $\psi(x ; w)$ on $D_i$ is given as $L_i(w \mid D_i) = \mathbb{E}_{(x , y) \sim D_i}[\ell(x , y ; \psi)]$ . ( 2 ) This is not applicable to scenarios where local models are expected to solve different tasks that are similar in a broad sense yet diverge in finer details . For example , consider the task of recognizing the outcome of a coin flip given images collected by two clients : one captures the coin from above , the other from below . This setting implies that when the same input image is provided by both clients , the correct classifications must be the opposite of one another . However , since existing FL methods converge on a single model architecture and set of weights , there can only be one predictive outcome , which cannot satisfy both tasks . To relax this constraint , the recent work of Fallah et al . ( 2020 ) extends FL by incorporating ideas from meta learning ( Finn et al. , 2017 ) , which results in a new framework of personalized FL . The new framework can accommodate such task heterogeneity but still requires all client models to agree on a single architecture beforehand , which is sub-optimal .
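The decomposed objective in Eqs. (1)-(2) amounts to summing per-client losses over private datasets. A minimal sketch, with a squared loss standing in for the generic loss ℓ and a linear model standing in for ψ:

```python
import numpy as np

# The global FL loss is a combination of M local losses, each evaluated
# only on that client's private data (Eq. 1). Here each client dataset is
# a tuple (x, y) and l(x, y; psi) is a squared prediction error (Eq. 2,
# with the expectation replaced by an empirical mean).

def local_loss(w, data):
    x, y = data
    return float(np.mean((x @ w - y) ** 2))   # empirical L_i(w | D_i)

def global_loss(w, clients):
    # Averaged (rather than plain-sum) combination of local losses.
    return sum(local_loss(w, d) for d in clients) / len(clients)
```

The key structural point is that `global_loss` never needs the raw data in one place; only per-client loss (or gradient) values are combined.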
To address this shortcoming , one naive idea is to adopt existing Neural Architecture Search ( NAS ) techniques based on Reinforcement Learning ( Zoph and Le , 2016 ; Pham et al. , 2018 ) , which would act as an outer loop to the existing FL routine . However , this simple approach does not allow client models to adapt to local tasks on an architecture level and is often not preferred due to the cost of repeated FL training . This paper proposes a novel personalized NAS algorithm for federated learning , which generalizes ideas from the respective areas of NAS ( Zoph and Le , 2016 ; Pham et al. , 2018 ) , originally developed for single-task scenarios , and FL ( Fallah et al. , 2020 ) under the unified lens of federated personalized neural architecture search ( FEDPNAS ) . In particular , to customize the model architecture for each task in the FL workflow , FEDPNAS first represents the model architecture for each task as a sub-network sampled from a large , over-parameterized network . The sampling distribution is ( collaboratively ) learned along with the parameters of the sampled network via a generalization of the recently proposed Discrete Stochastic NAS ( DSNAS ) method ( Hu et al. , 2020 ) . Unlike DSNAS , which lacks the ability to customize the architecture for individual tasks , our generalized FEDPNAS incorporates model personalization on an architecture level . Our contributions include : 1 . A novel architecture that factorizes into a base component ( shared across tasks ) and a personalizable component , which respectively capture task-agnostic and task-specific information ( Section 3.2 ) . 2 . A context-aware sampling distribution conditioned on the specific task instance , which captures task-specific information and naturally incorporates personalization into architecture search ( Section 3.4 ) . 3 .
An FL algorithm that optimizes for a common architecture , followed by a personalization phase where each client subsequently adapts only the personalized component to fit its own task via fine-tuning with local data ( Section 3.1 ) . To ensure that the common architecture distribution converges at a vantage point that is relevant and beneficial to all clients , we generalize the vanilla FL objective in Eq . ( 1 ) such that local gradient steps directly optimize for the expected improvement resulting from future fine-tuning ( Section 3.3 ) . 4 . A theoretical perspective on our FL objective ( Section 3.5 ) and a thorough empirical analysis showing significant performance gains compared to state-of-the-art FL and NAS methods ( Section 4 ) . 2 RELATED WORKS . 2.1 TWO-STAGE NEURAL ARCHITECTURE SEARCH . Most existing NAS frameworks separately optimize for the optimal architecture and its parameters in two stages : searching and evaluation . The former stage usually employs evolutionary strategies ( Floreano et al. , 2008 ; Real et al. , 2019 ) , Bayesian optimization surrogates ( Bergstra et al. , 2013 ; Hu et al. , 2018 ) or Reinforcement Learning controllers ( Baker et al. , 2016 ; Zoph and Le , 2016 ; Pham et al. , 2018 ) to propose candidate architectures based on random mutations and/or observed experience , while the latter optimizes the parameters of these architectures given task data and provides feedback to improve the search agent . Naturally , such methods extend to the FL setting by distributing the evaluation workload over many clients , which does not require exposing private data .
In practice , however , two-stage federated NAS frameworks are generally not suitable for the personalized FL setting for two reasons : ( a ) the clients often lack the computational capacity to repeatedly optimize the parameters of many candidate architectures ; and ( b ) the clients have to converge on a single architecture proposed by the central search agent . 2.2 DISCRETE STOCHASTIC NEURAL ARCHITECTURE SEARCH . Discrete stochastic neural architecture search ( DSNAS ) ( Hu et al. , 2020 ) addresses the computational issue of two-stage NAS by jointly optimizing the architecture and its weights in an end-to-end fashion , which allows users to continually train a single network on demand over time , as opposed to performing full parameter optimization for every candidate until a good architecture is discovered . The main idea of DSNAS is to combine weight training for an over-parameterized master architecture with discrete computational path sampling . DSNAS parameterizes the master architecture as a stack of modular cells : $\psi(x) = \psi_C \circ \psi_{C-1} \circ \cdots \circ \psi_1(x)$ , where x is an arbitrary input , C is the number of cells , $\psi_t$ denotes the t-th cell in the stack , and $\circ$ denotes the compositional operator . ( Although each DSNAS cell receives the outputs of two previous cells as inputs , we simplify the number to one for ease of notation . ) The inner computation of $\psi_t$ is in turn characterized by a directed acyclic graph ( DAG ) with $|V|$ nodes $\{v_i\}_{i=1}^{|V|}$ , where each node represents some intermediate feature map . For each directed edge $(v_i , v_j)$ , there is an associated list of D possible network operations $O_{ij} = [o^1_{ij} , o^2_{ij} , \ldots , o^D_{ij}]$ ( we drop the cell index for ease of notation ) , where each operation $o^k_{ij}$ transforms $v_i$ to $v_j$ . Here , $v_1$ corresponds to the output of the previous cell $\psi_{t-1}$ ( or the input x when t = 1 ) . We recursively define intermediate nodes $v_j = \sum_{i=1}^{j-1} Z_{ij}^{\top} O_{ij}(v_i)$ , where $O_{ij}(v_i) \triangleq [o^1_{ij}(v_i) , o^2_{ij}(v_i) , \ldots , o^D_{ij}(v_i)]$ and $Z_{ij}$ is a one-hot vector sampled from the categorical distribution $p(Z \mid \Pi)$ , where the event probabilities $\Pi = \{\pi_1 , \pi_2 , \ldots , \pi_D \mid \sum_{i=1}^{D} \pi_i = 1\}$ are learnable . Essentially , learning the distribution p ( Z ) allows us to sample computational paths , or sub-graphs of the original DAG , that correspond to high-performing , compact architectures from the over-parameterized master network . Sampling discrete random variables from p ( Z ) , however , does not result in a gradient amenable to back-propagation . To sidestep this issue , DSNAS adopts the straight-through Gumbel-softmax trick ( Jang et al. , 2016 ) , which re-parameterizes the k-th index of the one-hot variable as $Z_{ij}[k] = \mathbb{I}\big(k , \arg\max_t [g_t + \log \pi_t]\big)$ , where $g_t \sim \mathrm{Gumbel}(0 , 1)$ . While this forward computation does not have a gradient by itself , we can estimate the gradient through a proxy during the backward pass : $\nabla Z_{ij}[k] \simeq \nabla \tilde{Z}_{ij}[k] \triangleq \nabla \left( \frac{\exp((g_k + \log \pi_k)/\tau)}{\sum_{t=1}^{D} \exp((g_t + \log \pi_t)/\tau)} \right)$ , ( 3 ) which becomes unbiased as the temperature τ is steadily annealed to 0 ( Jang et al. , 2016 ) . This formulation , however , is not easily extended to the FL setting , especially when local tasks are not homogeneous . The key challenges in doing so are described in Section 3 , together with our proposed approaches . 3 PERSONALIZED NAS FOR FEDERATED LEARNING . 3.1 FEDERATED LEARNING OF DSNAS . Let W denote the concatenated weights of all network operations in the architecture . The DSNAS setup above ( Hu et al. , 2020 ) is then naïvely extendable to an FL setting via the following objective formulation : $\arg\min_{W , \Pi} L(W , \Pi) \equiv \arg\min_{W , \Pi} \frac{1}{M} \sum_{i=1}^{M} L_i(W , \Pi \mid D_i)$ . ( 4 ) McMahan et al .
( 2017 ) optimizes this objective by alternating between ( a ) the central agent broadcasting aggregated weights to local clients and ( b ) local clients sending gradient-descent-updated weights ( given local data ) to the central agent for aggregation . This , however , implies that after the last central aggregation step , all clients will follow the same architecture distribution induced by the final broadcasted copy of W and Π . As previously argued , this is not optimal in a heterogeneous task setting , which requires task-specific adaptation for local clients to achieve good performance . Furthermore , having the same sampling distribution p ( Z ) regardless of context ( i.e. , the feature maps received as cell input ) limits architecture discovery to architectures that perform reasonably on average over the entire dataset . However , we remark that restricting the architecture to be the same for every input sample is unnecessary and undermines the expressiveness of an over-parameterized search space . On the other hand , letting the architecture be determined on a per-sample basis makes better use of the search space and potentially improves predictive performance . The focus of this work is therefore to incorporate both task-wise and context-wise personalization into federated neural architecture search in multitask scenarios , which is achieved through our proposed algorithm FEDPNAS . In general , FEDPNAS functions similarly to the vanilla FEDDSNAS algorithm described above , with the addition of a fine-tuning phase at each local client after the FL phase to adapt the aggregated common model to local task data , as shown in Fig . 1 . To make this work , however , we need to address the following key challenges : C1 . First , as previously argued in Section 1 , tasks across federated clients tend to share similarities in a broad sense , and diverge in finer details .
A good federated personalization search space , therefore , needs to capture this fundamental observation through its design and an appropriate distribution of resources . We address this challenge in Section 3.2 . C2 . Second , a major advantage of having an over-parameterized architecture search space is the flexibility of having specific computation paths for different samples , which is not exploited by DSNAS , as reflected in its choice of a context-independent sampling distribution p ( Z ) . To address this , Section 3.4 proposes a novel parameterization of p ( Z ) to incorporate context information into operator sampling . C3 . Last , while the fine-tuning phase is designed to incorporate task personalization , there is no guarantee that the common model can be quickly adapted to client tasks ( Fallah et al. , 2020 ) . The common model may converge to a solution that favors one client over another , which makes it difficult for the latter to fine-tune . To address this concern , Section 3.3 proposes a new personalized federated NAS objective inspired by Finn et al . ( 2017 ) to optimize the common model in anticipation of further fine-tuning by the client models . | The paper proposes a personalized neural architecture search technique for federated learning. The paper incorporates both task-personalization and context-personalization. Experimental results on both CIFAR-10 and MNIST datasets demonstrate the promise of the proposed method. | SP:e16c21331906c3dad479600864c3491d89ab67a5 |
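One FedAvg-style communication round of the broadcast/update/aggregate alternation described in §3.1 (McMahan et al., 2017) can be sketched as follows. The quadratic local objectives are an illustrative stand-in for the clients' actual losses, and the function names are ours.

```python
import numpy as np

# One round: (a) the central agent broadcasts the aggregated weights,
# (b) each client takes a gradient-descent step on its local objective,
# and the agent averages the returned weights.

def local_step(w, target, lr=0.1):
    grad = 2.0 * (w - target)          # gradient of ||w - target||^2
    return w - lr * grad

def fedavg_round(w_global, client_targets, lr=0.1):
    updated = [local_step(w_global.copy(), t, lr) for t in client_targets]
    return np.mean(updated, axis=0)    # central aggregation

# Two clients with conflicting optima: iterating the round drives the
# shared weights toward the average of the client optima, illustrating
# why a single common model cannot satisfy heterogeneous tasks.
w = np.zeros(2)
targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
for _ in range(100):
    w = fedavg_round(w, targets)
```

Here the converged `w` is near `[0.5, 0.5]`, equally far from both client optima; this is precisely the situation the fine-tuning phase of FEDPNAS is meant to repair.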
Token Pooling in Vision Transformers | Despite their recent success in many applications , the high computational requirements of vision transformers limit their use in resource-constrained settings . While many existing methods improve the quadratic complexity of attention , in most vision transformers self-attention is not the major computation bottleneck , e.g. , more than 80 % of the computation is spent on fully-connected layers . To improve the computational complexity of all layers , we propose a novel token downsampling method , called Token Pooling , that efficiently exploits redundancies in the images and intermediate token representations . We show that , under mild assumptions , softmax-attention acts as a high-dimensional low-pass ( smoothing ) filter . Thus , its output contains redundancy that can be pruned to achieve a better trade-off between computational cost and accuracy . Our new technique accurately approximates a set of tokens by minimizing the reconstruction error caused by downsampling . We solve this optimization problem via cost-efficient clustering . We rigorously analyze our method and compare it to prior downsampling methods . Our experiments show that Token Pooling significantly improves the cost-accuracy trade-off over state-of-the-art downsampling . Token Pooling is simple and effective and can benefit many architectures with global attention . Applied to DeiT , it achieves the same ImageNet top-1 accuracy using 42 % fewer computations . 1 INTRODUCTION . Vision transformers ( Dosovitskiy et al. , 2020 ; Touvron et al. , 2021 ; Liu et al. , 2021 ; Heo et al. , 2021 ; Zheng et al. , 2021 ) have demonstrated state-of-the-art results in many vision applications , from image classification to segmentation . However , their high computational cost limits their use in resource-restricted , real-time , or low-powered applications . While most prior work in Natural Language Processing ( NLP ) improves the time complexity of attention ( Tay et al .
, 2020b ; Ilharco et al. , 2020 ) , in vision transformers the main computation bottleneck is the fully-connected layers , as we show in §3.1 . The computational complexity of these layers is determined by the number of tokens and their feature dimensionality . While reducing the dimensionality improves computational cost , it sacrifices model capacity and often significantly deteriorates the accuracy of the model . On the other hand , since images often contain mostly smooth surfaces with sparsely located edges and corners , they contain similar ( and thus redundant ) features . Moreover , we show that , under mild assumptions , softmax-attention is equivalent to low-pass filtering of tokens and thereby produces tokens with similar features , as empirically observed by Goyal et al . ( 2020 ) and Rao et al . ( 2021 ) . This redundancy in representations suggests that we can reduce the number of tokens , i.e. , downsampling , without a significant impact to the accuracy , achieving a better cost-accuracy trade-off than reducing feature dimensionality alone . Downsampling is widely used in Convolutional Neural Network ( CNN ) architectures to improve computational efficiency , among other purposes . Given a grid of pixels or features , downsampling gradually reduces the grid dimensions via combining neighboring vertices on the grid . The prevailing max/average pooling and sub-sampling are examples of ( spatially uniform ) grid-downsampling that only uses locations on the grid to decide which vertices to combine . Such methods do not efficiently address non-uniformly distributed redundancy in images and features ( Recasens et al. , 2018 ; Marin et al. , 2019 ) . Unlike CNNs that require grid preservation , transformers allow a wider range of nonuniform data-aware downsampling layers , where a better operator can be designed . We propose Token Pooling , a novel nonuniform data-aware downsampling operator for transformers efficiently exploiting redundancy in features . 
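Clustering-based token downsampling of this kind can be sketched as follows: approximate the N input tokens by K cluster centers chosen to minimize the reconstruction error between each token and its assigned center. This is a plain Lloyd's K-Means over token vectors, an illustrative sketch rather than the paper's exact Token Pooling implementation (which also considers K-Medoids).

```python
import numpy as np

def token_pool(tokens, k, iters=10, seed=0):
    """Downsample an (N, M) token matrix to K pooled tokens via K-Means."""
    rng = np.random.default_rng(seed)
    centers = tokens[rng.choice(len(tokens), size=k, replace=False)]
    for _ in range(iters):
        # Assign each token to its nearest center (reconstruction target).
        d = ((tokens[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        # Move each center to the mean of its assigned tokens,
        # reducing the total squared reconstruction error.
        for j in range(k):
            if (assign == j).any():
                centers[j] = tokens[assign == j].mean(0)
    return centers  # the K pooled tokens
```

Because the assignment is data-dependent, redundant (near-duplicate) tokens collapse into a single center, unlike grid pooling, which merges tokens purely by spatial position.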
See the illustration and performance metric in Figures 1a & 1b . Motivated by nonuniform sampling and image compression ( Marvasti , 2012 ; Unat et al. , 2009 ; Belfor et al. , 1994 ; Rabbani , 2002 ) , we formulate token downsampling as an optimization problem that minimizes the reconstruction error caused by downsampling . We show that clustering algorithms , K-Means and K-Medoids , efficiently solve this problem ; see the illustration in Figure 1c . To the best of our knowledge , we are the first to use this formulation and simple clustering analysis for token downsampling in transformers . We compare with various prior downsampling techniques , including grid-downsampling ( Pan et al. , 2021 ; Liu et al. , 2021 ; Wang et al. , 2021 ) and token pruning ( Goyal et al. , 2020 ; Rao et al. , 2021 ) . The proposed Token Pooling outperforms existing methods and provides the best trade-off between computational cost and classification accuracy . Contributions . The paper makes the following contributions : • We conduct an extensive study of prior downsampling techniques for vision transformers by comparing their computation-accuracy trade-offs . • We analyze the computational cost of vision-transformer components and the limitations of prior score-based downsampling methods . We also show that attention layers behave like low-pass filters and thus produce redundant tokens . • Motivated by the redundancy in images and features , we propose a novel token downsampling technique , Token Pooling , for transformers with global attention via error minimization , and achieve a significant improvement in the computation-accuracy trade-off . 2 RELATED WORK . In this section , we introduce vision transformers and review existing methods that improve the efficiency of transformers , including existing token-downsampling methods . 2.1 VISION TRANSFORMERS . Vision transformers ( Dosovitskiy et al. , 2020 ; Touvron et al. , 2021 ; Heo et al. , 2021 ; Liu et al .
, 2021 ; Pan et al. , 2021 ) utilize the transformer architecture originally designed for NLP by Vaswani et al . ( 2017 ) and further popularized by Radford et al . ( 2018 ) and Devlin et al . ( 2019 ) . At a high level , a vision transformer is a composition of L transformer blocks that take a set of input tokens and return another set of output tokens . In vision , input tokens are features representing individual non-overlapping image patches . To perform classification , a classification token is inserted to estimate the probabilities of individual classes . To achieve the state of the art , ViT ( Dosovitskiy et al. , 2020 ) used pretraining on JFT-300M , a proprietary dataset much larger than the standard ImageNet1k ( Deng et al. , 2009 ) . Recently , DeiT ( Touvron et al. , 2021 ) achieved state-of-the-art results with advanced training on ImageNet1k only . Let the set of tokens at depth l be $F^l = \{f^l_0 , \ldots , f^l_N\}$ , where $f^l_i \in \mathbb{R}^M$ is the feature vector of the i-th token . A typical transformer block φ at depth l processes $F^l$ by a Multi-head Self-Attention ( MSA ) layer and a point-wise Multi-Layer Perceptron ( MLP ) . Let the matrix $F \in \mathbb{R}^{N \times M}$ be a row-wise concatenation of the tokens $F^l$ . Then , $\phi(F) = \mathrm{MLP}(\mathrm{MSA}(F))$ ( 1 ) such that $\mathrm{MSA}(F) = [O_1 , O_2 , \ldots , O_H]\,W^O$ ( 2 ) , where H is the number of heads , the matrix $W^O \in \mathbb{R}^{M \times M}$ is a learnable parameter of the block , $[\, , \,]$ is column-wise concatenation , and $O_h \in \mathbb{R}^{N \times d}$ is the output of the h-th attention head for d = M/H : $O_h = A_h V_h$ such that $A_h = \mathrm{softmax}\big(Q_h K_h^{\top} / \sqrt{d}\big) \in \mathbb{R}^{N \times N}$ . ( 3 ) Keys $K_h$ , queries $Q_h$ and values $V_h$ are linear projections of the input tokens ( QKV projections ) : $Q_h = F W^Q_h$ , $K_h = F W^K_h$ , $V_h = F W^V_h$ ( 4 ) where $W^Q_h \in \mathbb{R}^{M \times d}$ , $W^K_h \in \mathbb{R}^{M \times d}$ , $W^V_h \in \mathbb{R}^{M \times d}$ are learnable linear transformations . Note that the number of tokens is not affected by the transformer blocks , i.e. , $|F^{l+1}| = |F^l|$ . 2.2 EFFICIENT TRANSFORMERS .
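The MSA computation in Eqs. (2)-(4) transcribes directly into NumPy. Random matrices stand in for the learned parameters, and per-head weights are kept in Python lists for clarity.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def msa(F, WQ, WK, WV, WO):
    """Multi-head self-attention: F is (N, M); WQ/WK/WV are lists of H
    (M, d) matrices with d = M / H; WO is (M, M)."""
    N, M = F.shape
    H = len(WQ)
    d = M // H
    heads = []
    for h in range(H):
        Q, K, V = F @ WQ[h], F @ WK[h], F @ WV[h]    # Eq. (4)
        A = softmax(Q @ K.T / np.sqrt(d))            # Eq. (3): (N, N)
        heads.append(A @ V)                          # O_h = A_h V_h
    return np.concatenate(heads, axis=1) @ WO        # Eq. (2): (N, M)
```

As the text notes, the input and output token counts are identical (`N` rows in, `N` rows out), which is exactly what token downsampling changes.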
Similar to many machine learning models , the efficiency of transformers can be improved via metaparameter search ( Howard et al. , 2017 ; Tan & Le , 2019 ) , automated neural architecture search ( Elsken et al. , 2019 ; Tan et al. , 2019 ; Wu et al. , 2019 ) , manipulating the input size and resolution of feature maps ( Paszke et al. , 2016 ; Howard et al. , 2017 ; Zhao et al. , 2018 ) , pruning ( LeCun et al. , 1990 ) , quantization ( Jacob et al. , 2018 ) , and sparsification ( Gale et al. , 2019 ) , etc . For example , Dosovitskiy et al . ( 2020 ) and Touvron et al . ( 2021 ) obtain a family of ViT and DeiT models , respectively , by varying the input resolution , the number of heads H , and the feature dimensionality M . Each of the models operates with a different computational requirement and accuracy . In the following , we review techniques developed for transformers . 2.2.1 EFFICIENT SELF-ATTENTION . The softmax-attention layer ( 3 ) has a quadratic time complexity w.r.t . the number of tokens , i.e. , O ( N2 ) . In many NLP applications where every token represents a word or a character , N can be large , making attention a computation bottleneck ( Dai et al. , 2019 ; Rae et al. , 2019 ) . While many works improve the time complexity of attention layers , as we will see in §3.1 , they are not the bottleneck in most current vision transformers . The time complexity of an attention layer can be reduced by restricting the attention field-of-view and thus imposing sparsity on Ah in ( 3 ) . This can be achieved using the spatial relationship between tokens in the image/text domain ( Parmar et al. , 2018 ; Ramachandran et al. , 2019 ; Qiu et al. , 2020 ; Beltagy et al. , 2020 ; Child et al. , 2019 ; Zaheer et al. , 2020 ) or based on token values using localitysensitive hashing , sorting , compression , etc . ( Kitaev et al. , 2020 ; Vyas et al. , 2020 ; Tay et al. , 2020a ; Liu et al. , 2018 ; Wang et al. , 2020 ; Tay et al. , 2021 ) . 
Prior works have also proposed attention mechanisms with lower time complexity , e.g. , O ( N ) or O ( N log N ) ( Katharopoulos et al. , 2020 ; Peng et al. , 2021 ; Choromanski et al. , 2021 ; Tay et al. , 2021 ) . Roy et al . ( 2021 ) cluster queries and keys to sparsify attention matrices and speed up attention , but they do not downsample the tokens . Note that the goal of these methods is to reduce the time complexity of the attention layer ; the number of tokens remains the same across the transformer blocks . In contrast , our method reduces the number of tokens after attention has been computed . Thereby , we can utilize these methods to further improve the overall efficiency of transformers . Recently , Wu et al . ( 2020 ) proposed a new attention-based layer that learns a small number of query vectors to extract information from the input feature map . Similarly , Wu et al . ( 2021 ) replace self-attention with a new recurrent layer that outputs a smaller number of tokens . In comparison , our method directly minimizes the token reconstruction error due to token downsampling . Also , our layer has no learnable parameters and can be easily incorporated into existing vision transformers . | In this paper, the authors propose a novel token-pooling method to reduce redundancies in tokens for recent vision transformers. They analyze the computation cost distribution of vision transformers and the limitations of grid-based & score-based token downsampling methods. They further formulate a reconstruction loss and optimize it with the token pooling layer. The experimental results based on DeiT show that token pooling improves accuracy while reducing computation cost by a large margin. This idea can be applied to other vision transformers as well. | SP:4015789338b07b5dab3e4eaf542d5b0b7aaba267 |
Token Pooling in Vision Transformers | Despite their recent success in many applications , the high computational requirements of vision transformers limit their use in resource-constrained settings . While many existing methods improve the quadratic complexity of attention , in most vision transformers self-attention is not the major computation bottleneck , e.g. , more than 80 % of the computation is spent on fully-connected layers . To improve the computational complexity of all layers , we propose a novel token downsampling method , called Token Pooling , that efficiently exploits redundancies in the images and intermediate token representations . We show that , under mild assumptions , softmax-attention acts as a high-dimensional low-pass ( smoothing ) filter . Thus , its output contains redundancy that can be pruned to achieve a better trade-off between computational cost and accuracy . Our new technique accurately approximates a set of tokens by minimizing the reconstruction error caused by downsampling . We solve this optimization problem via cost-efficient clustering . We rigorously analyze our method and compare it to prior downsampling methods . Our experiments show that Token Pooling significantly improves the cost-accuracy trade-off over state-of-the-art downsampling . Token Pooling is simple and effective and can benefit many architectures with global attention . Applied to DeiT , it achieves the same ImageNet top-1 accuracy using 42 % fewer computations . 1 INTRODUCTION . Vision transformers ( Dosovitskiy et al. , 2020 ; Touvron et al. , 2021 ; Liu et al. , 2021 ; Heo et al. , 2021 ; Zheng et al. , 2021 ) have demonstrated state-of-the-art results in many vision applications , from image classification to segmentation . However , their high computational cost limits their use in resource-restricted , real-time , or low-powered applications . While most prior work in Natural Language Processing ( NLP ) improves the time complexity of attention ( Tay et al .
, 2020b ; Ilharco et al. , 2020 ) , in vision transformers the main computation bottleneck is the fully-connected layers , as we show in §3.1 . The computational complexity of these layers is determined by the number of tokens and their feature dimensionality . While reducing the dimensionality improves computational cost , it sacrifices model capacity and often significantly deteriorates the accuracy of the model . On the other hand , since images often contain mostly smooth surfaces with sparsely located edges and corners , they contain similar ( and thus redundant ) features . Moreover , we show that , under mild assumptions , softmax-attention is equivalent to low-pass filtering of tokens and thereby produces tokens with similar features , as empirically observed by Goyal et al . ( 2020 ) and Rao et al . ( 2021 ) . This redundancy in representations suggests that we can reduce the number of tokens , i.e. , downsampling , without a significant impact to the accuracy , achieving a better cost-accuracy trade-off than reducing feature dimensionality alone . Downsampling is widely used in Convolutional Neural Network ( CNN ) architectures to improve computational efficiency , among other purposes . Given a grid of pixels or features , downsampling gradually reduces the grid dimensions via combining neighboring vertices on the grid . The prevailing max/average pooling and sub-sampling are examples of ( spatially uniform ) grid-downsampling that only uses locations on the grid to decide which vertices to combine . Such methods do not efficiently address non-uniformly distributed redundancy in images and features ( Recasens et al. , 2018 ; Marin et al. , 2019 ) . Unlike CNNs that require grid preservation , transformers allow a wider range of nonuniform data-aware downsampling layers , where a better operator can be designed . We propose Token Pooling , a novel nonuniform data-aware downsampling operator for transformers efficiently exploiting redundancy in features . 
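The claim that fully-connected layers dominate can be checked with a back-of-envelope FLOP count per transformer block. The counting convention and the DeiT-Small-like sizes (N = 197 tokens, M = 384 channels, MLP expansion 4) below are our assumptions, not figures from the paper.

```python
# Per block: the QKV/output projections and the MLP scale as O(N * M^2),
# while the attention matrix itself (Q K^T and A V) costs O(N^2 * M).

def block_flops(N, M, mlp_ratio=4):
    qkv_proj = 4 * N * M * M           # W_Q, W_K, W_V and W_O projections
    attention = 2 * N * N * M          # Q K^T and A V
    mlp = 2 * N * M * (mlp_ratio * M)  # two fully-connected layers
    return qkv_proj, attention, mlp

qkv, attn, mlp = block_flops(197, 384)
fc_share = (qkv + mlp) / (qkv + attn + mlp)  # share in fully-connected layers
```

With these sizes `fc_share` comes out above 0.9, consistent with the text's ">80% on fully-connected layers" observation, and it explains why shrinking N (token downsampling) cuts every term at once while attention-only speed-ups address a minority of the cost.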
See the illustration and performance metric in Figures 1a & 1b . Motivated by nonuniform sampling and image compression ( Marvasti , 2012 ; Unat et al. , 2009 ; Belfor et al. , 1994 ; Rabbani , 2002 ) , we formulate token downsampling as an optimization problem that minimizes the reconstruction error caused by downsampling . We show that clustering algorithms , K-Means and K-Medoids , efficiently solve this problem , see illustration in Figure 1c . To the best of our knowledge , we are the first to use this formulation and simple clustering analysis for token downsampling in transformers . We compare with various prior downsampling techniques , including grid-downsampling ( Pan et al. , 2021 ; Liu et al. , 2021 ; Wang et al. , 2021 ) and token pruning ( Goyal et al. , 2020 ; Rao et al. , 2021 ) . The proposed Token Pooling outperforms existing methods and provides the best trade-off between computational cost and classification accuracy . Contributions . The paper makes the following contributions : • We conduct an extensive study of prior downsampling techniques for visual transformers by comparing their computation-accuracy trade-offs . • We analyze the computational cost of vision-transformer components and the limitations of the prior score-based downsampling methods . We also show that attention layers behave like low-pass filtering and thus produce redundant tokens . • Motivated by the redundancy in images and features , we propose a novel token downsampling technique , Token Pooling , for transformers with global attention via error minimization and achieve a significant improvement in the computation-accuracy trade-off . 2 RELATED WORK . In this section , we introduce vision transformers , and review existing methods that improve the efficiency of transformers including existing token-downsampling methods . 2.1 VISION TRANSFORMERS . Vision transformers ( Dosovitskiy et al. , 2020 ; Touvron et al. , 2021 ; Heo et al. , 2021 ; Liu et al.
, 2021 ; Pan et al. , 2021 ) utilize the transformer architecture that is originally designed for NLP by Vaswani et al . ( 2017 ) and further popularized by Radford et al . ( 2018 ) and Devlin et al . ( 2019 ) . At a high level , a vision transformer is a composition of $L$ transformer blocks that take a set of input tokens and return another set of output tokens . In vision , input tokens are features representing individual nonoverlapping image patches . To perform classification , a classification token is inserted to estimate the probabilities of individual classes . To achieve the state-of-the-art , ViT ( Dosovitskiy et al. , 2020 ) used pretraining on JFT-300M , a proprietary dataset much larger than standard ImageNet1k ( Deng et al. , 2009 ) . Recently , DeiT ( Touvron et al. , 2021 ) achieved state-of-the-art results with advanced training on ImageNet1k only . Let the set of tokens at depth $l$ be $F^l = \{ f^l_0 , \dots , f^l_N \}$ , where $f^l_i \in \mathbb{R}^M$ holds the feature values of the $i$-th token . A typical transformer block $\varphi$ at depth $l$ processes $F^l$ by a Multi-head Self-Attention ( MSA ) layer and a point-wise Multi-Layer Perceptron ( MLP ) . Let matrix $F \in \mathbb{R}^{N \times M}$ be a row-wise concatenation of the tokens $F^l$ . Then $\varphi ( F ) = \mathrm{MLP} ( \mathrm{MSA} ( F ) )$ ( 1 ) such that $\mathrm{MSA} ( F ) = [ O_1 , O_2 , \dots , O_H ] \, W^O$ ( 2 ) where $H$ is the number of heads , matrix $W^O \in \mathbb{R}^{M \times M}$ is a learnable parameter of the block , $[ \, , \, ]$ is column-wise concatenation and $O_h \in \mathbb{R}^{N \times d}$ is the output of the $h$-th attention head for $d = M/H$ : $O_h = A_h V_h$ such that $A_h = \mathrm{softmax} ( Q_h K_h^\top / \sqrt{d} ) \in \mathbb{R}^{N \times N}$ . ( 3 ) Keys $K_h$ , queries $Q_h$ and values $V_h$ are linear projections of the input tokens ( QKV projections ) : $Q_h = F W^Q_h$ , $K_h = F W^K_h$ , $V_h = F W^V_h$ ( 4 ) where $W^Q_h \in \mathbb{R}^{M \times d}$ , $W^K_h \in \mathbb{R}^{M \times d}$ , $W^V_h \in \mathbb{R}^{M \times d}$ are learnable linear transformations . Note , the number of tokens is not affected by the transformer blocks , i.e. , $|F^{l+1}| = |F^l|$ . 2.2 EFFICIENT TRANSFORMERS .
Similar to many machine learning models , the efficiency of transformers can be improved via metaparameter search ( Howard et al. , 2017 ; Tan & Le , 2019 ) , automated neural architecture search ( Elsken et al. , 2019 ; Tan et al. , 2019 ; Wu et al. , 2019 ) , manipulating the input size and resolution of feature maps ( Paszke et al. , 2016 ; Howard et al. , 2017 ; Zhao et al. , 2018 ) , pruning ( LeCun et al. , 1990 ) , quantization ( Jacob et al. , 2018 ) , and sparsification ( Gale et al. , 2019 ) , etc . For example , Dosovitskiy et al . ( 2020 ) and Touvron et al . ( 2021 ) obtain a family of ViT and DeiT models , respectively , by varying the input resolution , the number of heads $H$ , and the feature dimensionality $M$ . Each of the models operates with a different computational requirement and accuracy . In the following , we review techniques developed for transformers . 2.2.1 EFFICIENT SELF-ATTENTION . The softmax-attention layer ( 3 ) has a quadratic time complexity w.r.t . the number of tokens , i.e. , $O ( N^2 )$ . In many NLP applications where every token represents a word or a character , $N$ can be large , making attention a computation bottleneck ( Dai et al. , 2019 ; Rae et al. , 2019 ) . While many works improve the time complexity of attention layers , as we will see in §3.1 , they are not the bottleneck in most current vision transformers . The time complexity of an attention layer can be reduced by restricting the attention field-of-view and thus imposing sparsity on $A_h$ in ( 3 ) . This can be achieved using the spatial relationship between tokens in the image/text domain ( Parmar et al. , 2018 ; Ramachandran et al. , 2019 ; Qiu et al. , 2020 ; Beltagy et al. , 2020 ; Child et al. , 2019 ; Zaheer et al. , 2020 ) or based on token values using locality-sensitive hashing , sorting , compression , etc . ( Kitaev et al. , 2020 ; Vyas et al. , 2020 ; Tay et al. , 2020a ; Liu et al. , 2018 ; Wang et al. , 2020 ; Tay et al. , 2021 ) .
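The MSA computation of §2.1 , and the $N \times N$ matrix that makes it quadratic in the number of tokens , can be sketched directly. This is a minimal NumPy sketch under our own naming (real implementations batch the heads into single matrix multiplies):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def msa(F, WQ, WK, WV, WO):
    """Multi-head self-attention, Eqs. (2)-(4): WQ/WK/WV are lists of
    per-head (M, d) projections and WO is the (M, M) output projection."""
    H = len(WQ)
    d = F.shape[1] // H
    heads = []
    for h in range(H):
        Q, K, V = F @ WQ[h], F @ WK[h], F @ WV[h]   # (N, d) each
        A = softmax(Q @ K.T / np.sqrt(d))           # (N, N): the O(N^2) term
        heads.append(A @ V)                         # (N, d)
    # Column-wise concatenation of heads, then output projection:
    # the token count N is unchanged, as noted in the text.
    return np.concatenate(heads, axis=1) @ WO       # (N, M)
```

Note how the output keeps the input shape `(N, M)`: the transformer block never changes the number of tokens, which is exactly what token downsampling targets.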
Prior works have also proposed attention mechanisms with lower time complexity , e.g. , $O ( N )$ or $O ( N \log N )$ ( Katharopoulos et al. , 2020 ; Peng et al. , 2021 ; Choromanski et al. , 2021 ; Tay et al. , 2021 ) . Roy et al . ( 2021 ) cluster queries and keys to sparsify attention matrices to speed up the attention , but they do not downsample the tokens . Note that the goal of these methods is to reduce the time complexity of the attention layer—the number of tokens remains the same across the transformer blocks . In contrast , our method reduces the number of tokens after attention has been computed . Thereby , we can utilize these methods to further improve the overall efficiency of transformers . Recently , Wu et al . ( 2020 ) proposed a new attention-based layer that learns a small number of query vectors to extract information from the input feature map . Similarly , Wu et al . ( 2021 ) replace self-attention with a new recurrent layer that outputs a smaller number of tokens . In comparison , our method directly minimizes the token reconstruction error due to token downsampling . Also , our layer has no learnable parameters and can be easily incorporated into existing vision transformers . | This paper proposes a new token downsampling method for vision transformers, called Token Pooling, to prune redundant tokens efficiently, so as to achieve a better flop-accuracy trade-off. Specifically, token pooling is a nonuniform data-aware downsampling method, which uses clustering algorithms to aggregate information from tokens automatically. To keep important information, the authors also proposed minimizing the reconstruction loss caused by downsampling. The authors performed experiments on the ImageNet-1k dataset, showing that the proposed token pooling can significantly improve the flop-accuracy trade-off over the existing downsampling methods. | SP:4015789338b07b5dab3e4eaf542d5b0b7aaba267
Improving Fairness via Federated Learning | 1 INTRODUCTION . As machine learning is now used to make critical decisions that affect human life , culture , and rights , fair learning has recently received increasing attention . Various fairness notions have been introduced in the past few years ( Dwork et al. , 2012 ; Hardt et al. , 2016 ; Zafar et al. , 2017b ; a ; Kearns et al. , 2018 ; Friedler et al. , 2016 ) . Among various fairness notions , group fairness is the most studied one ( Hardt et al. , 2016 ; Zafar et al. , 2017a ) . Group fairness requires the classifier to treat different groups similarly , where groups are defined with respect to sensitive attributes such as gender and race . One of the most commonly used group fairness notions is demographic parity , which requires that different groups are equally likely to receive desirable outcomes . There has been a large amount of work in training fair classifiers ( Zafar et al. , 2017c ; Hardt et al. , 2016 ; Roh et al. , 2021 ) , and almost all of these studies assume that the learner has access to the entire training data . Unfortunately , this is not the case in many critical applications . To see this , consider a scenario where multiple data owners ( e.g . courts or financial institutions ) have their own private data . Even if they are willing to coordinate with the other institutions to obtain a single model that works well on the combined data , they can not directly share their data with the others due to the privacy act . This precisely sets the core question we aim to answer in this paper – how can we privately train a fair classifier on decentralized data ? To answer this , we first study three existing approaches : Unfederated Fair Learning ( UFL ) , Federated Fair Learning via FEDAVG ( FFL via FEDAVG ) , and Centralized Fair Learning ( CFL ) . See Fig . 1 for illustration . 
Unfederated Fair Learning ( UFL ) and Centralized Fair Learning ( CFL ) UFL is the most naı̈ve yet most private approach . As the name indicates , this strategy refers to a scenario where multiple data owners simply decide to not coordinate . Instead , each of them learns a fair model on its local data to serve its own users . This approach is completely private as the participating data owners share nothing with the others . However , the overall performance of UFL is expected to be poor , because each data owner may have a highly biased view of the entire data distribution , making their locally trained classifiers fair only on a biased subset of the data , but not on the entire data . To evaluate the performance of this approach , we consider the randomized classifier that makes a prediction using a randomly chosen local classifier . Note that this can be viewed as a random customer model , i.e. , a user drawn from the overall data distribution picks and visits one of the institutions , uniformly at random . Another extreme approach is CFL , where a fair model is trained on the pooled data . We expect CFL to achieve the best performance tradeoff , at the cost of no privacy . Federated Fair Learning via FEDAVG ( FFL via FEDAVG ) FFL via FEDAVG applies federated learning ( Konečnỳ et al. , 2017 ) together with existing fair learning algorithms . Federated learning is a distributed learning framework , using which many data owners can collaboratively train a model under the orchestration of a central server while keeping their data decentralized . For instance , under FEDAVG , the standard aggregation protocol for federated learning , the central server periodically computes a weighted average of the locally trained model parameters . If each data owner runs a fair learning algorithm on its own data and these locally trained models are aggregated via FEDAVG , then one might hope to obtain a model that is accurate and fair on the overall data distribution . 
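The FEDAVG aggregation step just described reduces to a size-weighted average of parameters. Below is a stdlib-only sketch with illustrative names of our own (real implementations average per-layer tensors rather than scalar-valued dicts):

```python
def fedavg_aggregate(local_params, sizes):
    """One FedAvg round at the server: average each parameter across
    clients, weighting client i by its local dataset size n_i / n."""
    total = sum(sizes)
    return {
        name: sum((n / total) * params[name]
                  for params, n in zip(local_params, sizes))
        for name in local_params[0]
    }
```

For example, two clients holding weights `{"w": 1.0}` and `{"w": 3.0}` with dataset sizes 1 and 3 aggregate to `w = (1*1 + 3*3) / 4 = 2.5`.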
We call this approach Federated Fair Learning via FEDAVG ( FFL via FEDAVG ) . Goal and Main Contributions The performances of these approaches have not been rigorously analyzed in the literature . In the first place , it has been unknown whether there is any strict performance gap between UFL and FFL via FEDAVG . This makes it unclear whether or not federated learning is necessary at all for decentralized fair learning . The performance comparison between FFL via FEDAVG and CFL also remains unclear . Can FFL via FEDAVG always match the performance of CFL ? If not , can we develop a better federated learning approach for decentralized fair learning ? Inspired by these open questions , this work rigorously analyzes the performance of the existing approaches and proposes a new solution to decentralized fair learning . Our major contributions can be summarized as follows : • We develop a theoretical framework for analyzing various approaches for decentralized fair learning . Using this , we prove the strict ordering between the existing approaches , i.e. , under some mild conditions , UFL < FFL via FEDAVG < CFL , w.r.t . their fairness-accuracy tradeoffs . • Improving upon the state-of-the-art algorithm for ( centralized ) fair learning ( Roh et al. , 2021 ) , we design FEDFB , a novel approach to learning fair classifiers via federated learning . • Via extensive experiments , we show that ( 1 ) our theoretical findings hold under more general settings , and ( 2 ) FEDFB significantly outperforms the existing approaches on various datasets and achieves similar performance as CFL . To the best of our knowledge , our work is the first theoretical performance comparison of various approaches to fair learning on decentralized data . Moreover , it characterizes the necessity of federated learning for improved fairness-accuracy tradeoff , and we expect this to expedite the adoption of federated learning-based approaches . 
Our proposed solution FEDFB achieves state-of-the-art performance on many datasets , sometimes achieving a similar tradeoff as the one trained on centralized data . 2 RELATED WORK . Model Fairness Among various algorithms for fair training ( Zemel et al. , 2013 ; Jiang & Nachum , 2020 ; Zafar et al. , 2017c ; a ; Hardt et al. , 2016 ; Roh et al. , 2021 ; 2020 ) , the current state-of-the-art is FairBatch ( Roh et al. , 2021 ) , which reweights the samples by solving a bi-level optimization problem , whose inner optimizer is the standard training algorithm and outer optimizer aims to find the best weights attached to groups of samples for the sake of model fairness . Federated Learning Unlike traditional , centralized machine learning approaches , federated learning keeps the data decentralized throughout training , reducing the privacy risks involved in traditional approaches ( Konečnỳ et al. , 2017 ; McMahan et al. , 2017 ) . FEDAVG ( McMahan et al. , 2017 ) is the first and most widely used federated learning algorithm . The idea is to iteratively compute a weighted average of the local model parameters , with the weights proportional to the local datasets ’ sizes . Prior work ( Li et al. , 2020b ) has shown that FEDAVG provably converges under some mild conditions . The design of our proposed algorithm FEDFB is also based on that of FEDAVG . Federated Fair Learning for Client Parity There have been only a few attempts in achieving fairness under the federated setting . Moreover , the definition of “ fairness ” used in the existing federated learning work is slightly different from the standard notion used in the centralized setting . One popular definition of fairness in the federated setting is that all clients ( i.e . data owners ) achieve similar accuracies ( or loss values ) , which we call client parity , and several algorithms have been proposed to achieve this goal ( Li et al. , 2021 ; 2020a ; Mohri et al. , 2019 ; Yue et al. , 2021 ; Zhang et al. 
, 2020a ) . To compare our methods with existing federated fair learning algorithms designed for client parity , we also extend our FEDFB such that it can also achieve client parity instead of the standard notion of group fairness . In Sec . 5 , we will show that FEDFB can achieve as good client parity as the existing algorithms , though FEDFB is not specifically designed for client parity . Federated Fair Learning for Group Fairness A few very recent studies ( Ezzeldin et al. , 2021 ; Rodríguez-Gálvez et al. , 2021 ; Chu et al. , 2021 ; Du et al. , 2021 ; Cui et al. , 2021 ) , conducted concurrently with our work , also aim at achieving group fairness under federated learning . In particular , Du et al . ( 2021 ) , Rodríguez-Gálvez et al . ( 2021 ) and Chu et al . ( 2021 ) mimic the centralized fair learning setting by exchanging information for each local update . In contrast , our FEDFB requires much fewer communication rounds , ensuring higher privacy and lower communication costs . Similar to FEDFB , Ezzeldin et al . ( 2021 ) employs FEDAVG and a reweighting mechanism to achieve group fairness . However , their method , FAIRFED , only applies to the case with one single binary sensitive attribute , while Rodríguez-Gálvez et al . ( 2021 ) and Chu et al . ( 2021 ) are not applicable to demographic parity . Therefore , we summarize the comparison of UFL , FFL via FEDAVG , CFL , FEDFB and AGNOSTICFAIR ( Du et al. , 2021 ) in terms of performance and privacy in Fig . 2 . There is also work that aims at achieving local fairness for each data owner ( Cui et al. , 2021 ) . This is in contrast to our work , which instead focuses on achieving global fairness in the overall data distribution . Our setting is more appropriate in domains such as criminal justice and social welfare . 3 PERFORMANCE ANALYSIS OF UFL , FFL VIA FEDAVG AND CFL . In Sec . 3.1 , we first show the necessity of federation by proving that FFL via FEDAVG can achieve strictly higher fairness than UFL .
We then prove the limitation of FFL via FEDAVG by comparing its performance with an oracle bound of CFL in Sec . 3.2 . These two results together imply that federated learning is necessary , but there exists a limit on what can be achieved by FEDAVG-based approaches . We will present informal theoretical statements , deferring the formal versions and proofs to Sec . A . Problem Setting Denote $[N] := \{ 0 , 1 , \dots , N \}$ for any $N \in \mathbb{Z}^+$ . We assume $I$ clients , which have the same amount of data . We further assume a simple binary classification setting with a binary sensitive attribute , i.e. , $x \in \mathcal{X} = \mathbb{R}$ is the input feature , $y \in \mathcal{Y} = [1]$ is the outcome and $a \in \mathcal{A} = [A] = [1]$ is the binary sensitive attribute . Assume $x$ is a continuous variable . The algorithm we will develop later in Sec . 4 will be applicable to general settings . We now introduce parameters for describing the data distribution . Let $y \mid x \sim \mathrm{Bern} ( \eta ( x ) )$ for all clients $i$ , where $\eta ( \cdot ) : \mathcal{X} \to [ 0 , 1 ]$ is a strictly monotone increasing function . Assume $x \mid a = a , i = i \sim P^{(i)}_a$ and $a \mid i = i \sim \mathrm{Bern} ( q_i )$ , where $i$ is the index of the client , $P^{(i)}_a$ is a distribution , and $q_i \in [ 0 , 1 ]$ for $a = 0 , 1$ , $i \in [ I ]$ . Let $\mathcal{F} = \{ f : \mathcal{X} \times \mathcal{A} \to [ 0 , 1 ] \}$ . Given $f \in \mathcal{F}$ and a data sample $( x , a )$ , we consider the following randomized classifier : $\hat{y} \mid x , a \sim \mathrm{Bern} ( f ( x , a ) )$ . Using these definitions , we now define demographic parity , specialized for a binary sensitive attribute : Definition 1 ( Demographic Parity ( binary cases ) ) . $P ( \hat{y} = 1 \mid a = 0 ) = P ( \hat{y} = 1 \mid a = 1 )$ . To measure how unfair a classifier is with respect to demographic parity ( DP ) , we measure the DP disparity , i.e . the absolute difference between the two positive prediction rates : $\mathrm{DP\,Disp} ( f ) = | P ( \hat{y} = 1 \mid a = 0 ) - P ( \hat{y} = 1 \mid a = 1 ) |$ . | This paper investigates how one can achieve group fairness under a decentralized setting.
The authors develop a theoretical framework for decentralized fair learning algorithms and analyze the performance of existing approaches including UFL, FFL via FedAvg, and CFL. They provide novel insights showing that UFL < FFL via FedAvg < CFL. They also propose a new federated fair learning algorithm FEDFB by letting each client share extra information about the unfairness of its local classifier with the server, which then computes the optimal sample weights for the following round of local training. The experimental results demonstrate that FEDFB achieves state-of-the-art performance, while still ensuring data privacy. | SP:f234ce685370ba94d828ba354869e6038b643283
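Definition 1 and the DP disparity above translate directly into a sample estimate. A stdlib-only sketch (names are ours), computing the empirical positive-prediction rate per group:

```python
def dp_disparity(y_hat, a):
    """Empirical DP Disp(f) = |P(yhat=1 | a=0) - P(yhat=1 | a=1)|,
    estimated from paired binary predictions y_hat and attributes a."""
    def pos_rate(group):
        # Fraction of group members predicted positive.
        members = [yh for yh, ai in zip(y_hat, a) if ai == group]
        return sum(members) / len(members)
    return abs(pos_rate(0) - pos_rate(1))
```

A perfectly DP-fair classifier yields a disparity of 0; e.g. predictions `[1,1,0,0]` for group 0 and `[1,0,0,0]` for group 1 give `|0.5 - 0.25| = 0.25`.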
Improving Fairness via Federated Learning | 1 INTRODUCTION . As machine learning is now used to make critical decisions that affect human life , culture , and rights , fair learning has recently received increasing attention . Various fairness notions have been introduced in the past few years ( Dwork et al. , 2012 ; Hardt et al. , 2016 ; Zafar et al. , 2017b ; a ; Kearns et al. , 2018 ; Friedler et al. , 2016 ) . Among various fairness notions , group fairness is the most studied one ( Hardt et al. , 2016 ; Zafar et al. , 2017a ) . Group fairness requires the classifier to treat different groups similarly , where groups are defined with respect to sensitive attributes such as gender and race . One of the most commonly used group fairness notions is demographic parity , which requires that different groups are equally likely to receive desirable outcomes . There has been a large amount of work in training fair classifiers ( Zafar et al. , 2017c ; Hardt et al. , 2016 ; Roh et al. , 2021 ) , and almost all of these studies assume that the learner has access to the entire training data . Unfortunately , this is not the case in many critical applications . To see this , consider a scenario where multiple data owners ( e.g . courts or financial institutions ) have their own private data . Even if they are willing to coordinate with the other institutions to obtain a single model that works well on the combined data , they can not directly share their data with the others due to the privacy act . This precisely sets the core question we aim to answer in this paper – how can we privately train a fair classifier on decentralized data ? To answer this , we first study three existing approaches : Unfederated Fair Learning ( UFL ) , Federated Fair Learning via FEDAVG ( FFL via FEDAVG ) , and Centralized Fair Learning ( CFL ) . See Fig . 1 for illustration . 
Unfederated Fair Learning ( UFL ) and Centralized Fair Learning ( CFL ) UFL is the most naı̈ve yet most private approach . As the name indicates , this strategy refers to a scenario where multiple data owners simply decide to not coordinate . Instead , each of them learns a fair model on its local data to serve its own users . This approach is completely private as the participating data owners share nothing with the others . However , the overall performance of UFL is expected to be poor , because each data owner may have a highly biased view of the entire data distribution , making their locally trained classifiers fair only on a biased subset of the data , but not on the entire data . To evaluate the performance of this approach , we consider the randomized classifier that makes a prediction using a randomly chosen local classifier . Note that this can be viewed as a random customer model , i.e. , a user drawn from the overall data distribution picks and visits one of the institutions , uniformly at random . Another extreme approach is CFL , where a fair model is trained on the pooled data . We expect CFL to achieve the best performance tradeoff , at the cost of no privacy . Federated Fair Learning via FEDAVG ( FFL via FEDAVG ) FFL via FEDAVG applies federated learning ( Konečnỳ et al. , 2017 ) together with existing fair learning algorithms . Federated learning is a distributed learning framework , using which many data owners can collaboratively train a model under the orchestration of a central server while keeping their data decentralized . For instance , under FEDAVG , the standard aggregation protocol for federated learning , the central server periodically computes a weighted average of the locally trained model parameters . If each data owner runs a fair learning algorithm on its own data and these locally trained models are aggregated via FEDAVG , then one might hope to obtain a model that is accurate and fair on the overall data distribution . 
We call this approach Federated Fair Learning via FEDAVG ( FFL via FEDAVG ) . Goal and Main Contributions The performances of these approaches have not been rigorously analyzed in the literature . In the first place , it has been unknown whether there is any strict performance gap between UFL and FFL via FEDAVG . This makes it unclear whether or not federated learning is necessary at all for decentralized fair learning . The performance comparison between FFL via FEDAVG and CFL also remains unclear . Can FFL via FEDAVG always match the performance of CFL ? If not , can we develop a better federated learning approach for decentralized fair learning ? Inspired by these open questions , this work rigorously analyzes the performance of the existing approaches and proposes a new solution to decentralized fair learning . Our major contributions can be summarized as follows : • We develop a theoretical framework for analyzing various approaches for decentralized fair learning . Using this , we prove the strict ordering between the existing approaches , i.e. , under some mild conditions , UFL < FFL via FEDAVG < CFL , w.r.t . their fairness-accuracy tradeoffs . • Improving upon the state-of-the-art algorithm for ( centralized ) fair learning ( Roh et al. , 2021 ) , we design FEDFB , a novel approach to learning fair classifiers via federated learning . • Via extensive experiments , we show that ( 1 ) our theoretical findings hold under more general settings , and ( 2 ) FEDFB significantly outperforms the existing approaches on various datasets and achieves similar performance as CFL . To the best of our knowledge , our work is the first theoretical performance comparison of various approaches to fair learning on decentralized data . Moreover , it characterizes the necessity of federated learning for improved fairness-accuracy tradeoff , and we expect this to expedite the adoption of federated learning-based approaches . 
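The group-reweighting idea behind FairBatch ( Roh et al. , 2021 ), which FEDFB builds on, can be caricatured in a few lines. This is our own simplified heuristic, not the authors' algorithm: the outer step shifts sampling weight toward the group currently receiving fewer positive predictions, nudging the inner training loop toward demographic parity.

```python
def reweight_groups(weights, pos_rate, alpha=0.1):
    """One outer step of a FairBatch-style heuristic: boost the sampling
    weight of the group with the lowest positive-prediction rate, then
    renormalize so the weights remain a distribution."""
    worst = min(pos_rate, key=pos_rate.get)
    new = {g: w + (alpha if g == worst else -alpha / (len(weights) - 1))
           for g, w in weights.items()}
    new = {g: max(w, 0.0) for g, w in new.items()}  # clip at zero
    total = sum(new.values())
    return {g: w / total for g, w in new.items()}
```

In FEDFB the analogous outer update runs at the server using aggregated per-group statistics, so the raw data never leaves the clients.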
Our proposed solution FEDFB achieves state-of-the-art performance on many datasets , sometimes achieving a similar tradeoff as the one trained on centralized data . 2 RELATED WORK . Model Fairness Among various algorithms for fair training ( Zemel et al. , 2013 ; Jiang & Nachum , 2020 ; Zafar et al. , 2017c ; a ; Hardt et al. , 2016 ; Roh et al. , 2021 ; 2020 ) , the current state-of-the-art is FairBatch ( Roh et al. , 2021 ) , which reweights the samples by solving a bi-level optimization problem , whose inner optimizer is the standard training algorithm and outer optimizer aims to find the best weights attached to groups of samples for the sake of model fairness . Federated Learning Unlike traditional , centralized machine learning approaches , federated learning keeps the data decentralized throughout training , reducing the privacy risks involved in traditional approaches ( Konečnỳ et al. , 2017 ; McMahan et al. , 2017 ) . FEDAVG ( McMahan et al. , 2017 ) is the first and most widely used federated learning algorithm . The idea is to iteratively compute a weighted average of the local model parameters , with the weights proportional to the local datasets ’ sizes . Prior work ( Li et al. , 2020b ) has shown that FEDAVG provably converges under some mild conditions . The design of our proposed algorithm FEDFB is also based on that of FEDAVG . Federated Fair Learning for Client Parity There have been only a few attempts in achieving fairness under the federated setting . Moreover , the definition of “ fairness ” used in the existing federated learning work is slightly different from the standard notion used in the centralized setting . One popular definition of fairness in the federated setting is that all clients ( i.e . data owners ) achieve similar accuracies ( or loss values ) , which we call client parity , and several algorithms have been proposed to achieve this goal ( Li et al. , 2021 ; 2020a ; Mohri et al. , 2019 ; Yue et al. , 2021 ; Zhang et al. 
, 2020a ) . To compare our methods with existing federated fair learning algorithms designed for client parity , we also extend our FEDFB such that it can also achieve client parity instead of the standard notion of group fairness . In Sec . 5 , we will show that FEDFB can achieve as good client parity as the existing algorithms , though FEDFB is not specifically designed for client parity . Federated Fair Learning for Group Fairness A few very recent studies ( Ezzeldin et al. , 2021 ; Rodríguez-Gálvez et al. , 2021 ; Chu et al. , 2021 ; Du et al. , 2021 ; Cui et al. , 2021 ) , conducted concurrently with our work , also aim at achieving group fairness under federated learning . In particular , Du et al . ( 2021 ) , Rodríguez-Gálvez et al . ( 2021 ) and Chu et al . ( 2021 ) mimic the centralized fair learning setting by exchanging information for each local update . In contrast , our FEDFB requires much fewer communication rounds , ensuring higher privacy and lower communication costs . Similar to FEDFB , Ezzeldin et al . ( 2021 ) employs FEDAVG and a reweighting mechanism to achieve group fairness . However , their method , FAIRFED , only applies to the case with one single binary sensitive attribute , while Rodríguez-Gálvez et al . ( 2021 ) and Chu et al . ( 2021 ) are not applicable to demographic parity . Therefore , we summarize the comparison of UFL , FFL via FEDAVG , CFL , FEDFB and AGNOSTICFAIR ( Du et al. , 2021 ) in terms of performance and privacy in Fig . 2 . There is also work that aims at achieving local fairness for each data owner ( Cui et al. , 2021 ) . This is in contrast to our work , which instead focuses on achieving global fairness in the overall data distribution . Our setting is more appropriate in domains such as criminal justice and social welfare . 3 PERFORMANCE ANALYSIS OF UFL , FFL VIA FEDAVG AND CFL . In Sec . 3.1 , we first show the necessity of federation by proving that FFL via FEDAVG can achieve strictly higher fairness than UFL .
We then prove the limitation of FFL via FEDAVG by comparing its performance with an oracle bound of CFL in Sec . 3.2 . These two results together imply that federated learning is necessary , but there exists a limit on what can be achieved by FEDAVG-based approaches . We will present informal theoretical statements , deferring the formal versions and proofs to Sec . A . Problem Setting Denote $[N] := \{ 0 , 1 , \dots , N \}$ for any $N \in \mathbb{Z}^+$ . We assume $I$ clients , which have the same amount of data . We further assume a simple binary classification setting with a binary sensitive attribute , i.e. , $x \in \mathcal{X} = \mathbb{R}$ is the input feature , $y \in \mathcal{Y} = [1]$ is the outcome and $a \in \mathcal{A} = [A] = [1]$ is the binary sensitive attribute . Assume $x$ is a continuous variable . The algorithm we will develop later in Sec . 4 will be applicable to general settings . We now introduce parameters for describing the data distribution . Let $y \mid x \sim \mathrm{Bern} ( \eta ( x ) )$ for all clients $i$ , where $\eta ( \cdot ) : \mathcal{X} \to [ 0 , 1 ]$ is a strictly monotone increasing function . Assume $x \mid a = a , i = i \sim P^{(i)}_a$ and $a \mid i = i \sim \mathrm{Bern} ( q_i )$ , where $i$ is the index of the client , $P^{(i)}_a$ is a distribution , and $q_i \in [ 0 , 1 ]$ for $a = 0 , 1$ , $i \in [ I ]$ . Let $\mathcal{F} = \{ f : \mathcal{X} \times \mathcal{A} \to [ 0 , 1 ] \}$ . Given $f \in \mathcal{F}$ and a data sample $( x , a )$ , we consider the following randomized classifier : $\hat{y} \mid x , a \sim \mathrm{Bern} ( f ( x , a ) )$ . Using these definitions , we now define demographic parity , specialized for a binary sensitive attribute : Definition 1 ( Demographic Parity ( binary cases ) ) . $P ( \hat{y} = 1 \mid a = 0 ) = P ( \hat{y} = 1 \mid a = 1 )$ . To measure how unfair a classifier is with respect to demographic parity ( DP ) , we measure the DP disparity , i.e . the absolute difference between the two positive prediction rates : $\mathrm{DP\,Disp} ( f ) = | P ( \hat{y} = 1 \mid a = 0 ) - P ( \hat{y} = 1 \mid a = 1 ) |$ . | The authors propose a novel algorithm to train statistical models respecting fairness criteria like client parity or demographic parity.
This problem has been investigated in the literature for the centralized setting and the authors propose to extend FairBatch (FB) to the federated setting. Finally, the authors show experimentally that their algorithm FedFB provides better fairness than when a server centralizes all the clients data. This work is quite novel but its theoretical statements solely rely on very specific use cases and lack clarity. | SP:f234ce685370ba94d828ba354869e6038b643283 |
L-SR1 Adaptive Regularization by Cubics for Deep Learning | 1 INTRODUCTION . Most deep learning problems involve minimization of the empirical risk of estimation min_Θ f(x; Θ), (1) where Θ ∈ Rⁿ is the set of weights and f is some scalar-valued loss function. To solve (1), various optimization approaches have been implemented, which we describe below. Throughout this paper, we write f(Θ) and f(x; Θ) interchangeably. Gradient and adaptive gradient methods are widely used for training deep neural networks (DNN) for their computational efficiency. The most common approach is Stochastic Gradient Descent (SGD), which, despite its simplicity, performs well over a wide range of applications. However, in a sparse training data setting, SGD performs poorly due to limited training speed (Luo et al. (2019)). To address this problem, adaptive methods such as AdaGrad (Duchi et al. (2011)), AdaDelta (Zeiler (2012)), RMSProp (Hinton et al. (2012)) and Adam (Kingma & Ba (2014)) have been proposed. These methods take the root mean square of the past gradients to influence the current step. Amongst all of these adaptive methods, Adam is arguably the most widely used in a deep learning setting due to its rapid training speed. Newton's method has the potential to exploit curvature information from the second-order derivative (Hessian) matrix (see e.g., Gould et al. (2000)). Generally, the iterates are defined by Θk+1 = Θk − αk ∇²f(Θk)⁻¹ ∇f(Θk), where αk > 0 is a step length defined by a line-search criterion (Nocedal & Wright (2006)). In a DNN setting, the number of parameters (n) of the network can be of the order of millions. Thus storing the Hessian, which takes O(n²) memory, becomes impractical. In addition, the inversion of the Hessian matrix, which takes O(n³) operations, is also impractical.
Even though Newton's method achieves convergence in fewer steps, it becomes computationally intractable on large-scale DNNs. Quasi-Newton methods are alternatives to Newton's method. They compute Hessian approximations, Bk+1, that satisfy the secant condition yk = Bk+1 sk, where sk = Θk+1 − Θk and yk = ∇f(Θk+1) − ∇f(Θk). The most commonly used quasi-Newton method, including in the realm of deep learning, is the limited-memory BFGS update, or L-BFGS (see e.g., Liu & Nocedal (1989)), where the Hessian approximation is given by Bk+1 = Bk + (yk ykᵀ)/(ykᵀ sk) − (Bk sk skᵀ Bk)/(skᵀ Bk sk). (2) The generic L-BFGS quasi-Newton update scheme is described in Algorithm 1, and numerous variants of L-BFGS exist (see Goldfarb et al. (2020); Moritz et al. (2016); Gower et al. (2016)). One advantage of using an L-BFGS update is that the Hessian approximation can be guaranteed to be positive definite, which is highly suitable in line-search settings because the update sk is guaranteed to be a descent direction, meaning there is some step length along this direction that results in a decrease in the objective function (see Nocedal & Wright (2006), Algorithm 6.1). However, because the L-BFGS update is positive definite, it does not readily detect directions of negative curvature for avoiding saddle points. In contrast, the Symmetric Rank-One (SR1) quasi-Newton update is not guaranteed to be positive definite and can result in ascent directions for line-search methods. However, in trust-region settings, where indefinite Hessian approximations are an advantage because they capture directions of negative curvature, the limited-memory SR1 (L-SR1) update has been shown to outperform L-BFGS in DNNs for classification (see Erway et al. (2020)). We discuss this in more detail in Section 2, but in the context of Adaptive Regularization using Cubics.
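The L-BFGS update (2) can be sanity-checked numerically: by construction, the updated matrix satisfies the secant condition Bk+1 sk = yk. A minimal NumPy sketch (illustrative only, not the paper's implementation):

```python
import numpy as np

def bfgs_update(B, s, y):
    """One full-memory BFGS Hessian-approximation update, Eq. (2):
    B_new = B + y y^T / (y^T s) - (B s)(B s)^T / (s^T B s)."""
    Bs = B @ s
    return B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

B = np.eye(3)                      # initial Hessian approximation
s = np.array([1.0, 2.0, 0.5])      # step s_k
y = np.array([0.5, -1.0, 2.0])     # gradient difference y_k
B_new = bfgs_update(B, s, y)
print(np.allclose(B_new @ s, y))   # True: the secant condition B_{k+1} s_k = y_k holds
```

The update is rank-two (one rank-one term per fraction) and preserves symmetry; positive definiteness additionally requires the curvature condition ykᵀ sk > 0.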
Algorithm 1 L-BFGS Quasi-Newton Method with Line Search
Require: Initial weights Θ0, batch size d, learning rate α, dataset D, loss function f(Θ).
for k = 0, 1, 2, ... do
  Sample mini-batch of size d: Dk ⊆ D
  Perform the forward-backward pass over the current mini-batch
  Compute the limited-memory approximation Bk using (2)
  Compute step sk = α Bk⁻¹ ∇Θ f(Θk), where α is the line-search step length
end for
2 L-SR1 ADAPTIVE REGULARIZATION USING CUBICS METHOD . We begin by discussing the L-SR1 update and the adaptive regularization using cubics method for large-scale optimization. Unlike the BFGS update (2), which is a rank-two update, the SR1 update is a rank-one update, given by Bk+1 = Bk + ((yk − Bksk)(yk − Bksk)ᵀ)/(skᵀ(yk − Bksk)) (3) (see Khalfan et al. (1993)). As previously mentioned, Bk+1 in (3) is not guaranteed to be definite. However, it can be shown that the SR1 matrices can converge to the true Hessian (see Conn et al. (1991) for details). We note that the pair (sk, yk) is accepted only when |skᵀ(yk − Bksk)| > ε‖yk − Bksk‖₂², for some constant ε > 0 (see Nocedal & Wright (2006), Sec. 6.2, for details). The SR1 update can be defined recursively as Bk+1 = B0 + Σ_{j=0}^{k} ((yj − Bjsj)(yj − Bjsj)ᵀ)/(sjᵀ(yj − Bjsj)). (4) In limited-memory SR1 (L-SR1) settings, only the last m ≪ n pairs (sj, yj) are stored and used. If Sk+1 = [s0 s1 ··· sk] and Yk+1 = [y0 y1 ··· yk], then Bk+1 admits a compact representation of the form Bk+1 = B0 + Ψk+1 Mk+1 Ψk+1ᵀ, (5) where Ψk+1 = Yk+1 − B0 Sk+1 and Mk+1 = (Dk+1 + Lk+1 + Lk+1ᵀ − Sk+1ᵀ B0 Sk+1)⁻¹, (6) where Lk+1 is the strictly lower triangular part, Uk+1 is the strictly upper triangular part, and Dk+1 is the diagonal part of Sk+1ᵀ Yk+1 = Lk+1 + Dk+1 + Uk+1 (see Byrd et al. (1994) for further details).
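The SR1 update (3) and its acceptance safeguard can likewise be sketched in a few lines of NumPy (illustrative; not the paper's code). Accepted pairs satisfy the secant condition exactly, and the resulting matrix may be indefinite:

```python
import numpy as np

def sr1_update(B, s, y, eps=1e-8):
    """SR1 update, Eq. (3), with the standard acceptance safeguard:
    the pair (s, y) is skipped when |s^T(y - Bs)| <= eps * ||y - Bs||^2."""
    r = y - B @ s
    denom = s @ r
    if abs(denom) <= eps * (r @ r):
        return B  # pair rejected; keep B unchanged
    return B + np.outer(r, r) / denom  # rank-one, symmetric, possibly indefinite

B = np.eye(3)
s = np.array([1.0, 2.0, 0.5])
y = np.array([0.5, -1.0, 2.0])
B_new = sr1_update(B, s, y)
print(np.allclose(B_new @ s, y))  # True: the secant condition holds for accepted pairs
```

Unlike BFGS, no curvature condition ykᵀ sk > 0 is needed, which is exactly why SR1 can model negative curvature but also why a trust-region or cubic-regularization safeguard is required downstream.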
Because of the compact representation of Bk+1, its partial eigendecomposition can be computed (see Burdakov et al. (2017)). In particular, if we compute the QR decomposition of Ψk+1, then we can write Bk+1 = B0 + U‖ Λ̂k+1 U‖ᵀ, where U‖ ∈ R^(n×(k+1)) has orthonormal columns and Λ̂k+1 ∈ R^((k+1)×(k+1)) is a diagonal matrix. If B0 = δk I (see e.g., Lemma 2.4 in Erway et al. (2020)), where 0 < δk < δmax is some scalar and I is the identity matrix, then we obtain the eigendecomposition Bk+1 = Uk+1 Λk+1 Uk+1ᵀ, where Uk+1 = [U‖ U⊥], with U⊥ ∈ R^(n×(n−(k+1))) and Uk+1ᵀ Uk+1 = I. Here, (Λk+1)i = δk + λ̂i for i ≤ k+1, where λ̂i is the ith diagonal entry of Λ̂k+1, and (Λk+1)i = δk for i > k+1. Since the SR1 Hessian approximation can be indefinite, some safeguard must be implemented to ensure that the resulting search direction sk is a descent direction. One such safeguard is to use a "regularization" term. The Adaptive Regularization using Cubics (ARCs) method (Griewank (1981); Cartis et al. (2011)) can be viewed as an alternative to line-search and trust-region methods. At each iteration, an approximate global minimizer of a local (cubic) model, min_{s∈Rⁿ} mk(s) ≡ gkᵀ s + ½ sᵀ Bk s + (µk/3)(Φk(s))³, (7) is determined, where gk = ∇f(Θk), µk > 0 is a regularization parameter, and Φk is a function (norm) that regularizes s. Typically, the Euclidean norm is used. In this work, we propose an alternative "shape-changing" norm that allows us to solve each subproblem (7) exactly. This shape-changing norm was proposed in Burdakov et al. (2017), and it is based on the partial eigendecomposition of Bk. Specifically, if Bk = Uk Λk Ukᵀ is the eigendecomposition of Bk, then we can define the norm ‖s‖Uk := ‖Ukᵀ s‖₃. Applying a change of basis with s̄ = Ukᵀ s and ḡk = Ukᵀ gk, we can rewrite the cubic subproblem as min_{s̄∈Rⁿ} m̄k(s̄) = ḡkᵀ s̄ + ½ s̄ᵀ Λk s̄ + (µk/3) ‖s̄‖₃³.
(8) With this change of basis, we can find a closed-form solution of (8) easily. The proposed Adaptive Regularization using Cubics with L-SR1 (ARCSLSR1) algorithm is given in Algorithm 2. 2.1 CONTRIBUTIONS . The main contributions of this paper are as follows: 1. L-SR1 quasi-Newton methods: The most commonly used quasi-Newton approach is the L-BFGS method. In this work, we use the L-SR1 update to better model the potentially indefinite Hessians of the non-convex loss function. 2. Adaptive Regularization using Cubics (ARCs): Given that the quasi-Newton approximation is allowed to be indefinite, we use an Adaptive Regularization using Cubics approach to safeguard each search direction. 3. Shape-changing regularizer: We use a shape-changing norm to define the cubic regularization term, which allows us to compute the closed-form solution to the cubic subproblem (7). 4. Computational complexity: Let m be the number of previous iterates and gradients stored in memory. The proposed L-SR1 ARC approach is comparable to L-BFGS in terms of storage and compute complexity (see Table 1).
Algorithm 2 Limited-Memory Symmetric Rank-1 Adaptive Regularization using Cubics
1: Given: Θ0, γ2 ≥ γ1, 1 > η2 ≥ η1 > 0, and σ0 > 0
2: for k = 0, 1, 2, ... do
3: Obtain Sk = [s0 ··· sk], Yk = [y0 ··· yk]
4: Solve the generalized eigenproblem Skᵀ Yk u = λ̂ Skᵀ Sk u and let δk = min{λ̂i}
5: Compute Ψk = Yk − δk Sk
6: Perform the QR decomposition Ψk = QR
7: Compute the eigendecomposition R M Rᵀ = P Λ Pᵀ
8: Assign U‖ = QP and U‖ᵀ = Pᵀ Qᵀ
9: Define C‖ = diag(c1, ..., cm), where ci = 2/(λi + √(λi² + 4µ|ḡi|)) and ḡ‖ = U‖ᵀ g
10: Compute α* = 2/(δk + √(δk² + 4µ‖g⊥‖)), where g⊥ = g − U‖ ḡ‖
| The paper describes a limited-memory quasi-Newton method based on SR1 updating, using a variant of the adaptive regularized cubic (ARC) approach to globalization.
The algorithm is applied to training deep neural networks for image classification and autoencoding, and compared to L-BFGS and various SGD variants. The authors claim the main contributions to be the different optimizer ingredients (L-SR1, ARC safeguarding, and the particular shape-changing regularizer) as well as computational complexity similar to L-BFGS. | SP:105266467c5692d681190e94f26a01f6cf0705ee |
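Step 9 of Algorithm 2 relies on the fact that, under the shape-changing norm, subproblem (8) separates across coordinates and each coordinate has a closed-form minimizer s̄i = −ci ḡi. A sketch checking this against a grid search (our own variable names; shown for positive eigenvalues λi, where the per-coordinate objective is strictly convex):

```python
import numpy as np

def solve_diag_cubic(g_bar, lam, mu):
    """Per-coordinate closed-form minimizer of the separable subproblem (8):
    min_s  g_i s + 0.5 lam_i s^2 + (mu/3)|s|^3  is attained at s_i = -c_i g_i,
    with c_i = 2 / (lam_i + sqrt(lam_i^2 + 4 mu |g_i|)) as in step 9 of Algorithm 2."""
    c = 2.0 / (lam + np.sqrt(lam**2 + 4.0 * mu * np.abs(g_bar)))
    return -c * g_bar

g_bar = np.array([1.0, -2.0])
lam = np.array([0.5, 1.0])
mu = 3.0
s_star = solve_diag_cubic(g_bar, lam, mu)

# Stationarity: the derivative g + lam*s + mu*s*|s| vanishes at s_star.
print(np.allclose(g_bar + lam * s_star + mu * s_star * np.abs(s_star), 0.0))  # True

# Cross-check against a dense per-coordinate grid search.
grid = np.linspace(-3, 3, 20001)
best = [grid[np.argmin(gi * grid + 0.5 * li * grid**2 + (mu / 3) * np.abs(grid)**3)]
        for gi, li in zip(g_bar, lam)]
print(np.allclose(s_star, best, atol=1e-3))  # True
```

Plugging s = −c g into the stationarity equation reduces it to µ|g| c² + λ c − 1 = 0, whose positive root is exactly the ci of Algorithm 2.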
L-SR1 Adaptive Regularization by Cubics for Deep Learning | 1 INTRODUCTION . Most deep learning problems involve minimization of the empirical risk of estimation min_Θ f(x; Θ), (1) where Θ ∈ Rⁿ is the set of weights and f is some scalar-valued loss function. To solve (1), various optimization approaches have been implemented, which we describe below. Throughout this paper, we write f(Θ) and f(x; Θ) interchangeably. Gradient and adaptive gradient methods are widely used for training deep neural networks (DNN) for their computational efficiency. The most common approach is Stochastic Gradient Descent (SGD), which, despite its simplicity, performs well over a wide range of applications. However, in a sparse training data setting, SGD performs poorly due to limited training speed (Luo et al. (2019)). To address this problem, adaptive methods such as AdaGrad (Duchi et al. (2011)), AdaDelta (Zeiler (2012)), RMSProp (Hinton et al. (2012)) and Adam (Kingma & Ba (2014)) have been proposed. These methods take the root mean square of the past gradients to influence the current step. Amongst all of these adaptive methods, Adam is arguably the most widely used in a deep learning setting due to its rapid training speed. Newton's method has the potential to exploit curvature information from the second-order derivative (Hessian) matrix (see e.g., Gould et al. (2000)). Generally, the iterates are defined by Θk+1 = Θk − αk ∇²f(Θk)⁻¹ ∇f(Θk), where αk > 0 is a step length defined by a line-search criterion (Nocedal & Wright (2006)). In a DNN setting, the number of parameters (n) of the network can be of the order of millions. Thus storing the Hessian, which takes O(n²) memory, becomes impractical. In addition, the inversion of the Hessian matrix, which takes O(n³) operations, is also impractical.
Even though Newton's method achieves convergence in fewer steps, it becomes computationally intractable on large-scale DNNs. Quasi-Newton methods are alternatives to Newton's method. They compute Hessian approximations, Bk+1, that satisfy the secant condition yk = Bk+1 sk, where sk = Θk+1 − Θk and yk = ∇f(Θk+1) − ∇f(Θk). The most commonly used quasi-Newton method, including in the realm of deep learning, is the limited-memory BFGS update, or L-BFGS (see e.g., Liu & Nocedal (1989)), where the Hessian approximation is given by Bk+1 = Bk + (yk ykᵀ)/(ykᵀ sk) − (Bk sk skᵀ Bk)/(skᵀ Bk sk). (2) The generic L-BFGS quasi-Newton update scheme is described in Algorithm 1, and numerous variants of L-BFGS exist (see Goldfarb et al. (2020); Moritz et al. (2016); Gower et al. (2016)). One advantage of using an L-BFGS update is that the Hessian approximation can be guaranteed to be positive definite, which is highly suitable in line-search settings because the update sk is guaranteed to be a descent direction, meaning there is some step length along this direction that results in a decrease in the objective function (see Nocedal & Wright (2006), Algorithm 6.1). However, because the L-BFGS update is positive definite, it does not readily detect directions of negative curvature for avoiding saddle points. In contrast, the Symmetric Rank-One (SR1) quasi-Newton update is not guaranteed to be positive definite and can result in ascent directions for line-search methods. However, in trust-region settings, where indefinite Hessian approximations are an advantage because they capture directions of negative curvature, the limited-memory SR1 (L-SR1) update has been shown to outperform L-BFGS in DNNs for classification (see Erway et al. (2020)). We discuss this in more detail in Section 2, but in the context of Adaptive Regularization using Cubics.
Algorithm 1 L-BFGS Quasi-Newton Method with Line Search
Require: Initial weights Θ0, batch size d, learning rate α, dataset D, loss function f(Θ).
for k = 0, 1, 2, ... do
  Sample mini-batch of size d: Dk ⊆ D
  Perform the forward-backward pass over the current mini-batch
  Compute the limited-memory approximation Bk using (2)
  Compute step sk = α Bk⁻¹ ∇Θ f(Θk), where α is the line-search step length
end for
2 L-SR1 ADAPTIVE REGULARIZATION USING CUBICS METHOD . We begin by discussing the L-SR1 update and the adaptive regularization using cubics method for large-scale optimization. Unlike the BFGS update (2), which is a rank-two update, the SR1 update is a rank-one update, given by Bk+1 = Bk + ((yk − Bksk)(yk − Bksk)ᵀ)/(skᵀ(yk − Bksk)) (3) (see Khalfan et al. (1993)). As previously mentioned, Bk+1 in (3) is not guaranteed to be definite. However, it can be shown that the SR1 matrices can converge to the true Hessian (see Conn et al. (1991) for details). We note that the pair (sk, yk) is accepted only when |skᵀ(yk − Bksk)| > ε‖yk − Bksk‖₂², for some constant ε > 0 (see Nocedal & Wright (2006), Sec. 6.2, for details). The SR1 update can be defined recursively as Bk+1 = B0 + Σ_{j=0}^{k} ((yj − Bjsj)(yj − Bjsj)ᵀ)/(sjᵀ(yj − Bjsj)). (4) In limited-memory SR1 (L-SR1) settings, only the last m ≪ n pairs (sj, yj) are stored and used. If Sk+1 = [s0 s1 ··· sk] and Yk+1 = [y0 y1 ··· yk], then Bk+1 admits a compact representation of the form Bk+1 = B0 + Ψk+1 Mk+1 Ψk+1ᵀ, (5) where Ψk+1 = Yk+1 − B0 Sk+1 and Mk+1 = (Dk+1 + Lk+1 + Lk+1ᵀ − Sk+1ᵀ B0 Sk+1)⁻¹, (6) where Lk+1 is the strictly lower triangular part, Uk+1 is the strictly upper triangular part, and Dk+1 is the diagonal part of Sk+1ᵀ Yk+1 = Lk+1 + Dk+1 + Uk+1 (see Byrd et al. (1994) for further details).
Because of the compact representation of Bk+1, its partial eigendecomposition can be computed (see Burdakov et al. (2017)). In particular, if we compute the QR decomposition of Ψk+1, then we can write Bk+1 = B0 + U‖ Λ̂k+1 U‖ᵀ, where U‖ ∈ R^(n×(k+1)) has orthonormal columns and Λ̂k+1 ∈ R^((k+1)×(k+1)) is a diagonal matrix. If B0 = δk I (see e.g., Lemma 2.4 in Erway et al. (2020)), where 0 < δk < δmax is some scalar and I is the identity matrix, then we obtain the eigendecomposition Bk+1 = Uk+1 Λk+1 Uk+1ᵀ, where Uk+1 = [U‖ U⊥], with U⊥ ∈ R^(n×(n−(k+1))) and Uk+1ᵀ Uk+1 = I. Here, (Λk+1)i = δk + λ̂i for i ≤ k+1, where λ̂i is the ith diagonal entry of Λ̂k+1, and (Λk+1)i = δk for i > k+1. Since the SR1 Hessian approximation can be indefinite, some safeguard must be implemented to ensure that the resulting search direction sk is a descent direction. One such safeguard is to use a "regularization" term. The Adaptive Regularization using Cubics (ARCs) method (Griewank (1981); Cartis et al. (2011)) can be viewed as an alternative to line-search and trust-region methods. At each iteration, an approximate global minimizer of a local (cubic) model, min_{s∈Rⁿ} mk(s) ≡ gkᵀ s + ½ sᵀ Bk s + (µk/3)(Φk(s))³, (7) is determined, where gk = ∇f(Θk), µk > 0 is a regularization parameter, and Φk is a function (norm) that regularizes s. Typically, the Euclidean norm is used. In this work, we propose an alternative "shape-changing" norm that allows us to solve each subproblem (7) exactly. This shape-changing norm was proposed in Burdakov et al. (2017), and it is based on the partial eigendecomposition of Bk. Specifically, if Bk = Uk Λk Ukᵀ is the eigendecomposition of Bk, then we can define the norm ‖s‖Uk := ‖Ukᵀ s‖₃. Applying a change of basis with s̄ = Ukᵀ s and ḡk = Ukᵀ gk, we can rewrite the cubic subproblem as min_{s̄∈Rⁿ} m̄k(s̄) = ḡkᵀ s̄ + ½ s̄ᵀ Λk s̄ + (µk/3) ‖s̄‖₃³.
(8) With this change of basis, we can find a closed-form solution of (8) easily. The proposed Adaptive Regularization using Cubics with L-SR1 (ARCSLSR1) algorithm is given in Algorithm 2. 2.1 CONTRIBUTIONS . The main contributions of this paper are as follows: 1. L-SR1 quasi-Newton methods: The most commonly used quasi-Newton approach is the L-BFGS method. In this work, we use the L-SR1 update to better model the potentially indefinite Hessians of the non-convex loss function. 2. Adaptive Regularization using Cubics (ARCs): Given that the quasi-Newton approximation is allowed to be indefinite, we use an Adaptive Regularization using Cubics approach to safeguard each search direction. 3. Shape-changing regularizer: We use a shape-changing norm to define the cubic regularization term, which allows us to compute the closed-form solution to the cubic subproblem (7). 4. Computational complexity: Let m be the number of previous iterates and gradients stored in memory. The proposed L-SR1 ARC approach is comparable to L-BFGS in terms of storage and compute complexity (see Table 1).
Algorithm 2 Limited-Memory Symmetric Rank-1 Adaptive Regularization using Cubics
1: Given: Θ0, γ2 ≥ γ1, 1 > η2 ≥ η1 > 0, and σ0 > 0
2: for k = 0, 1, 2, ... do
3: Obtain Sk = [s0 ··· sk], Yk = [y0 ··· yk]
4: Solve the generalized eigenproblem Skᵀ Yk u = λ̂ Skᵀ Sk u and let δk = min{λ̂i}
5: Compute Ψk = Yk − δk Sk
6: Perform the QR decomposition Ψk = QR
7: Compute the eigendecomposition R M Rᵀ = P Λ Pᵀ
8: Assign U‖ = QP and U‖ᵀ = Pᵀ Qᵀ
9: Define C‖ = diag(c1, ..., cm), where ci = 2/(λi + √(λi² + 4µ|ḡi|)) and ḡ‖ = U‖ᵀ g
10: Compute α* = 2/(δk + √(δk² + 4µ‖g⊥‖)), where g⊥ = g − U‖ ḡ‖
| This paper investigates the application of a certain Quasi-Newton algorithm, the Limited-Memory Symmetric Rank-1 (L-SR1) algorithm, in deep learning problems.
The benefit of this technique over similar more widely investigated methods that use a positive definite approximation of the Hessian, such as stochastic L-BFGS, is the fact that the L-SR1 approximation is not guaranteed to be definite, and thus has the potential to have a more accurate approximation of the true Hessian. Since in this case line-search methods may return ascent directions, authors propose a specific form of Adaptive Regularization using Cubics (ARC) as an alternative. Numerical simulations are provided comparing the performance of the proposed algorithm to SGD, adaptive methods such as Adam and a naive L-BFGS implementation. | SP:105266467c5692d681190e94f26a01f6cf0705ee |
Learning Rich Nearest Neighbor Representations from Self-supervised Ensembles | 1 INTRODUCTION . Pretrained convolutional neural networks are among the most important tools in computer vision. State-of-the-art results on many benchmarks, ranging from classification to object detection to pose estimation, have been set using a pretrained model, such as an ImageNet classifier, as a network initialization (Kornblith et al., 2019; He et al., 2019; Chen et al., 2020a; Grill et al., 2020; Kolesnikov et al., 2019). Transfer learning is an entire field focused on studying and utilizing this phenomenon. While supervised ImageNet classifiers were the dominant feature extractors of choice for many years, self-supervised models have recently begun to take their place. Methods such as MoCo (v2), SimCLR (v2), SimSiam, PIRL, SwAV, BYOL, and Barlow Twins all claim transferability competitive with or superior to that of ImageNet classifiers (He et al., 2019; Chen et al., 2020c;a;b; Chen & He, 2020; Misra & Maaten, 2020; Caron et al., 2021; Grill et al., 2020; Zbontar et al., 2021). As such, the question of which initialization to use has arisen; benchmark studies have sought to compare methods under dozens of different settings (Goyal et al., 2019; Zhai et al., 2019). Even when a decision has been made to use a particular feature extractor, the utility and knowledge of the other options are then left unutilized. To address this concern, we consider ensembling, a common practice in the supervised setting (Dietterich, 2000; Hinton et al., 2015). Ensembling involves combining the predictions obtained by multiple different models in some way, typically by averaging or a similar operation. In the supervised setting, such outputs are aligned and such an operation easily captures the combined knowledge of the models.
In the self-supervised setting, however, such alignment is not guaranteed, particularly when dealing with independently trained contrastive learners, which many pretrained models of choice are. Averaging the features is still useful, and obtains reasonably strong image representations (Section 4), but we show that it is possible to build an ensembling strategy that yields richer, stronger representations than the mean feature. We do so without training any new CNN components, allowing the same frozen backbone to be used across applications. How should ensembling be approached in the self-supervised setting? We contend that the goal of a model ensemble is to capture the useful information provided by the different models in a single representation. We consider the "capture" of information from a recoverability perspective: if every network's features can be recovered by some fixed operation on a representation vector, for all data samples, then such representations are useful.
Algorithm 1 PyTorch-like pseudocode of our method
# Training Phase
# Initialize representations to the average feature
train_feature_list = [net(images) for net in ensemble]
avg_feat = average(train_feature_list)  # shape = (n_points x feature_dimension)
learned_train_reps = Parameter(avg_feat.detach())  # initialize params with avg_feat
mlps = [MLP() for net in ensemble]  # 1 MLP per feature extractor
opt = SGD(mlps.parameters() + learned_train_reps)  # optimize both mapping MLPs and input representations
for images_idx, images in trainloader:
    ensemble_feats = [net(images) for net in ensemble]  # get ensemble features
    outputs = [mlp(learned_train_reps[images_idx]) for mlp in mlps]  # map learned representations through the different MLPs
    loss = cosine_loss(ensemble_feats, outputs)
    loss.backward()
# Inference Phase
test_feature_list = [net(images) for net in ensemble]
learned_test_reps = average(test_feature_list)
opt = SGD(learned_test_reps)  # freeze MLPs at inference time
for images_idx, images in testloader:
    # same as the training loop
MLP(): a multi-layer perceptron model. Parameter(t): PyTorch function that takes the argument array t and initializes trainable parameters with that value.
While concatenation of features can trivially achieve this objective, we show that such an operation is in fact suboptimal in terms of the behavior of the derived feature space. We instead propose to directly learn, via gradient descent, a set of representations that contain all of the information necessary to derive the ensemble features. Our architecture is shown in Figure 1, with example pseudocode in Algorithm 1. Specifically, we show that extracting features from an ensemble of self-supervised models using this technique improves the nearest neighbor (NN) performance when evaluated on downstream supervised tasks. 2 RELATED WORK . Supervised ensembling: This is a ubiquitous technique in machine learning (Dietterich, 2000). In addition to the large number of online contests won through such approaches (Andres), ensembling has also been demonstrated to achieve state-of-the-art performance on standard computer vision benchmarks (Huang et al., 2017). Most approaches employ Bayesian ensembling, where the predictions of separate networks are averaged (Wu et al., 2021; Hinton et al., 2015; Dietterich, 2000). While this relies on the alignment of objectives between the networks, we show that such averaging of the intermediate features does indeed generate representations superior to those of individual models. Our method differs from this literature, however, in the learning done on top of the ensemble as well as the no-label setting of our representation learning. Knowledge distillation ( Hinton et al.
, 2015 ) : A related vein of work is knowledge distillation, where the knowledge of an ensemble is distilled into a single neural network, or a single model is distilled into another network. This second version, known as self-distillation (Zhang et al., 2019), is highly relevant to our single-model setting, as we are able to obtain improvements while operating purely in the feature space of a single model. Our goal is not to discard the ensembled models as it is in knowledge distillation, but our method bears similarities to that of Hydra (Mandt et al.), where a network is trained to output representations capable of recovering the ensembled outputs. We note that our resulting accuracies consistently surpass average ensembling, a baseline that Hydra considers an upper bound on their method (though the differing settings do not lend themselves to apples-to-apples comparisons). Xu et al. (2020) propose a self-supervised knowledge distillation algorithm that uses both supervised and self-supervised losses to train a teacher network, and then distills this combined knowledge to a student network. They show that this combination improves the student performance compared to traditional KD. In contrast to their work, our goal is not to learn a separate student network and we do not assume access to labeled data; instead, our main goal is to extract rich nearest neighbor representations from an ensemble of self-supervised models. k-Nearest neighbor (k-NN) classifiers: k-NN is a non-parametric classifier which has been shown to be consistent (Devroye et al. (1996)), i.e., asymptotically its excess risk over the Bayes-optimal classifier goes to zero as k → ∞ and k/N → 0, where N is the number of training samples.
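A minimal cosine-similarity k-NN classifier of the kind used to evaluate representation quality can be sketched as follows (a generic implementation, not the paper's code):

```python
import numpy as np

def knn_predict(train_feats, train_labels, query_feats, k=5):
    """Cosine-similarity k-NN majority vote over L2-normalized feature vectors."""
    tr = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    sims = q @ tr.T                           # (n_query, n_train) cosine similarities
    nn = np.argsort(-sims, axis=1)[:, :k]     # indices of the k nearest neighbors
    votes = train_labels[nn]                  # (n_query, k) neighbor labels
    return np.array([np.bincount(v).argmax() for v in votes])

# Toy check with two well-separated clusters.
train = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[1.0, 0.1], [0.1, 1.0]])
print(knn_predict(train, labels, queries, k=3))  # [0 1]
```

Outside of k, the evaluator has no trainable parameters, which is precisely why it gives a consistent, replicable measurement of feature quality.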
While these theoretical conditions may not hold in practice, a main advantage of using k-NN is that it is parameter-free outside of the choice of k (which typically does not change qualitative rankings) and thus is a consistent and easily replicable measurement of representation quality. Additionally, k-NN makes the decision process interpretable (Papernot & McDaniel, 2018; Dalitz, 2009), which is important in various applications (Mehrabi et al., 2021; Vellido, 2019; Gilpin et al., 2018). For these reasons, our paper focuses on extracting rich features from self-supervised ensembles that are conducive to k-NN classification. On a related note, SLADE (Duan et al., 2021) leverages unlabeled data to improve distance metric learning, which is essentially the setting of our evaluation framework. While the goal is similar, SLADE uses supervised training to initialize learning and generate pseudo-labels for the unlabeled points, whereas our method assumes zero label access. Additionally, SLADE is concerned with learning a new network from scratch, as opposed to an ensembling framework. Pretrained Models: Our method relies on the efficacy of pretrained self-supervised models. The specific methods we employ are SimCLR, SwAV, Barlow Twins, RotNet, PIRL, as well as traditional label supervision. In addition to the above works, Goyal et al. (2019); Zhai et al. (2019); Kolesnikov et al. (2019) benchmark and demonstrate the generalization efficacy of pretrained models in various settings. Gradient descent at inference time: One atypical facet of our method is the use of gradient descent at inference time to directly learn new representations. While this approach is quite uncommon, we are not the first to leverage backpropagation at inference time. Zadeh et al. (2019) use backpropagation at inference time to learn representations from images using a generative "auto-decoder" framework, and Park et al.
( 2019 ) ; Sitzmann et al. (2020) employ similar approaches to learn implicit representations of shapes. Sun et al. (2020) consider new samples as one-image self-supervised learning problems, and perform a brief optimization of a self-supervised learning objective on a new image (modifying the feature extractor's parameters) before performing inference, and Shocher et al. (2018) train a CNN at test time for the purpose of super-resolution. 3 METHODS . We present a method for directly learning rich ensembled unsupervised representations of images through gradient descent. Consider a training collection of images X = {xi}, i = 1, ..., n, and an ensemble of convolutional neural network feature extractors Θ = {θj}, j = 1, ..., m. In this work the θj have previously been trained in a self-supervised manner on ImageNet and are ResNet-50s (Deng et al., 2009; He et al., 2016). Denote the L2-normalized features obtained by removing the linear/MLP heads of these networks and extracting the intermediate features post-pooling (and post-ReLU) as Z = {{zi^(j)} i=1..n} j=1..m, where zi^(j) denotes the intermediate feature corresponding to θj(xi). We initialize a memory bank of representations of X, with one entry for each xi; these entries have the same feature dimensionality as the zi^(j). This memory bank is analogous to the type used in early contrastive learning works such as Wu et al. (2018). We denote this memory bank Ψ = {ψk} k=1..n. Each ψk is initialized to the L2-normalized average representation of the ensemble, ψk = Σ_{j=1}^m zk^(j) / ‖Σ_{j=1}^m zk^(j)‖; note that the sum operation is equivalent to averaging due to the normalization being performed. To map the memory bank to the ensembled features, we employ a set of multi-layer perceptrons (MLPs), Φ = {φℓ} ℓ=1..m, each corresponding to a feature extractor θℓ.
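The memory-bank initialization described above can be sketched in a few lines of NumPy (illustrative; array layout and names are ours):

```python
import numpy as np

def init_memory_bank(ensemble_feats):
    """psi_k = sum_j z_k^(j) / || sum_j z_k^(j) ||: each memory-bank entry is the
    L2-normalized sum (equivalently, average) of the ensemble's unit features.
    ensemble_feats has shape (m_models, n_points, d)."""
    s = ensemble_feats.sum(axis=0)                       # (n_points, d)
    return s / np.linalg.norm(s, axis=1, keepdims=True)  # renormalize each row

# Two models, one image, 2-d unit features.
z = np.stack([np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])])
psi = init_memory_bank(z)
print(psi)  # the unit vector halfway between the two features, ~[[0.7071, 0.7071]]
```

Because the entries are renormalized, summing and averaging give the same direction, which is why the text treats the two interchangeably.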
Unless noted otherwise in our experiments, these φℓ have 2 layers, both with output dimension equal to their input dimension (2048 for ResNet-50 features). ReLU activations are used after both layers, the first as a traditional activation function, and the second to align the network in mapping to the post-ReLU set Z. During training, a batch of images {xi} i∈I is sampled with indices I ⊂ {1 ... n}. The corresponding ensemble features, ZI = {{zi^(j)} i∈I} j=1..m, are retrieved, as are the memory bank representations ΨI = {ψk} k∈I. Note that no image augmentations are included in our framework, meaning that the zi^(j) are typically cached to lessen computational complexity. Each banked representation is then fed through each of the m MLPs, Φ, resulting in a set of mapped representations Φ(ΨI) = {φℓ(ψi)} ℓ∈{1 ... m}, i∈I. The goal of the training is to maximize the alignment of these mapped features Φ(ΨI) with the original ensemble features ZI. This is done by training both the networks Φ and the representations Ψ using a cosine loss between Φ(ΨI) and ZI; gradients are computed for both the MLPs and the representations for each batch. Once training is completed, the φℓ are frozen. During inference, when a new image x′ is given, the above process is repeated with the frozen MLPs. Concretely, the ensemble features θℓ(x′) are computed and averaged to initialize a new representation ψ′. ψ′ is then optimized via gradient descent to maximize the cosine similarity of each φℓ(ψ′) with θℓ(x′); ψ′ then serves as our representation of x′. The described method results in representations superior to either the average or the concatenated feature in terms of nearest-neighbor accuracy. We highlight several exciting aspects of our method: • The learning of a representation at test time via gradient descent is an uncommon approach. A few existing methods do use it, such as auto-decoders in Zadeh et al.
( 2019 ) or the implicit 3D representation literature ( Park et al. , 2019 ; Sitzmann et al. , 2020 ) . There are also techniques , such as Sun et al . ( 2020 ) for generalization , which use gradients to shape the parameters prior to inference . • The vast majority of ensembling literature focuses on the supervised setting , where the training objectives of the ensembled networks are identical and thus aligned . Very little work has been performed on improving a group of self-supervised features with an outside auxiliary signal . Hydra ( Mandt et al . ) considers a similar setting , but with a focus on knowledge distillation of the ensemble into a single network . • Our method is extremely adaptable to different settings . Networks trained on multiple objectives , with different head architectures , can be usefully ensembled as demonstrated in Sec . 4 . This could trivially be extended to using networks of different architectures as well ( e.g . VGG + ResNet + AlexNet ) ( Simonyan & Zisserman , 2014 ; He et al. , 2016 ; Krizhevsky et al. , 2012 ) . The flexibility of our approach additionally extends to the input data . While we use networks pretrained on ImageNet , the ensembling provides benefit both on ImageNet as well as in the self-supervised transfer learning setting . This transfer can either be performed by training new Φ on the target dataset or by training Φ on ImageNet and then using the frozen MLPs to learn representations on the target dataset . • Our method as presented requires only a single forward pass of each image through each ensembled model as we do not use data augmentation . This allows caching of CNN features and fast training . | This paper proposes a new way to learn self-supervised model ensembling. Their novel approach learns representations via gradient descent directly at inference time after having pretrained feature extractors. The authors conduct a series of experiments to show the efficacy of their method.
| SP:5867aa8bb3eaaf5f916865d662be8667318a514f |
Learning Rich Nearest Neighbor Representations from Self-supervised Ensembles | 1 INTRODUCTION . Pretrained convolutional neural networks are one of the most important tools in computer vision , with widespread application across the field . State-of-the-art on many benchmarks ranging from classification to object detection to pose estimation has been set using a pretrained model , such as an ImageNet classifier , as a network initialization ( Kornblith et al. , 2019 ; He et al. , 2019 ; Chen et al. , 2020a ; Grill et al. , 2020 ; Kolesnikov et al. , 2019 ) . Transfer learning is an entire field focused on studying and utilizing this phenomenon . While supervised ImageNet classifiers were the dominant feature extractors of choice for many years , recently self-supervised models have begun to take their place . Methods such as MoCo ( v2 ) , SimCLR ( v2 ) , SimSiam , PIRL , SwAV , BYOL , and Barlow Twins all claim transferability competitive with or superior to that of ImageNet classifiers ( He et al. , 2019 ; Chen et al. , 2020c ; a ; b ; Chen & He , 2020 ; Misra & Maaten , 2020 ; Caron et al. , 2021 ; Grill et al. , 2020 ; Zbontar et al. , 2021 ) . As such , the question of what initialization to use has arisen ; benchmark studies have sought to compare methods under dozens of different settings ( Goyal et al. , 2019 ; Zhai et al. , 2019 ) . Even when a decision has been made to use a particular feature extractor , the utility and knowledge of the other options are then left untapped . To address this concern , we consider ensembling , a common practice in the supervised setting ( Dietterich , 2000 ; Hinton et al. , 2015 ) . Ensembling models involves combining the predictions obtained by multiple different models in some way , typically by averaging or a similar operation . In the supervised setting , such outputs are aligned and such an operation easily captures the combined knowledge of the models .
In the self-supervised setting , however , such alignment is not guaranteed , particularly when dealing with independently trained contrastive learners , which many pretrained models of choice are . Averaging the features is still useful , and obtains reasonably strong image representations ( Section 4 ) , but we show that it is possible to build an ensembling strategy that yields richer , stronger representations than the mean feature . We do so without training any new CNN components , allowing for the same frozen backbone to be used across applications . How should ensembling be approached in the self-supervised setting ? We contend that the goal of a model ensemble is to capture the useful information provided by the different models in a single representation . We consider the “ capture ” of information from a recoverability perspective : if every network ’ s features can be recovered by some fixed operation on a representation vector , for all data samples , then such representations are useful .

Algorithm 1 PyTorch-like pseudocode of our method :

# Training Phase
# Initialize representations to average feature
train_feature_list = [ net ( images ) for net in ensemble ]
avg_feat = average ( train_feature_list )  # shape = ( n_points x feature_dimension )
learned_train_reps = Parameter ( avg_feat.detach ( ) )  # initialize params with avg_feat
mlps = [ MLP ( ) for net in ensemble ]  # 1 MLP per feature extractor
opt = SGD ( mlps.parameters ( ) + learned_train_reps )  # optimize both mapping MLPs and input representations
for images_idx , images in trainloader :
    ensemble_feats = [ net ( images ) for net in ensemble ]  # Get ensemble features
    outputs = [ mlp ( learned_train_reps [ images_idx ] ) for mlp in mlps ]  # Map learned representations through different MLPs
    loss = cosine_loss ( ensemble_feats , outputs )
    loss.backward ( )

# Inference Phase
test_feature_list = [ net ( images ) for net in ensemble ]
learned_test_reps = average ( test_feature_list )
opt = SGD ( learned_test_reps )  # Freeze MLPs at inference time
for images_idx , images in testloader :
    ...  # Same as training loop

MLP ( ) : a multi-layer perceptron model . Parameter ( t ) : PyTorch function that takes the argument array t and initializes trainable parameters with that value .

While concatenation of features can trivially achieve this objective , we show that such an operation is in fact suboptimal in terms of the behavior of the derived feature space . We instead propose to directly learn via gradient descent a set of representations that contain all of the information necessary to derive the ensemble features . Our architecture is shown in Figure 1 , with example pseudocode in Algorithm 1 . Specifically , we show that extracting features from an ensemble of self-supervised models using this technique improves the nearest neighbor ( NN ) performance when evaluated on downstream supervised tasks . 2 RELATED WORK . Supervised ensembling : This is a ubiquitous technique in machine learning ( Dietterich , 2000 ) . In addition to the large number of online contests won through such approaches ( Andres ) , ensembling has also been demonstrated to achieve state-of-the-art performance on standard computer vision benchmarks ( Huang et al. , 2017 ) . Most approaches employ Bayesian ensembling , where the predictions of separate networks are averaged ( Wu et al. , 2021 ; Hinton et al. , 2015 ; Dietterich , 2000 ) . While this relies on the alignment of objectives between the networks , we show that such an averaging of the intermediate features does indeed generate representations superior to those of individual models . Our method differs from this literature , however , in the learning done on top of the ensemble as well as the no-label setting of our representation learning . Knowledge distillation ( Hinton et al .
, 2015 ) : This is a related vein of work , in which the knowledge of an ensemble is distilled into a single neural network , or a single model is distilled into another network . This second version , known as self-distillation ( Zhang et al. , 2019 ) , is highly relevant to our single-model setting , as we are able to obtain improvements while operating purely in the feature space of a single model . Our goal is not to discard the ensembled models as it is in knowledge distillation , but our method bears similarities to that of Hydra ( Mandt et al . ) , where a network is trained to output representations capable of recovering the ensembled outputs . We note that our resulting accuracies consistently surpass average ensembling , a baseline that Hydra considers as an upper bound to their method ( but the differing settings do not lend themselves to apples-to-apples comparisons ) . Xu et al . ( 2020 ) propose a self-supervised knowledge distillation algorithm that uses both a supervised and a self-supervised loss to train a teacher network , and then distills this combined knowledge to a student network . They show that this combination improves the student performance compared to traditional KD . In contrast to their work , our goal is not to learn a separate student network and we do not assume access to labeled data ; instead , our main goal is to extract rich nearest neighbor representations from an ensemble of self-supervised models . k-Nearest neighbor ( k-NN ) classifiers : k-NN is a non-parametric classifier which has been shown to be a consistent approximator ( Devroye et al. , 1996 ) , i.e. , asymptotically its empirical risk goes to zero as k → ∞ and k/N → 0 , where N is the number of training samples .
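As a concrete illustration of the evaluation protocol built on this rule, here is a minimal k-NN classifier over fixed feature vectors; the toy clusters and the choice k = 5 are illustrative assumptions, not settings from the paper.

```python
import numpy as np
from collections import Counter

def knn_predict(x, X, y, k=5):
    # Majority vote among the k nearest training points under Euclidean distance.
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return Counter(y[idx].tolist()).most_common(1)[0][0]

rng = np.random.default_rng(0)
# Two well-separated toy "representation" clusters, 50 points each, in 8 dimensions.
X = np.vstack([rng.normal(-3, 1, size=(50, 8)), rng.normal(3, 1, size=(50, 8))])
y = np.array([0] * 50 + [1] * 50)

pred = knn_predict(np.full(8, 2.5), X, y, k=5)  # query near the second cluster
```

Outside of k, nothing here is tuned, which is what makes the protocol an easily replicable probe of representation quality.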
While these theoretical conditions may not hold in practice , a main advantage of using k-NN is that it is parameter-free outside of the choice of k ( which typically does not change qualitative rankings ) and thus is a consistent and easily replicable measurement of representation quality . Additionally , k-NN makes the decision process interpretable ( Papernot & McDaniel , 2018 ; Dalitz , 2009 ) , which is important in various applications ( Mehrabi et al. , 2021 ; Vellido , 2019 ; Gilpin et al. , 2018 ) . For these reasons , our paper focuses on extracting rich features from self-supervised ensembles that are conducive to k-NN classification . On a related note , SLADE ( Duan et al. , 2021 ) leverages unlabeled data to improve distance metric learning , which is essentially the setting of our evaluation framework . While the goal is similar , SLADE uses supervised training to initialize learning and generate pseudolabels for the unlabeled points , whereas our method assumes zero label access . Additionally , SLADE is concerned with learning a new network from scratch , as opposed to an ensembling framework . Pretrained Models : Our method relies on the efficacy of pretrained self-supervised models . Specific methods we employ are SimCLR , SwAV , Barlow Twins , RotNet , PIRL , as well as traditional label supervision . In addition to the above works , Goyal et al . ( 2019 ) ; Zhai et al . ( 2019 ) ; Kolesnikov et al . ( 2019 ) benchmark and demonstrate the generalization efficacy of pretrained models in various settings . Gradient descent at inference time : One atypical facet of our method is the use of gradient descent at inference time to directly learn new representations . While this approach is quite uncommon , we are not the first to leverage backpropagation at inference time . Zadeh et al . ( 2019 ) uses backpropagation at inference time to learn representations from images using a generative “ auto-decoder ” framework and Park et al .
( 2019 ) ; Sitzmann et al . ( 2020 ) employ similar approaches to learn implicit representations of shapes . Sun et al . ( 2020 ) considers new samples as one-image self-supervised learning problems , and performs a brief optimization of a self-supervised learning objective on a new image ( modifying the feature extractor ’ s parameters ) before performing inference , and Shocher et al . ( 2018 ) train a CNN at test time for the purpose of super-resolution . 3 METHODS . We present a method for directly learning rich ensembled unsupervised representations of images through gradient descent . Consider a training collection of images X = { x_i } _ { i=1 } ^n and an ensemble of convolutional neural network feature extractors Θ = { θ_j } _ { j=1 } ^m . In this work the θ_j have previously been trained in a self-supervised manner on ImageNet and are ResNet-50s ( Deng et al. , 2009 ; He et al. , 2016 ) . Denote the L2-normalized features obtained by removing the linear/MLP heads of these networks and extracting intermediate features post-pooling ( and ReLU ) as Z = { { z_i^ ( j ) } _ { i=1 } ^n } _ { j=1 } ^m , where z_i^ ( j ) denotes the intermediate feature corresponding to θ_j ( x_i ) . We initialize a memory bank of representations of X , with one entry for each x_i ; these entries have the same feature dimensionality as the z_i^ ( j ) . This memory bank is analogous to the type used in early contrastive learning works such as Wu et al . ( 2018 ) . We denote this memory bank Ψ = { ψ_k } _ { k=1 } ^n . Each ψ_k is initialized to the L2-normalized average representation of the ensemble , ψ_k = ( Σ_ { j=1 } ^m z_k^ ( j ) ) / ‖ Σ_ { j=1 } ^m z_k^ ( j ) ‖ ; note that the sum operation is equivalent to averaging due to the normalization being performed . To map the memory bank to the ensembled features , we employ a set of multi-layer perceptrons ( MLPs ) , Φ = { φ_ℓ } _ { ℓ=1 } ^m , each corresponding to a feature extractor θ_j .
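A toy numerical sketch of this pipeline — L2-normalized-average initialization followed by gradient descent on a representation against a cosine loss — is given below. To keep the gradient explicit, the trained mapping networks φ_ℓ are stood in for by frozen random linear maps (in the paper they are trained two-layer ReLU MLPs), and all sizes, seeds, and step sizes are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy sizes (assumptions): m = 3 frozen extractors, feature dimension d = 16.
m, d = 3, 16
Ws = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(m)]  # frozen stand-ins for the maps phi_l
zs = [normalize(rng.standard_normal(d)) for _ in range(m)]         # L2-normalized ensemble features of one image

# Initialize psi to the L2-normalized average of the ensemble features (as for the memory bank entries).
psi = normalize(np.sum(zs, axis=0))

def loss_and_grad(psi):
    # Cosine loss sum_l (1 - cos(W_l psi, z_l)) and its gradient with respect to psi.
    loss, grad = 0.0, np.zeros(d)
    for W, z in zip(Ws, zs):
        u = W @ psi
        nu = np.linalg.norm(u)
        loss += 1.0 - (u @ z) / nu
        # d cos / d u = z/|u| - (u.z) u/|u|^3 ; chain rule through u = W psi
        grad -= W.T @ (z / nu - (u @ z) * u / nu**3)
    return loss, grad

before = sum(cosine(W @ psi, z) for W, z in zip(Ws, zs)) / m
for _ in range(500):               # plain gradient descent on psi with the maps frozen
    _, g = loss_and_grad(psi)
    psi = psi - 0.02 * g
after = sum(cosine(W @ psi, z) for W, z in zip(Ws, zs)) / m
```

At inference the same loop runs for a new image with the maps frozen, so the only trainable object is the representation itself; the average alignment of the mapped representation with the ensemble features improves over the averaged-feature initialization.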
Unless noted otherwise in our experiments , these φ_ℓ have 2 layers , both with output dimension equal to their input ( 2048 for ResNet-50 features ) . ReLU activations are used after both layers , the first as a traditional activation function , and the second to align the network in mapping to the post-ReLU set Z . During training , a batch of images { x_i } _ { i∈I } is sampled with indices I ⊂ { 1 ... n } . The corresponding ensemble features , Z_I = { { z_i^ ( j ) } _ { i∈I } } _ { j=1 } ^m , are retrieved , as are the memory bank representations Ψ_I = { ψ_k } _ { k∈I } . Note that no image augmentations are included in our framework , meaning that the z_i^ ( j ) are typically cached to lessen computational complexity . Each banked representation is then fed through each of the m MLPs , Φ , resulting in a set of mapped representations Φ ( Ψ_I ) = { φ_ℓ ( ψ_i ) } _ { ℓ ∈ { 1 ... m } , i∈I } . The goal of the training is to maximize the alignment of these mapped features Φ ( Ψ_I ) with the original ensemble features Z_I . This is done by training both the networks Φ and the representations Ψ using a cosine loss between Φ ( Ψ_I ) and Z_I ; gradients are computed for both the MLPs and the representations for each batch . Once training is completed , the φ_ℓ are frozen . During inference , when a new image x′ is given , the above process is repeated with the frozen MLPs . Concretely , the ensemble features θ_ℓ ( x′ ) are computed and averaged to initialize a new representation ψ′ . ψ′ is then optimized via gradient descent to maximize the cosine similarity of each φ_ℓ ( ψ′ ) with θ_ℓ ( x′ ) ; ψ′ then serves as our representation of x′ . The described method results in representations superior to either the average or concatenated feature in terms of nearest-neighbor accuracy . We highlight several exciting aspects of our method : • The learning of a representation at test time via gradient descent is an uncommon approach . Existing methods do exist , such as auto-decoders in Zadeh et al .
( 2019 ) or the implicit 3D representation literature ( Park et al. , 2019 ; Sitzmann et al. , 2020 ) . There are also techniques , such as Sun et al . ( 2020 ) for generalization , which use gradients to shape the parameters prior to inference . • The vast majority of ensembling literature focuses on the supervised setting , where the training objectives of the ensembled networks are identical and thus aligned . Very little work has been performed on improving a group of self-supervised features with an outside auxiliary signal . Hydra ( Mandt et al . ) considers a similar setting , but with a focus on knowledge distillation of the ensemble into a single network . • Our method is extremely adaptable to different settings . Networks trained on multiple objectives , with different head architectures , can be usefully ensembled as demonstrated in Sec . 4 . This could trivially be extended to using networks of different architectures as well ( e.g . VGG + ResNet + AlexNet ) ( Simonyan & Zisserman , 2014 ; He et al. , 2016 ; Krizhevsky et al. , 2012 ) . The flexibility of our approach additionally extends to the input data . While we use networks pretrained on ImageNet , the ensembling provides benefit both on ImageNet as well as in the self-supervised transfer learning setting . This transfer can either be performed by training new Φ on the target dataset or by training Φ on ImageNet and then using the frozen MLPs to learn representations on the target dataset . • Our method as presented requires only a single forward pass of each image through each ensembled model as we do not use data augmentation . This allows caching of CNN features and fast training . | The paper proposes a new self-supervised ensembling method to obtain a better feature representation. Instead of using conventional averaging or concatenation to ensemble multiple features, this paper proposes a different inference scheme where the backbone is also updated at test time in a self-supervised fashion.
Experiments are conducted to empirically show the superiority of their method compared to existing methods. | SP:5867aa8bb3eaaf5f916865d662be8667318a514f |
Toward Faithful Case-based Reasoning through Learning Prototypes in a Nearest Neighbor-friendly Space. | 1 INTRODUCTION . With the ever-increasing usage of deep learning in real-world situations , interpretable and explainable machine learning has become more and more important . This is because the deeply complex nature and large number of parameters of deep learning models raise concerns about reliability , fairness , transparency , and trust . While some research has focused on explaining the current “ black-box ” models by offering extra insight about their decisions ( Ribeiro et al. , 2016 ; Lundberg & Lee , 2017 ; Selvaraju et al. , 2017 ) , another approach is to create models that are inherently interpretable ( Rudin , 2019 ) . Some classes of machine learning models are deemed to be inherently more interpretable ( Doshi-Velez & Kim , 2017 ) . These include linear models like logistic regression , decision trees , rule-based models , naive Bayes , and nearest neighbor models . These models are usually simple or abstract enough that a human can get an understanding of how they make specific decisions . Decisions made by nearest neighbor methods , in particular , can be easily understood by a human . But an important disadvantage of nearest neighbor models is that their inference computational cost scales with the amount of training data . This can make them impractical when the dataset is too large . One of the main ways suggested to combat the cost of nearest neighbor models is to use prototypes . Prototypes are points in the dataset that represent a group of points around them . If the number of prototypes is smaller than the number of data points , using them as a proxy for the real data in nearest neighbor methods can significantly speed up the inference process . Depending on the number and the positions of the prototypes , performance differences compared to using the whole dataset can be minimal .
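The cost argument can be made concrete with a minimal sketch contrasting a full nearest-neighbor lookup (one distance per training point) with a nearest-prototype lookup (one distance per prototype); the Gaussian toy data and the crude class-mean prototypes are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(1)

def nearest_label(x, points, labels):
    # Label of the closest point under Euclidean distance.
    d = np.linalg.norm(points - x, axis=1)
    return int(labels[int(np.argmin(d))])

# Two Gaussian classes: n = 1000 training points, but only 2 prototypes (one per class).
n = 1000
X0 = rng.normal(loc=-2.0, scale=0.5, size=(n // 2, 2))
X1 = rng.normal(loc=+2.0, scale=0.5, size=(n // 2, 2))
X = np.vstack([X0, X1])
y = np.array([0] * (n // 2) + [1] * (n // 2))

prototypes = np.array([X0.mean(axis=0), X1.mean(axis=0)])  # crude prototypes: class means
proto_labels = np.array([0, 1])

query = np.array([1.5, 2.1])
full = nearest_label(query, X, y)                       # scans 1000 points
fast = nearest_label(query, prototypes, proto_labels)   # scans 2 prototypes
```

With well-placed prototypes the two lookups agree while the prototype version does a factor of n/2 less work per query.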
Using prototypes for classification is not a new concept . Research has been done on nearest prototype classifiers ( Bezdek & Castelaz , 1977 ; Kohonen , 1990 ; Seo et al. , 2003 ) and their validity ( Kuncheva & Bezdek , 1998 ; Graf et al. , 2009 ) . One important domain where these types of classifiers are used is interpretable classification ( Bien & Tibshirani , 2011 ; Kim et al. , 2014 ; Chen et al. , 2018 ; Li et al. , 2018 ) . Recently , Li et al . ( 2018 ) presented an explainable neural network structure that represents samples by their distance to some prototypes learned in an embedding space . A fully connected layer gets these distances as input and outputs the final class . It learns the prototypes along with the network weights . However , the drawback of that method is that it learns the prototypes in a poorly constructed embedding space . Because the structural similarities in the embedding space are not preserved , prototypes are less likely to be surrounded by a group of similar samples . Since these prototypes are the explanations for the model , one may naturally expect that the prototypes should represent unknown homogeneous clusters that might exist within classes ( i.e. , subclasses ) , for example different breeds of dogs within the class dog . We empirically show that Li et al . ( 2018 ) fails to preserve such subclasses and even destroys them . We argue that this is due to the lack of a mechanism for learning a proper embedding space in conjunction with learning the prototype positions . In this paper , we offer an interpretable prototype classification method called Deep Embedded Prototype Network ( DEPN ) . This method is a neural network architecture that finds a good embedding space and a number of suitable prototypes for classification . The embedding space and the prototypes in DEPN are constructed in such a way that high classification performance is achieved .
In addition , the decisions of the classifier can be explained by showing the nearest prototypes as comparative examples . We show that the combination of our embedding space and prototype selection method leads to more cohesive prototype explanations than the state of the art , Li et al . ( 2018 ) . The embedding space of DEPN is formed such that samples that are closer to each other in terms of Euclidean distance are more similar to each other than samples that are further away . This is because the embedding space is learned based on the nearest-neighbor rule . In particular , following Chauhan et al . ( 2021 ) , an upper bound on the classification error of a nearest-neighbor classifier is minimized . While it is very hard to find annotated datasets containing similarity measures between each pair of samples , it is possible to create datasets that contain superclasses , each consisting of multiple subclasses . Subclass similarity is , therefore , a good proxy for overall sample similarity . The advantage of Chauhan et al . ( 2021 ) is that it has been shown to create embedding spaces where even unlabeled subclasses of the data distribution are preserved . This happens despite the model not having any direct knowledge of the subclass data . This means , for example , that if one of the classes in the data consists of two subclasses , these two subclasses are less likely to be mixed together in the resulting embedding space . For example , suppose that the problem is classification between cats and dogs using only a picture of the animal . If the dataset consists of multiple dog and cat breeds , we can assume that it contains multiple subclasses of cats and dogs . If the nearest neighbors of a dog sample in the embedding space are other dog samples that belong to the same breed ( subclass ) , the closest samples to a prototype might also belong to the same subclass .
We believe that if the subclass of the closest prototype used to classify a new sample is the same as that of the sample , the prototype is a better explanation for the decision taken by the model than one whose overall class label is correct but whose subclass shows a different breed of the animal . It is important to note that our method does not have access to the subclass data ; in fact , it does not need the subclass labels to be present at all . The final result is a neural network that classifies the input using a number of prototypes . It can explain its decisions by offering the closest prototype of that class to the human user . Due to the way our embedding space preserves subclass information , the offered explanation is much more likely to be suitable compared to the previous state-of-the-art method of Li et al . ( 2018 ) . In particular , our measurements show that our work preserves the subclass neighborhoods in the embedding space while Li et al . ( 2018 ) actively degrades them . At the same time , DEPN has the same overall performance when measured in superclass accuracy , while showing , on average , a 125 % improvement in subclass accuracy . There are previous works that might seem similar to our own . Deep k-Nearest-Neighbors ( DkNN ) by Papernot & McDaniel ( 2018 ) is an interpretability-based method for deep neural networks . This method uses nearest-neighbor labels in the intermediate layers of a neural network to assess the validity of a decision . The main idea behind DkNN is that when the network makes a decision , the nearest neighbors in the intermediate layers should also share the same label . If this is not the case for most of the layers , the decision may be due to quirks of the final layers of the network . While this focus on nearest neighbors might seem similar to our work , DkNN ultimately offers a confidence score for the decisions of the network .
In contrast , our method offers examples ( prototypes ) as explanations for the decisions it takes . In fact , DkNN can be used alongside DEPN to offer confidence scores for its decisions . The work done by Li et al . ( 2018 ) is another method that is quite similar to ours . While the network architecture in both works is mostly the same ( especially if MsDNN-AE is used for the encoder ) , the way they utilize the architecture is quite different . That work is trained end-to-end , while DEPN uses a gentler two-stage training procedure . The r1 loss used in that work does not consider the prototype labels , while the one in DEPN does . That work uses the decoder outputs as the representations of its prototypes , while the decoder in DEPN is an optional regularization system for MsDNN . If the decoder is not able to output good representations , that work will not be able to offer proper explanations for its decisions . The explanations in DEPN , in contrast , are the nearest same-class prototype , which will be represented by a training sample . Another difference is that the explanations offered by that method are much less likely to be of the same subclass as those of DEPN . The work done by Chen et al . ( 2018 ) offers prototype explanations based on parts of an image . These explanations are inherently different from the sample-based explanations DEPN offers . This method is also limited to data that can be split into parts and is not feasible for other types of data ( like tabular data ) . 2 METHODOLOGY . Our method is a neural network with an uncommon structure suited for prototype training . It consists of a backbone encoder network that transforms the input to a metric space learned based on a nearest-neighbor rule . There is an optional decoder , used in an autoencoder , which gets its inputs from the embedding space and reconstructs the original input .
Then , we have n prototypes , which are modeled as a special layer on top of the encoder . The value of n is a hyperparameter : the larger n is , the more detailed and specific our explanations become , at the expense of more computational cost . Finally , we have a normal fully connected neural network layer that gets the output of the prototype layer and outputs the class likelihoods . This output layer can have a softmax activation function . The special prototype layer in our method gets the transformed samples from the embedding space as input and outputs a vector for each sample denoting the distance between that sample and each prototype in the embedding space . This arrangement was first proposed by Li et al . ( 2018 ) and lets us position the prototypes during the training process of the neural network itself . A schema of our network architecture is illustrated in Figure 2 . 2.1 TRAINING PROCEDURE . The training happens in two steps . The first step is to train the embedding space so that a good embedding space is formed at the output of the encoder . This is done by using the special MsDNN loss ( Chauhan et al. , 2021 ) on the embedding space . The MsDNN loss is a variation of the triplet loss ( Weinberger & Saul , 2009 ) that uses Gaussian estimations of the overall positive and negative samples as a substitute for the positive and negative samples themselves . It has been shown to be effective in preserving the unseen subclasses of the data and is consequently a good pick for our purpose . Optionally , a decoder that gets the transformed embeddings as inputs and reconstructs the original samples can also be used in this stage of the training , similar to MsDNN-AE ( Chauhan et al. , 2021 ) . Training stops when accuracy using nearest-neighbor classification on a validation set is maximized . Our experiments show that unseen subclass accuracy , when measured by nearest-neighbor classification accuracy , generally follows the coarse-grained classification accuracy .
This lets us optimize our embedding space using only the coarse-grained class data and still get within the general vicinity of the best settings for fine-grained subclass accuracy in the embedding space , without having any subclass labels . The choice of the hyperparameter σ in MsDNN is important to the viability of the created embedding space . This hyperparameter determines the size of the effective area for weighting the samples : higher values result in further samples being considered close neighbors when creating the positive and negative data points for MsDNN ’ s contrastive loss . We found that starting from small values and doubling the value of σ until coarse-grained validation accuracy peaks is a good procedure . Extremely large values of σ encourage subclasses to merge . We found that fine-grained and coarse-grained accuracy follow the same pattern with respect to the value of σ . This means that coarse-grained validation accuracy is generally a good indicator of fine-grained accuracy as well . The second step of the training procedure trains the whole network in an end-to-end manner for classification , but with the weights of the encoder network frozen . In essence , what this stage does is train the prototype and output layers of the network . The input samples are fed to the encoder , and the output of the output layer is used for the final classification . The loss function used in this stage can be seen in Equation 1 : L = λ1 CEL ( ȳ , y ) + λ2 r1 ( e , p ) + λ3 r2 ( e , p ) ( 1 ) where L is the loss value , CEL is the cross-entropy loss , ȳ represents the true coarse-grained labels of the input samples , y is the predicted output of the network , r1 and r2 are special losses defined on the prototype layer , e represents the transformed training samples in the embedding space , p represents the set of learned prototypes in the embedding space , and λ1 , λ2 , λ3 ∈ R+ are the weights of the individual loss terms .
r1 and r2 are defined as : r1 = Σ_ { e_i ∈ e } min_ { p_j ∈ p_ { c_i } } ‖ p_j − e_i ‖_2^2 , with p_ { c_i } = { p_k ∈ p | ∃ n : y_n = y_i , ‖ p_k − e_n ‖_2 = min_ { e_m ∈ e } ‖ p_k − e_m ‖_2 } ( 2 ) and r2 = Σ_ { p_i ∈ p } min_ { e_j ∈ e } ‖ p_i − e_j ‖_2^2 ( 3 ) where e_i is an element of e with index i , p_ { c_i } is the set of all prototypes that have the same class label as e_i ( determined by the label of the closest sample in the current batch to that prototype ) , and p_j is an element of p_ { c_i } . r1 is a loss function that sums up the distance between each input sample in the current batch and its nearest same-class prototype . The class label of a prototype in this case is the class label of its nearest sample in the current batch . On the other hand , r2 sums up the distance between each prototype and its closest input sample in the current batch . Minimizing r2 makes sure that each prototype is close to at least one input sample . Minimizing r1 , on the other hand , makes sure that each sample is close to at least one prototype of the same class . Minimizing these two losses combined makes sure that the prototypes are well positioned inside the distribution of the data . To increase the speed of the process , the initial positions of the prototypes are chosen as the positions of a number of randomly selected training samples . Minimizing the overall loss function positions the prototypes in key positions of the data distribution and trains the output layer to use the distance of samples to each prototype to classify them . It is important to note that freezing the weights of the encoder ensures that these loss functions do not determine the embedding space itself and only use it to classify the samples using the prototypes . We found that not freezing the encoder results in the destruction of the subclass neighborhoods in the embedding space . The stopping criteria in this stage of training are the plateauing of the loss function and of the validation accuracy .
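The r1 and r2 terms above can be sketched for one batch as follows; the function name, the toy data, and the prototype placement are illustrative assumptions, not the authors' code. Prototype labels are derived from the nearest batch sample, as described.

```python
import numpy as np

rng = np.random.default_rng(2)

def prototype_losses(E, y, P):
    # E: batch embeddings (b x d), y: batch labels (b,), P: prototypes (n_p x d).
    D = np.linalg.norm(E[:, None, :] - P[None, :, :], axis=2)  # pairwise distances, b x n_p
    proto_label = y[np.argmin(D, axis=0)]  # label of each prototype = label of its nearest batch sample
    r1 = 0.0
    for i in range(len(E)):
        same = np.where(proto_label == y[i])[0]  # prototypes sharing sample i's class
        if len(same):
            r1 += D[i, same].min() ** 2          # squared distance to nearest same-class prototype
    r2 = float((D.min(axis=0) ** 2).sum())       # each prototype to its closest batch sample
    return r1, r2

# Toy batch of 8 embeddings in 4 dimensions, two classes; prototypes placed exactly on two samples.
E = rng.standard_normal((8, 4))
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
P = E[[0, 4]].copy()

r1, r2 = prototype_losses(E, y, P)
```

Since each prototype here coincides with a batch sample, r2 is exactly zero, while r1 remains positive for the samples that are not themselves prototypes.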
In our experience, the latter happens sooner than the former. At this stage, each prototype is moved to its nearest sample in the training set. This sample is called the prototype sample, and its class label is used as the class label of that prototype. At this point, the network is ready for inference. An optional further step is to fine-tune the output layer using only the cross-entropy loss to compensate for the moving of the prototypes, but we saw no noticeable change in the output of the network even without this fine-tuning. A schema of the training procedures can be found in Figure 2.1. | The paper proposes a novel combination of prototype learning and deep nearest neighbor learning in order to achieve an embedding of the input data that is more friendly for prototype learning while maintaining the computational efficiency and interpretability of a prototype approach. In more detail, the approach first trains a Multi-scale deep nearest neighbor network to embed the input data. Then, it jointly learns a prototype layer and a fully connected layer which outputs class logits based on distances to prototypes. Finally, the learned prototypes are replaced by the closest sample from the actual training data to enhance interpretability. In experiments on MNIST, FashionMNIST, CIFAR-10, and ImageNet, the paper shows that the proposed approach outperforms state-of-the-art prototype learning. | SP:71be5d3799276330663155b5c9c04b4a8074e800
Toward Faithful Case-based Reasoning through Learning Prototypes in a Nearest Neighbor-friendly Space. | 1 INTRODUCTION. With the ever-increasing use of deep learning in real-world situations, interpretable and explainable machine learning has become more and more important. This is because the deeply complex nature and large number of parameters of deep learning models raise concerns about reliability, fairness, transparency, and trust. While some research has focused on explaining current "black-box" models by offering extra insight into their decisions (Ribeiro et al., 2016; Lundberg & Lee, 2017; Selvaraju et al., 2017), another approach is to create models that are inherently interpretable (Rudin, 2019). Some classes of machine learning models are deemed inherently more interpretable (Doshi-Velez & Kim, 2017). These include linear models such as logistic regression, decision trees, rule-based models, naive Bayes, and nearest neighbor models. These models are usually simple or abstract enough that a human can get an understanding of how they make specific decisions. Decisions made by nearest neighbor methods, in particular, can be easily understood by a human. But an important disadvantage of nearest neighbor models is that their inference cost scales with the amount of training data, which can make them impractical when the dataset is too large. One of the main ways suggested to combat the cost of nearest neighbor models is to use prototypes. Prototypes are points in the dataset that represent a group of points around them. If the number of prototypes is smaller than the number of data points, using them as a proxy for the real data in nearest neighbor methods can significantly speed up inference. Depending on the number and positions of the prototypes, the performance difference compared to using the whole dataset can be minimal.
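The cost argument can be made concrete with a minimal numpy sketch (our illustration; names are hypothetical): classifying against k prototypes instead of N training points cuts per-query distance evaluations from O(N) to O(k).

```python
import numpy as np

def nearest_prototype_predict(X, prototypes, proto_labels):
    """Classify each row of X by the label of its nearest prototype
    (Euclidean distance).  With k prototypes standing in for N training
    points, each query needs only k distance evaluations instead of N."""
    # squared distances, shape (n_queries, n_prototypes), via broadcasting
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    return proto_labels[d2.argmin(axis=1)]
```

For example, with prototypes at (0, 0) and (10, 10) labelled 0 and 1, a query at (1, 0) is assigned label 0 and a query at (9, 10) label 1.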
Using prototypes for classification is not a new concept. Research has been done on nearest prototype classifiers (Bezdek & Castelaz, 1977; Kohonen, 1990; Seo et al., 2003) and their validity (Kuncheva & Bezdek, 1998; Graf et al., 2009). One important domain where these types of classifiers are used is interpretable classification (Bien & Tibshirani, 2011; Kim et al., 2014; Chen et al., 2018; Li et al., 2018). Recently, Li et al. (2018) presented an explainable neural network structure that represents samples by their distances to a set of prototypes learned in an embedding space. A fully connected layer takes these distances as input and outputs the final class; the prototypes are learned along with the network weights. However, the drawback of that method is that it learns the prototypes in a poorly constructed embedding space. Because the structural similarities in the embedding space are not preserved, prototypes are less likely to be surrounded by a group of similar samples. Since these prototypes are the explanations for the model, one may naturally expect the prototypes to represent unknown homogeneous clusters that might exist within classes (i.e., subclasses), for example different breeds of dogs within the class dog. We empirically show that Li et al. (2018) fails to preserve such subclasses and even destroys them. We argue that this is due to the lack of a mechanism for learning a proper embedding space in conjunction with learning the prototype positions. In this paper, we offer an interpretable prototype classification method called Deep Embedded Prototype Network (DEPN). This method is a neural network architecture that finds a good embedding space and a number of suitable prototypes for classification. The embedding space and the prototypes in DEPN are constructed in such a way that high classification performance is achieved.
In addition, the decisions of the classifier can be explained by showing the nearest prototypes as comparative examples. We show that the combination of our embedding space and prototype selection method leads to more cohesive prototype explanations than the state of the art, Li et al. (2018). The embedding space of DEPN is formed such that samples that are closer to each other in terms of Euclidean distance are more similar to each other than samples that are further away. This is because the embedding space is learned based on the nearest-neighbor rule; in particular, following Chauhan et al. (2021), an upper bound on the classification error of a nearest-neighbor classifier is minimized. While it is very hard to find annotated datasets containing similarity measures between each pair of samples, it is possible to create datasets that contain superclasses, each consisting of multiple subclasses. Subclass similarity is, therefore, a good proxy for overall sample similarity. The advantage of Chauhan et al. (2021) is that it has been shown to create embedding spaces in which even unlabeled subclasses of the data distribution are preserved, despite the model not having any direct knowledge of subclass data. This means that if, for example, one of the classes in the data consists of two subclasses, the resulting embedding space is less likely to mix these two subclasses together. For example, suppose the problem is classification between cats and dogs using only a picture of the animal. If the dataset consists of multiple dog and cat breeds, we can assume that it contains multiple subclasses of cats and dogs. If the nearest neighbors of a dog sample in the embedding space are other dog samples that belong to the same breed (subclass), the closest samples to a prototype might also belong to the same subclass.
We believe that if the subclass of the closest prototype used to classify a new sample is the same as that of the sample, the prototype is a better explanation for the model's decision than one whose overall class label is correct but whose subclass shows a different breed of the animal. It is important to note that our method does not have access to the subclass data; in fact, it does not need subclass labels to be present at all. The final result is a neural network that classifies the input using a number of prototypes. It can explain its decisions by offering the closest prototype of the predicted class to the human user. Due to the way our embedding space preserves subclass information, the offered explanation is much more likely to be suitable than those of the previous state-of-the-art method, Li et al. (2018). In particular, our measurements show that our method preserves the subclass neighborhoods in the embedding space while Li et al. (2018) actively degrades them. At the same time, DEPN matches the overall superclass accuracy while showing, on average, a 125% improvement in subclass accuracy. There are previous works that might seem similar to our own. Deep k-Nearest-Neighbors (DkNN) by Papernot & McDaniel (2018) is an interpretability method for deep neural networks. This method uses nearest-neighbor labels in the intermediate layers of a neural network to assess the validity of a decision. The main idea behind DkNN is that when the network makes a decision, the nearest neighbors in the intermediate layers should also share the same label; if this is not the case for most of the layers, the decision may be an artifact of the final layers of the network. While this focus on nearest neighbors might seem similar to our work, DkNN ultimately offers a confidence score for the decisions of the network.
In contrast, our method offers examples (prototypes) as explanations for its decisions. In fact, DkNN can be used alongside DEPN to offer confidence scores for its decisions. The work of Li et al. (2018) is another method quite similar to ours. While the network architecture in both works is mostly the same (especially if MsDNN-AE is used for the encoder), the way the architecture is utilized is quite different. Their network is trained end-to-end, whereas DEPN uses a gentler two-stage training procedure. Their r1 loss does not consider the prototype labels, while the one in DEPN does. Their method uses the decoder outputs as the representations of its prototypes, while the decoder in DEPN is an optional regularization component for MsDNN; if the decoder is unable to produce good representations, their method cannot offer proper explanations for its decisions. The explanations in DEPN, in contrast, are the nearest same-class prototype, which is represented by a training sample. Another difference is that the explanations offered by their method are much less likely to be of the same subclass than those of DEPN. The work of Chen et al. (2018) offers prototype explanations based on parts of an image. These explanations are inherently different from the sample-based explanations DEPN offers; that method is also limited to data that can be split into parts and is not feasible for other types of data (such as tabular data). 2 METHODOLOGY. Our method is a neural network with an uncommon structure suited to prototype training. It consists of a backbone encoder network that transforms the input into a metric space learned based on a nearest-neighbor rule. There is an optional decoder, used in an autoencoder, which takes its inputs from the embedding space and reconstructs the original input.
Then, we have n prototypes, modeled as a special layer on top of the encoder. The value of n is a hyperparameter: the larger n is, the more detailed and specific our explanations become, at the expense of more computational cost. Finally, a normal fully connected layer takes the output of the prototype layer and outputs the class likelihoods; this output layer can have a softmax activation function. The special prototype layer takes the transformed samples in the embedding space as input and outputs, for each sample, a vector of distances between that sample and each prototype in the embedding space. This arrangement was first proposed by Li et al. (2018) and lets us position the prototypes during the training of the neural network itself. A schema of our network architecture is illustrated in Figure 2. 2.1 TRAINING PROCEDURE. The training happens in two steps. The first step trains the embedding space so that a good embedding space forms at the output of the encoder. This is done by applying the MsDNN loss (Chauhan et al., 2021) to the embedding space. The MsDNN loss is a variation of the triplet loss (Weinberger & Saul, 2009) that uses Gaussian estimations of the overall positive and negative samples as a substitute for the positive and negative samples themselves. It has been shown to be effective at preserving unseen subclasses of the data and is consequently a good pick for our purpose. Optionally, a decoder that takes the transformed embeddings as inputs and reconstructs the original samples can also be used in this stage of training, similar to MsDNN-AE (Chauhan et al., 2021). Training stops when nearest-neighbor classification accuracy on a validation set is maximized. Our experiments show that unseen-subclass accuracy, when measured by nearest-neighbor classification accuracy, generally follows the coarse-grained classification accuracy.
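The forward pass just described (embedding → prototype-distance vector → fully connected layer → softmax) can be sketched in a few lines of numpy; all names here are our illustrative stand-ins, not the paper's code:

```python
import numpy as np

def prototype_layer(e, prototypes):
    """Distance vector per sample: entry j is ||e_i - p_j||^2."""
    return ((e[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)

def forward(e, prototypes, W, b):
    """Prototype distances -> fully connected layer -> softmax class probs."""
    d = prototype_layer(e, prototypes)          # (n_samples, n_prototypes)
    logits = d @ W + b                          # (n_samples, n_classes)
    z = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
    return z / z.sum(axis=1, keepdims=True)
```

Because the fully connected layer acts on distances rather than raw features, its weights tell us how much each prototype's proximity contributes to each class score.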
This lets us optimize our embedding space using only the coarse-grained class labels and still land in the general vicinity of the best settings for fine-grained subclass accuracy, without having any subclass labels. The choice of the hyperparameter σ in MsDNN is important to the viability of the created embedding space. This hyperparameter determines the size of the effective area for weighting the samples: higher values cause more distant samples to be considered close neighbors when creating the positive and negative data points for MsDNN's contrastive loss. We found that starting from a small value and doubling σ until coarse-grained validation accuracy peaks is a good procedure; extremely large values of σ encourage subclasses to merge. We also found that fine-grained and coarse-grained accuracy follow the same pattern with respect to σ, which means that coarse-grained validation accuracy is generally a good indicator of fine-grained accuracy as well. The second step of the training procedure trains the whole network end-to-end for classification, but with the weights of the encoder network frozen. In essence, this stage trains the prototype and output layers of the network: the input samples are fed to the encoder, and the output of the output layer is used for the final classification. The loss function used in this stage is given in Equation 1:

L = λ1 CEL(ȳ, y) + λ2 r1(e, p) + λ3 r2(e, p)   (1)

where L is the loss value, CEL is the cross-entropy loss, ȳ represents the true coarse-grained labels of the input samples, y is the predicted output of the network, r1 and r2 are special losses defined on the prototype layer, e represents the transformed training samples in the embedding space, p represents the set of learned prototypes in the embedding space, and λ1, λ2, λ3 ∈ ℝ+ are the weights of the individual loss terms.
r1 and r2 are defined as:

r1 = Σ_{ei∈e} min_{pj∈pci} ‖pj − ei‖₂² ,  where  pci = { pk ∈ p | ∃n : yn = yi , ‖pk − en‖₂ = min_{em∈e} ‖pk − em‖₂ }   (2)

r2 = Σ_{pi∈p} min_{ej∈e} ‖pi − ej‖₂²   (3)

where ei is the element of e with index i, pci is the set of all prototypes that have the same class label as ei (a prototype's label being the label of the closest sample to it in the current batch), and pj is an element of pci. r1 sums the distance between each input sample in the current batch and its nearest same-class prototype. r2, on the other hand, sums the distance between each prototype and its closest input sample in the current batch. Minimizing r2 ensures that each prototype is close to at least one input sample; minimizing r1 ensures that each sample is close to at least one prototype of its own class. Minimizing the two losses together keeps the prototypes well positioned inside the data distribution. To speed up the process, the initial positions of the prototypes are chosen as the positions of a number of randomly selected training samples. Minimizing the overall loss function positions the prototypes at key locations of the data distribution and trains the output layer to classify samples by their distances to each prototype. It is important to note that freezing the weights of the encoder ensures that these loss functions do not shape the embedding space itself; they only use it to classify the samples via the prototypes. We found that not freezing the encoder results in the destruction of the subclass neighborhoods in the embedding space. The stopping criteria in this stage of training are the plateauing of the loss function and of the validation accuracy.
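The batch losses above can be sketched directly in numpy (`prototype_losses` and the batch layout are our illustrative names, not the paper's code); each prototype is labelled by the class of its nearest sample in the current batch, as in Eq. (2):

```python
import numpy as np

def prototype_losses(e, p, y):
    """r1, r2 from Eqs. (2)-(3) for one batch.

    e: (n_samples, dim) embedded samples; p: (n_proto, dim) prototypes;
    y: (n_samples,) integer class labels."""
    d2 = ((p[:, None, :] - e[None, :, :]) ** 2).sum(-1)   # (n_proto, n_samples)
    proto_labels = y[d2.argmin(axis=1)]                   # label of nearest sample
    # r2: pull every prototype toward its closest sample
    r2 = d2.min(axis=1).sum()
    # r1: pull every sample toward its closest same-class prototype
    r1 = 0.0
    for i in range(e.shape[0]):
        same = np.where(proto_labels == y[i])[0]
        if same.size:  # guard: a class may have no prototype in this batch
            r1 += d2[same, i].min()
    return r1, r2
```

A hand-checkable case: samples (0,0), (1,0) with label 0 and (10,0) with label 1, prototypes (0.1,0) and (9,0), give r1 = 0.01 + 0.81 + 1 = 1.82 and r2 = 0.01 + 1 = 1.01.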
In our experience, the latter happens sooner than the former. At this stage, each prototype is moved to its nearest sample in the training set. This sample is called the prototype sample, and its class label is used as the class label of that prototype. At this point, the network is ready for inference. An optional further step is to fine-tune the output layer using only the cross-entropy loss to compensate for the moving of the prototypes, but we saw no noticeable change in the output of the network even without this fine-tuning. A schema of the training procedures can be found in Figure 2.1. | The paper introduces a novel interpretable neural network for classification tasks that works by assigning input cases to prototypical cases based on their Euclidean distance in an embedding space learned by the network. This is achieved by minimizing a loss function consisting of three components: a cross-entropy loss; the distance between each case and the nearest prototype of the same class; and the distance between each prototype and the closest case to it. The proposed method is especially useful for fine-grained classification, where in addition to the class to which an instance belongs, predicting the subclass is desirable as well. Experimental results are provided that validate the merits of the method compared to a set of variations of the proposed method as well as a state-of-the-art approach. | SP:71be5d3799276330663155b5c9c04b4a8074e800
Manifold Micro-Surgery with Linearly Nearly Euclidean Metrics | 1 INTRODUCTION. In general relativity (Wald, 2010), a complete Riemannian manifold (M, g) endowed with a linearly nearly flat spacetime metric gij is considered for linearized gravity to solve the Newtonian limit. The form of this metric is gij = ηij + γij, where ηij is the flat Minkowski metric, whose background is special relativity, and γij is a small perturbation of ηij. An adequate definition of "smallness" in this context is that the components of γij are much smaller than 1 in some global inertial coordinate system of ηij. Now, let us step out of the physical world and consider a similar metric gij = δij + γij on a Riemannian n-manifold (Mn, g), i.e., the linearly nearly Euclidean metric, where δij is the flat Euclidean metric and γij is a small perturbation of δij. A natural problem for such a linearly nearly Euclidean metric is: how does the metric evolve over time under the Ricci flow while preserving the topological structure? Let us review some stability analyses of different manifolds under the Ricci flow. For a Riemannian n-dimensional manifold (Mn, g) that is isometric to Euclidean n-dimensional space (Rn, δ), Schnürer et al. (2007) have shown the stability of Euclidean space under the Ricci flow for a small C0 perturbation. Koch & Lamm (2012) have shown the stability of Euclidean space under the Ricci flow in the L∞-norm. Moreover, for the decay of the L∞-norm on Euclidean space, Appleton (2018) has given a proof by a different method. Concerning the stability of integrable and closed Ricci-flat metrics, Sesum (2006) has proved that the convergence rate is exponential because the spectrum of the Lichnerowicz operator is discrete. Furthermore, Deruelle et al.
(Deruelle & Kröncke, 2021) have proved that an asymptotically locally Euclidean Ricci-flat metric is dynamically stable under the Ricci flow for an L2 ∩ L∞ perturbation on non-flat, non-compact Ricci-flat manifolds. If we embed a Riemannian n-dimensional manifold in a neural network, then we are training the neural network on a dynamic manifold. The most famous method for training neural networks on manifolds is the natural gradient (Amari, 1998). However, for the Riemannian manifold corresponding to the KL divergence (which underlies the natural gradient), stability and convergence are still unknown (Martens, 2020). As a form of manifold evolution, the Ricci flow seems an excellent choice for ensuring that neural networks are trained on dynamic yet stable manifolds (Glass et al., 2020; Jejjala et al., 2020), but research on the relationship between the two has not yet sprouted. In this paper, we consider a complete Riemannian n-dimensional manifold (Mn, g) endowed with linearly nearly Euclidean metrics g(t) = δ + γ(t). First, we prove the stability of linearly nearly Euclidean manifolds under the Ricci-DeTurck flow in the L2-norm if the initial metrics are integrable and linearly stable, i.e., have a manifold structure of finite dimension. We mean that any Ricci-DeTurck flow which starts near g exists for all time and converges to a linearly nearly Euclidean metric near g. Note that we use the Einstein summation convention and denote generic constants by C or C1. Furthermore, we define and construct linearly nearly Euclidean manifolds based on information geometry and the mirror descent algorithm. Based on a symmetrized convex function, we obtain the linearly nearly Euclidean divergence, which is used to calculate the steepest descent gradient on linearly nearly Euclidean manifolds.
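To fix ideas, a generic steepest-descent step under a linearly nearly Euclidean metric g = I + γ can be sketched as follows. This is our illustrative stand-in for a metric-dependent update (the steepest-descent direction under a Riemannian metric g is −g⁻¹∇L), not the paper's divergence-based algorithm:

```python
import numpy as np

def lne_steepest_descent_step(theta, grad, gamma, lr=0.1):
    """One steepest-descent step of a loss under the metric g = I + gamma.

    Illustrative sketch: the steepest-descent direction under a Riemannian
    metric g is -g^{-1} grad.  For gamma = 0 this reduces to plain gradient
    descent, and for small ||gamma|| it stays close to the Euclidean step."""
    assert np.abs(gamma).max() < 1.0, "gamma should be small (nearly Euclidean)"
    g = np.eye(len(theta)) + gamma
    return theta - lr * np.linalg.solve(g, grad)
```

The "linearly nearly Euclidean" smallness condition on γ is exactly what keeps g invertible and the step well-conditioned.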
Experimentally, when we use the approximated steepest descent gradient flow to learn several neural networks on classification tasks, we observe that the evolution of its metric is consistent with the micro-surgery process under the Ricci-DeTurck flow. 2 RICCI FLOW. Let us introduce a partial differential equation, the Ricci flow, without further motivation. The concept of the Ricci flow was first published by Hamilton (Hamilton et al., 1982) for a time-dependent Riemannian metric g(t) on a manifold M with initial metric g0:

∂g(t)/∂t = −2 Ric(g(t)),   g(0) = g0,   (1)

where Ric denotes the Ricci curvature tensor, whose definition can be found in Appendix A. The purpose of the Ricci flow is to prove Thurston's Geometrization Conjecture and the Poincaré Conjecture, because the Ricci flow acts like a surgical scalpel, trimming irregular manifolds into regular manifolds to facilitate observation and discussion (Sheridan & Rubinstein, 2006). In general, in order to obtain good geometric and topological properties, we expect the metric to converge and become round with the help of the Ricci flow; "become round" means that the solution will not shrink to a point but converge to a constant circle. However, in most cases, we do not even know whether the solution converges or whether it develops a singularity. We discuss these issues next. 2.1 SHORT TIME EXISTENCE. To show that there exists a unique solution for a short time, we must check whether the system of the Ricci flow is strongly parabolic. Theorem 1 Let u : M × [0, T) → E be a time-dependent section of a vector bundle E, where M is some Riemannian manifold. If the system of the Ricci flow is strongly parabolic at u0, then there exists a solution on some time interval [0, T), and the solution is unique for as long as it exists. Proof. The proof can be found in (Ladyzhenskaia et al., 1988).
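As a concrete sanity check of Equation (1) (our worked example, not from the paper): on the round n-sphere of radius r0, Ric(g0) = ((n−1)/r0²) g0, and Ric is invariant under constant rescaling, so the ansatz g(t) = c(t) g0 solves the flow with c(t) = 1 − 2(n−1)t/r0². The radius therefore obeys r(t)² = r0² − 2(n−1)t, and the sphere collapses at T = r0²/(2(n−1)):

```python
import numpy as np

def sphere_radius(t, r0=1.0, n=3):
    """Radius of the round n-sphere under Ricci flow: r(t)^2 = r0^2 - 2(n-1)t.
    Follows from Ric(g0) = ((n-1)/r0^2) g0 and the scale invariance of Ric."""
    return np.sqrt(r0**2 - 2 * (n - 1) * t)

def collapse_time(r0=1.0, n=3):
    """Maximal existence time T = r0^2 / (2(n-1)), where r(T) = 0."""
    return r0**2 / (2 * (n - 1))
```

For the unit 3-sphere this gives T = 1/4, a simple example of a finite maximal time interval [0, T).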
Definition 1 The Ricci flow is strongly parabolic if there exists δ > 0 such that for all covectors ϕ ≠ 0 and all symmetric h_ij = ∂g_ij(t)/∂t ≠ 0, the principal symbol of −2 Ric satisfies

σ[−2 Ric](ϕ)(h)_ij h^ij = g^pq ( ϕ_p ϕ_q h_ij + ϕ_i ϕ_j h_pq − ϕ_q ϕ_i h_jp − ϕ_q ϕ_j h_ip ) h^ij > δ ϕ_k ϕ^k h_rs h^rs .

Since this inequality cannot always be satisfied, the Ricci flow is not strongly parabolic, so we cannot prove the existence of the solution via Theorem 1. It is possible to understand which parts are responsible for the non-parabolicity by linearizing the Ricci curvature tensor. Lemma 1 The linearization of −2 Ric can be rewritten as

D[−2 Ric](h)_ij = g^pq ∇_p ∇_q h_ij + ∇_i V_j + ∇_j V_i + O(h_ij),  where  V_i = g^pq ( (1/2) ∇_i h_pq − ∇_q h_pi ).   (2)

Proof. The proof can be found in Appendix B.1. The term O(h_ij) contributes nothing to the principal symbol of −2 Ric, so ignoring it does not affect our discussion. Observing the above equation carefully, one finds that the non-parabolicity of the Ricci flow comes from the terms in V, not from the term g^pq ∇_p ∇_q h_ij. The remedy is the DeTurck trick (DeTurck, 1983), which introduces a time-dependent reparameterization of the manifold:

∂ḡ(t)/∂t = −2 Ric(ḡ(t)) − L_{∂ϕ(t)/∂t} ḡ(t),   ḡ(0) = ḡ0 + d,   (3)

where d is a symmetric (0,2)-tensor on M; see Appendix B.2 for details. By choosing ∂ϕ(t)/∂t to cancel the effect of the terms in V, the reparameterized Ricci flow becomes strongly parabolic. Thus, the Ricci-DeTurck flow has a unique solution, the pullback metric, for a short time. 2.2 CURVATURE EXPLOSION AT SINGULARITY. In this subsection, we present the behavior of the Ricci flow in finite time and show that the curvature diverges as the singular time is approached. The core result is Theorem 4, which requires some preliminary results.
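As a hedged aside (our sketch of the standard form of the trick; signs, factors, and conventions vary between references): the reparameterization ϕ(t) is usually generated by a vector field built from the difference of the Christoffel symbols of g(t) and a fixed background metric, so that the gauge term cancels exactly the problematic V-terms.

```latex
% Sketch of the standard DeTurck gauge (conventions vary by reference).
% The diffeomorphisms \varphi(t) are generated by the vector field
W^k = g^{pq}\left( \Gamma^k_{pq}(g) - \Gamma^k_{pq}(\tilde g) \right),
\qquad
\partial_t g = -2\,\operatorname{Ric}(g) + \mathcal{L}_{W}\, g .
% At the linearized level the Lie-derivative term cancels
% \nabla_i V_j + \nabla_j V_i, leaving only the principal term
D(h)_{ij} = g^{pq} \nabla_p \nabla_q h_{ij} + O(h),
% so the gauged system is strongly parabolic.
```

With only the second-order term g^pq ∇_p ∇_q h_ij surviving, the principal symbol is positive definite and Theorem 1 applies.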
Theorem 2 Given a smooth Riemannian metric g0 on a closed manifold M, there exists a maximal time interval [0, T) on which a solution g(t) of the Ricci flow with g(0) = g0 exists and is smooth, and this solution is unique. Proof. The proof can be found in (Sheridan & Rubinstein, 2006). Theorem 3 Let M be a closed manifold and g(t) a smooth time-dependent metric on M, defined for t ∈ [0, T). If there exists a constant C < ∞ such that for all x ∈ M

∫₀ᵀ | ∂g_x(t)/∂t |_{g(t)} dt ≤ C,   (4)

then the metrics g(t) converge uniformly as t approaches T to a continuous metric g(T) that is uniformly equivalent to g(0) and satisfies e^{−C} g_x(0) ≤ g_x(T) ≤ e^{C} g_x(0). Proof. The proof can be found in Appendix B.3. Corollary 1 Let (M, g(t)) be a solution of the Ricci flow on a closed manifold. If |Rm|_{g(t)} is bounded on a finite time interval [0, T), then g(t) converges uniformly as t approaches T to a continuous metric g(T) which is uniformly equivalent to g(0). Proof. The bound on |Rm|_{g(t)} implies one on |Ric|_{g(t)}, which by Equation (1) gives a bound on |∂g(t)/∂t|_{g(t)}. The integral of a bounded quantity over a finite interval is bounded, so the claim follows from Theorem 3. Theorem 4 If g0 is a smooth metric on a compact manifold M, the Ricci flow with g(0) = g0 has a unique solution g(t) on a maximal time interval t ∈ [0, T). If T < ∞, then

lim_{t→T} ( sup_{x∈M} |Rm_x(t)| ) = ∞.   (5)

Proof. For a contradiction, assume that |Rm_x(t)| is bounded by a constant. It follows from Corollary 1 that the metrics g(t) converge uniformly, in the norm induced by g(t), to a smooth metric g(T). By Theorem 2, one can then solve the Ricci flow starting from g(T), because the smooth metric g(T) is uniformly equivalent to the initial metric g(0).
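The shrinking round sphere makes Theorem 4 concrete (again our worked example, not from the paper): all sectional curvatures of the round n-sphere equal 1/r(t)², so with r(t)² = r0² − 2(n−1)t the curvature norm grows like 1/(r0² − 2(n−1)t), which diverges exactly as t → T = r0²/(2(n−1)):

```python
def rm_norm_sphere(t, r0=1.0, n=3):
    """On the round n-sphere all sectional curvatures equal 1/r(t)^2, so the
    curvature norm grows like 1/(r0^2 - 2(n-1)t) up to a dimensional constant,
    diverging as t approaches the collapse time T = r0^2 / (2(n-1))."""
    return 1.0 / (r0**2 - 2 * (n - 1) * t)
```

For r0 = 1 and n = 3 the curvature is 1 at t = 0 and exceeds 10⁵ already at t = T − 10⁻⁶, matching the sup-blow-up in Equation (5).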
Hence, one can extend the solution of the Ricci flow past the time t = T, with continuous derivatives at t = T. This tells us that T was not the maximal time of existence of the Ricci flow, which contradicts our assumption. In other words, |Rm_x(t)| is unbounded. As the singular time T is approached, the Riemann curvature |Rm|_{g(t)} no longer stays bounded and tends to explode. | The paper concerns the Ricci flow and surgery to handle singularities occurring during the Ricci flow. The authors propose linearly nearly Euclidean metrics that they prove are stable under the Ricci-DeTurck flow. The authors use this to approximate steepest descent gradient flows in information geometry and state that they obtain a new method for geometric optimization on manifolds. | SP:21921e3032fd96cb1ed6c602cfe914ba15097c7f
Manifold Micro-Surgery with Linearly Nearly Euclidean Metrics | 1 INTRODUCTION. In general relativity (Wald, 2010), a complete Riemannian manifold (M, g) endowed with a linearly nearly flat spacetime metric gij is considered for linearized gravity to solve the Newtonian limit. The form of this metric is gij = ηij + γij, where ηij is the flat Minkowski metric, whose background is special relativity, and γij is a small perturbation of ηij. An adequate definition of "smallness" in this context is that the components of γij are much smaller than 1 in some global inertial coordinate system of ηij. Now, let us step out of the physical world and consider a similar metric gij = δij + γij on a Riemannian n-manifold (Mn, g), i.e., the linearly nearly Euclidean metric, where δij is the flat Euclidean metric and γij is a small perturbation of δij. A natural problem for such a linearly nearly Euclidean metric is: how does the metric evolve over time under the Ricci flow while preserving the topological structure? Let us review some stability analyses of different manifolds under the Ricci flow. For a Riemannian n-dimensional manifold (Mn, g) that is isometric to Euclidean n-dimensional space (Rn, δ), Schnürer et al. (2007) have shown the stability of Euclidean space under the Ricci flow for a small C0 perturbation. Koch & Lamm (2012) have shown the stability of Euclidean space under the Ricci flow in the L∞-norm. Moreover, for the decay of the L∞-norm on Euclidean space, Appleton (2018) has given a proof by a different method. Concerning the stability of integrable and closed Ricci-flat metrics, Sesum (2006) has proved that the convergence rate is exponential because the spectrum of the Lichnerowicz operator is discrete. Furthermore, Deruelle et al.
(Deruelle & Kröncke, 2021) have proved that an asymptotically locally Euclidean Ricci-flat metric is dynamically stable under the Ricci flow for an L2 ∩ L∞ perturbation on non-flat, non-compact Ricci-flat manifolds. If we embed a Riemannian n-dimensional manifold in a neural network, then we are training the neural network on a dynamic manifold. The most famous method for training neural networks on manifolds is the natural gradient (Amari, 1998). However, for the Riemannian manifold corresponding to the KL divergence (which underlies the natural gradient), stability and convergence are still unknown (Martens, 2020). As a form of manifold evolution, the Ricci flow seems an excellent choice for ensuring that neural networks are trained on dynamic yet stable manifolds (Glass et al., 2020; Jejjala et al., 2020), but research on the relationship between the two has not yet sprouted. In this paper, we consider a complete Riemannian n-dimensional manifold (Mn, g) endowed with linearly nearly Euclidean metrics g(t) = δ + γ(t). First, we prove the stability of linearly nearly Euclidean manifolds under the Ricci-DeTurck flow in the L2-norm if the initial metrics are integrable and linearly stable, i.e., have a manifold structure of finite dimension. We mean that any Ricci-DeTurck flow which starts near g exists for all time and converges to a linearly nearly Euclidean metric near g. Note that we use the Einstein summation convention and denote generic constants by C or C1. Furthermore, we define and construct linearly nearly Euclidean manifolds based on information geometry and the mirror descent algorithm. Based on a symmetrized convex function, we obtain the linearly nearly Euclidean divergence, which is used to calculate the steepest descent gradient on linearly nearly Euclidean manifolds.
Experimentally , when we use the approximated steepest descent gradient flow to learn several neural networks on classification tasks , we observe that the evolution of its metric is consistent with the micro-surgery process under the Ricci-DeTurck flow . 2 RICCI FLOW . Let us introduce a partial differential equation , the Ricci flow , without further explanation . The concept of the Ricci flow was first published by Hamilton ( Hamilton et al. , 1982 ) on a manifold M with a time-dependent Riemannian metric g ( t ) and initial metric g0 :
$$\frac{\partial}{\partial t} g ( t ) = -2 \operatorname{Ric} ( g ( t ) ) , \qquad g ( 0 ) = g_0 , \tag{1}$$
where Ric denotes the Ricci curvature tensor , whose definition can be found in Appendix A . The purpose of the Ricci flow is to prove Thurston ’ s Geometrization Conjecture and the Poincaré Conjecture , because the Ricci flow acts like a surgical scalpel , trimming irregular manifolds into regular manifolds to facilitate observation and discussion ( Sheridan & Rubinstein , 2006 ) . In general , in order to possess good geometric and topological properties , we expect the metric to converge and become round with the help of the Ricci flow . “ Become round ” means that the solution will not shrink to a point but converge to a round metric of constant curvature . However , in most cases , we do not even know whether the solution converges or whether it develops a singularity . Next , we briefly discuss these issues . 2.1 SHORT TIME EXISTENCE . To show that there exists a unique solution for a short time , we must check whether the system of the Ricci flow is strongly parabolic . Theorem 1 Let u : M × [ 0 , T ) → E be a time-dependent section of the vector bundle E , where M is some Riemannian manifold . If the system of the Ricci flow is strongly parabolic at u0 , then there exists a solution on some time interval [ 0 , T ) , and the solution is unique for as long as it exists . Proof . The proof can be found in ( Ladyzhenskaia et al. , 1988 ) .
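As a toy illustration of Equation ( 1 ) ( our own example , not from the paper ) , the round n-sphere evolves by pure shrinking : writing g ( t ) = r ( t )² g_{S^n(1)} with Ric = ( n − 1 ) r⁻² g , the flow reduces to the ODE d ( r² ) / dt = −2 ( n − 1 ) , so the solution exists only up to a finite time T .

```python
# Toy closed-form instance of the Ricci flow (our example, not from the paper):
# on the round n-sphere, g(t) = r(t)^2 * g_{S^n(1)} and Ric = (n - 1) / r^2 * g,
# so Eq. (1) reduces to the ODE d(r^2)/dt = -2 (n - 1), giving
# r^2(t) = r0^2 - 2 (n - 1) t and a finite maximal existence time T.

def shrinking_sphere(n, r0_sq):
    """Return the singular time T and the curvature profile t -> 1 / r^2(t)."""
    T = r0_sq / (2.0 * (n - 1))          # r^2(T) = 0: the sphere shrinks to a point
    def curvature(t):
        r_sq = r0_sq - 2.0 * (n - 1) * t
        if r_sq <= 0:
            raise ValueError("the flow no longer exists at this time")
        return 1.0 / r_sq                # sectional curvature of a radius-r sphere
    return T, curvature

T, K = shrinking_sphere(n=3, r0_sq=1.0)  # unit 3-sphere: T = 0.25
```

As t approaches T the curvature 1 / r² ( t ) diverges , a concrete preview of the curvature explosion discussed in Section 2.2 ; the paper itself works with linearly nearly Euclidean metrics rather than spheres .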
Definition 1 The Ricci flow is strongly parabolic if there exists δ > 0 such that for all covectors ϕ ≠ 0 and all symmetric h_{ij} = ∂g_{ij} ( t ) / ∂t ≠ 0 , the principal symbol of −2 Ric satisfies
$$\sigma [ -2 \operatorname{Ric} ] ( \phi ) ( h ) _{ij} h^{ij} = g^{pq} \left( \phi_p \phi_q h_{ij} + \phi_i \phi_j h_{pq} - \phi_q \phi_i h_{jp} - \phi_q \phi_j h_{ip} \right) h^{ij} > \delta \, \phi_k \phi^k h_{rs} h^{rs} .$$
Since the inequality can not always be satisfied , the Ricci flow is not strongly parabolic , which prevents us from proving the existence of the solution based on Theorem 1 . It is possible to understand which parts are responsible for its non-parabolicity through the linearization of the Ricci curvature tensor . Lemma 1 The linearization of −2 Ric can be rewritten as
$$D [ -2 \operatorname{Ric} ] ( h ) _{ij} = g^{pq} \nabla_p \nabla_q h_{ij} + \nabla_i V_j + \nabla_j V_i + O ( h_{ij} ) , \quad \text{where } V_i = g^{pq} \left( \tfrac{1}{2} \nabla_i h_{pq} - \nabla_q h_{pi} \right) . \tag{2}$$
Proof . The proof can be found in Appendix B.1 . The term O ( h_{ij} ) makes no contribution to the principal symbol of −2 Ric , so ignoring it does not affect our discussion of this problem . By carefully observing the above equation , one finds that the non-parabolicity of the Ricci flow comes from the terms in V , not from the term g^{pq} ∇_p ∇_q h_{ij} . The remedy is the DeTurck trick ( DeTurck , 1983 ) , which introduces a time-dependent reparameterization of the manifold :
$$\frac{\partial}{\partial t} \bar{g} ( t ) = -2 \operatorname{Ric} ( \bar{g} ( t ) ) - \mathcal{L}_{\frac{\partial \varphi ( t )}{\partial t}} \bar{g} ( t ) , \qquad \bar{g} ( 0 ) = \bar{g}_0 + d , \tag{3}$$
where d is a symmetric ( 0,2 ) -tensor on M and L denotes the Lie derivative . See Appendix B.2 for details . By choosing ∂ϕ ( t ) / ∂t to cancel the effect of the terms in V , the reparameterized Ricci flow is strongly parabolic . Thus , one can say that the Ricci-DeTurck flow has a unique solution , the pullback metric , for a short time . 2.2 CURVATURE EXPLOSION AT SINGULARITY . In this subsection , we present the behavior of the Ricci flow in finite time and show that the evolution of the curvature approaches divergence . The core demonstration follows in Theorem 4 , which requires some other proofs as a foreshadowing .
Theorem 2 Given a smooth Riemannian metric g0 on a closed manifold M , there exists a maximal time interval [ 0 , T ) such that a solution g ( t ) of the Ricci flow , with g ( 0 ) = g0 , exists and is smooth on [ 0 , T ) , and this solution is unique . Proof . The proof can be found in ( Sheridan & Rubinstein , 2006 ) . Theorem 3 Let M be a closed manifold and g ( t ) a smooth time-dependent metric on M , defined for t ∈ [ 0 , T ) . If there exists a constant C < ∞ such that for all x ∈ M
$$\int_0^T \left| \frac{\partial}{\partial t} g_x ( t ) \right|_{g ( t )} dt \le C , \tag{4}$$
then the metrics g ( t ) converge uniformly as t approaches T to a continuous metric g ( T ) that is uniformly equivalent to g ( 0 ) and satisfies
$$e^{-C} g_x ( 0 ) \le g_x ( T ) \le e^{C} g_x ( 0 ) .$$
Proof . The proof can be found in Appendix B.3 . Corollary 1 Let ( M , g ( t ) ) be a solution of the Ricci flow on a closed manifold . If |Rm|_{g ( t )} is bounded on a finite time interval [ 0 , T ) , then g ( t ) converges uniformly as t approaches T to a continuous metric g ( T ) which is uniformly equivalent to g ( 0 ) . Proof . The bound on |Rm|_{g ( t )} implies one on |Ric|_{g ( t )} . Based on Equation ( 1 ) , we can extend the bound to | ∂g ( t ) / ∂t |_{g ( t )} . Therefore , the integral of a bounded quantity over a finite interval is also bounded , and the claim follows by Theorem 3 . Theorem 4 If g0 is a smooth metric on a compact manifold M , the Ricci flow with g ( 0 ) = g0 has a unique solution g ( t ) on a maximal time interval t ∈ [ 0 , T ) . If T < ∞ , then
$$\lim_{t \to T} \left( \sup_{x \in \mathcal{M}} | \operatorname{Rm}_x ( t ) | \right) = \infty . \tag{5}$$
Proof . For a contradiction , assume that |Rm_x ( t ) | is bounded by a constant . It follows from Corollary 1 that the metrics g ( t ) converge uniformly in the norm induced by g ( t ) to a smooth metric g ( T ) . Based on Theorem 2 , it is possible to find a solution of the Ricci flow starting from g ( T ) , since the smooth metric g ( T ) is uniformly equivalent to the initial metric g ( 0 ) .
Hence , one can extend the solution of the Ricci flow past the time t = T , which follows from the continuity of the derivatives at t = T . This tells us that the time T of existence of the Ricci flow was not maximal , which contradicts our assumption . In other words , |Rm_x ( t ) | is unbounded . As the singular time T is approached , the Riemann curvature |Rm|_{g ( t )} no longer converges and tends to explode . | This paper investigates linearly nearly Euclidean metrics on Riemannian manifolds under the Ricci flow. Analysis, focusing on the stability and convergence of the evolution of such metrics, is presented. In addition, the utility of the analysis for measuring the approximation of gradient flow in the context of training neural networks is demonstrated. | SP:21921e3032fd96cb1ed6c602cfe914ba15097c7f
Resolving Training Biases via Influence-based Data Relabeling | 1 INTRODUCTION . Training data plays a crucial role in determining the model ’ s final performance . It has been well recognized that the training bias issue can compromise model performance to a large extent ( Arpit et al. , 2017 ) . Specifically , there are two major scenarios where training biases show up . The first and most common scenario is that training samples involve corrupted labels that can originate at virtually any step of the data lifecycle ( Anderson & McGrew , 2017 ; Dolatshah et al. , 2018 ; Pei et al. , 2020 ; Yu & Qin , 2020 ) . The second scenario is that the training and test sets are sampled from the respective distributions Ptrain ( x , y ) and Ptest ( x , y ) , but Ptrain is different from Ptest ( Guo et al. , 2020 ; He & Garcia , 2009 ; Zou et al. , 2019 ) . Both corrupted labels and distribution mismatch hurt the generalization ability of a trained model ( Fang et al. , 2020 ; Zhang et al. , 2017 ) . We generally refer to training samples with corrupted labels or those inducing distribution mismatch as harmful samples . Data resampling is a widely used strategy to deal with harmful training samples . Existing resampling approaches ( Chawla et al. , 2002 ; Mahmoody et al. , 2016 ; Malach & Shalev-Shwartz , 2017 ; Ren et al. , 2018 ) propose to assign different weights to training samples , aiming to mitigate the negative impacts of harmful samples on the model ’ s generalization ability . Among them , most resampling approaches ( Arazo et al. , 2019 ; Han et al. , 2018 ; Li et al. , 2020 ; Wang et al. , 2020a ) rely on the training loss to identify harmful samples from the whole training set . They follow the insight that samples with higher training losses are very likely to have corrupted labels , and hence it is often beneficial to downweight them during model training .
However , such loss-based resampling methods have two limitations . First , they can only deal with the training biases caused by training samples with corrupted labels ( aka noisy samples ) . Second , the small-loss trick typically holds true for deep models but not for arbitrary predictive models ( Zhang et al. , 2017 ) . To address these limitations , one recent work ( Wang et al. , 2020b ) proposes a new resampling scheme based on influence functions ( Cook & Weisberg , 1980 ) . The idea is to estimate the influence of each training sample on the model ’ s predictions over the test set . Any training sample that would cause an increase in the test loss is considered harmful and is downweighted afterwards . It is worth mentioning that influence functions have been shown to deal with both forms of training biases effectively , and they are agnostic to the specific model or data type ( Koh & Liang , 2017 ; Koh et al. , 2019 ) . Inspired by the success of influence-based data resampling , in this paper , we ask the following question : what would happen if we relabeled harmful training data based on influence analysis results ? Our motivations for performing data relabeling via influence analysis are twofold . ( i ) Relabeling noisy samples prevents the model from memorizing the corrupted labels . ( ii ) Relabeling clean but biased samples helps improve the model ’ s robustness to harmful samples . Despite the potential benefits of data relabeling , it is still challenging to develop an influence-based relabeling approach that has a theoretical guarantee on the model ’ s performance improvement after training with relabeled data . To answer the question , we first follow ( Koh et al. , 2019 ) to measure the influence of each training sample on the model ’ s predictions and identify the harmful training samples that would cause an increase in the test loss .
Next , we investigate whether relabeling the identified harmful samples , rather than discarding them , can improve the test performance . To achieve this , we start from binary classification tasks , where relabeling a training sample means converting its binary label from y to 1 − y . We theoretically prove that relabeling harmful training data via influence analysis can achieve lower test loss than simply discarding them for binary classification . Furthermore , we design a novel relabeling function R for multi-class classification tasks and prove that the advantage of relabeling the identified harmful samples using R in reducing the model ’ s test loss still holds . Following the influence-based resampling approaches ( Wang et al. , 2018 ; Ting & Brochu , 2018 ; Ren et al. , 2020 ; Wang et al. , 2020b ) , we only use the test loss for theoretical analysis and empirically calculate the influence function with a small but unbiased validation set , assuming the validation set is sampled from the same distribution as the test set . In this way , using the validation loss to calculate the influence function yields an unbiased estimate of the true influence function . Otherwise , the problem would fall into the category of transfer learning , which is beyond the scope of this work . To summarize , this work makes the following contributions . First , we propose to combine influence functions with data relabeling for reducing training biases , and we develop an end-to-end influence-based relabeling framework named RDIA that reuses harmful training samples toward better model performance . Second , we design a novel relabeling function R and theoretically prove that applying R to harmful training samples identified by influence functions achieves lower test loss for any classification task using the cross-entropy loss function . Third , we conduct extensive experiments on real datasets in different domains .
The results demonstrate that ( i ) RDIA is effective in reusing harmful training samples toward higher model performance , surpassing the existing influence-based resampling approaches , and ( ii ) RDIA improves the model ’ s robustness to label noise , outperforming the current resampling methods by large margins . 2 BACKGROUND . Let $D = \{ ( x_i , y_i ) \in \mathcal{X} \times \mathcal{Y} \mid 1 \le i \le N \}$ be the training set , sampled from Ptrain ( x , y ) . Let $z_i = ( x_i , y_i )$ , where $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}^K$ . Let $\varphi ( x , \theta )$ be a model ’ s prediction for x , where $\theta \in \mathbb{R}^p$ is the parameter set of the model . We denote the loss of sample $z_i$ by $l ( z_i , \theta ) = L ( y_i , \varphi ( x_i , \theta ) )$ and use $l_i ( \theta )$ to represent $l ( z_i , \theta )$ . We consider standard empirical risk minimization ( ERM ) as the optimization objective . Formally , the empirical risk over D is defined as $\mathcal{L} ( D ; \theta ) = \frac{1}{N} \sum_{i=1}^{N} l_i ( \theta )$ . Since our relabeling function depends on the loss function , we focus on the most effective and versatile loss for classification tasks , i.e. , the cross-entropy loss . Influence functions . Influence functions , stemming from robust statistics ( Huber , 2004 ) , provide an efficient way to estimate how a small perturbation of a training sample would change the model ’ s predictions ( Koh & Liang , 2017 ; Koh et al. , 2019 ; Yu et al. , 2020 ) . Let $\hat{\theta} = \arg\min_{\theta} \frac{1}{N} \sum_{n=1}^{N} l_n ( \theta )$ be the optimal model parameters on convergence . When upweighting a training sample $z_i$ on its loss term by an infinitesimal step $\epsilon_i$ , we obtain the new optimal parameters $\hat{\theta}_{\epsilon_i}$ on convergence as $\hat{\theta}_{\epsilon_i} = \arg\min_{\theta} \frac{1}{N} \sum_{n=1}^{N} l_n ( \theta ) + \epsilon_i l_i ( \theta )$ .
Based on influence functions ( Cook & Weisberg , 1980 ; Koh & Liang , 2017 ) , we have the following closed-form expression for the change in model parameters when upweighting $z_i$ by $\epsilon_i$ :
$$\psi_{\theta} ( z_i ) \triangleq \left. \frac{d \hat{\theta}_{\epsilon_i}}{d \epsilon_i} \right|_{\epsilon_i = 0} = - H_{\hat{\theta}}^{-1} \nabla_{\theta} l_i ( \hat{\theta} ) , \tag{1}$$
where $H_{\hat{\theta}} \triangleq \frac{1}{N} \sum_{n=1}^{N} \nabla_{\theta}^{2} l_n ( \hat{\theta} )$ is the Hessian matrix and $\nabla_{\theta}^{2} l_n ( \theta )$ is the second derivative of the loss at training point $z_n$ with respect to θ . Using the chain rule , we can estimate the change of the model ’ s prediction at a test point $z_j^c$ sampled from the given test distribution Ptest ( Koh & Liang , 2017 ) :
$$\Phi_{\theta} ( z_i , z_j^c ) \triangleq \left. \frac{d l_j ( \hat{\theta}_{\epsilon_i} )}{d \epsilon_i} \right|_{\epsilon_i = 0} = - \nabla_{\theta} l_j ( \hat{\theta} ) H_{\hat{\theta}}^{-1} \nabla_{\theta} l_i ( \hat{\theta} ) . \tag{2}$$
At a fine-grained level , we can measure the influence of perturbing training sample $z_i$ from $( x_i , y_i )$ to $( x_i , y_i + \delta )$ . Let $z_{i\delta} = ( x_i , y_i + \delta )$ and the new loss $l_i ( z_{i\delta} , \theta ) = L ( y_i + \delta , \varphi ( x_i , \theta ) )$ . According to ( Koh & Liang , 2017 ) , the optimal parameters $\hat{\theta}_{\epsilon_i \delta}$ after performing the perturbation on $z_i$ are $\hat{\theta}_{\epsilon_i \delta} = \arg\min_{\theta} \frac{1}{N} \sum_{n=1}^{N} l_n ( \theta ) + \epsilon_i l_i ( z_{i\delta} , \theta ) - \epsilon_i l_i ( \theta )$ . This allows us to estimate the change in model parameters after the fine-grained data perturbation using influence functions as :
$$\left. \frac{d \hat{\theta}_{\epsilon_i \delta}}{d \epsilon_i} \right|_{\epsilon_i = 0} = \psi_{\theta} ( z_{i\delta} ) - \psi_{\theta} ( z_i ) = - H_{\hat{\theta}}^{-1} \left( \nabla_{\theta} l_i ( z_{i\delta} , \hat{\theta} ) - \nabla_{\theta} l_i ( \hat{\theta} ) \right) . \tag{3}$$
Further , the influence of perturbing $z_i$ by $z_{i\delta}$ on the model ’ s prediction at test sample $z_j^c$ is the following :
$$\eta_{\theta \delta} ( z_i , z_j^c ) \triangleq \left. \frac{d l_j ( \hat{\theta}_{\epsilon_i \delta} )}{d \epsilon_i} \right|_{\epsilon_i = 0} = - \nabla_{\theta} l_j ( \hat{\theta} ) H_{\hat{\theta}}^{-1} \left( \nabla_{\theta} l_i ( z_{i\delta} , \hat{\theta} ) - \nabla_{\theta} l_i ( \hat{\theta} ) \right) . \tag{4}$$
It is important to notice that Eq . ( 4 ) holds for arbitrary δ as $\epsilon_i$ approaches 0 . This makes it feasible to measure how relabeling a training sample could influence the model ’ s predictions . Influence-based resampling approaches . Previous studies ( Koh & Liang , 2017 ; Wang et al. , 2020b ) have shown that influence functions have a strong ability to identify harmful samples from the whole training set , in a way that is agnostic to the specific model or data structure .
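As a numerical sanity check of Eqs . ( 1 ) - ( 2 ) ( our own sketch , not the authors ' code ) , the influence estimate $\epsilon \, \Phi_{\theta} ( z_i , z_j^c )$ can be compared against the actual change in the test loss after upweighting one sample by $\epsilon$ and refitting . The L2-regularized logistic regression , the synthetic data , and all constants below are our assumptions ; the ridge term merely keeps the Hessian invertible .

```python
# Sanity check of Eqs. (1)-(2) (our sketch, not the authors' code): compare the
# influence estimate eps * Phi_theta(z_i, z_j^c) with the actual change in the
# test loss after upweighting sample i by eps and refitting. The L2-regularized
# logistic regression, synthetic data, and all constants here are assumptions;
# the ridge term lam keeps the Hessian H invertible.
import numpy as np

rng = np.random.default_rng(0)
N, d, lam = 50, 3, 0.1
X = rng.normal(size=(N, d))
theta_star = np.array([1.0, -2.0, 0.5])
y = (rng.uniform(size=N) < 1.0 / (1.0 + np.exp(-X @ theta_star))).astype(float)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def loss(theta, x, label):          # cross-entropy l(z, theta)
    p = sigmoid(x @ theta)
    return -(label * np.log(p) + (1.0 - label) * np.log(1.0 - p))

def grad(theta, x, label):          # nabla_theta l(z, theta)
    return (sigmoid(x @ theta) - label) * x

def fit(sample_weights):            # Newton's method on the weighted, ridge-penalized ERM
    theta = np.zeros(d)
    for _ in range(50):
        p = sigmoid(X @ theta)
        g = X.T @ (sample_weights * (p - y)) / N + lam * theta
        H = (X * (sample_weights * p * (1.0 - p))[:, None]).T @ X / N + lam * np.eye(d)
        theta = theta - np.linalg.solve(H, g)
    return theta

theta_hat = fit(np.ones(N))
p_hat = sigmoid(X @ theta_hat)
H_hat = (X * (p_hat * (1.0 - p_hat))[:, None]).T @ X / N + lam * np.eye(d)

x_test, y_test = rng.normal(size=d), 1.0      # one test point z_j^c (made up)
i = 0                                         # training sample to upweight
phi = -grad(theta_hat, x_test, y_test) @ np.linalg.solve(H_hat, grad(theta_hat, X[i], y[i]))

eps = 1e-4
w = np.ones(N); w[i] += eps * N               # adds eps * l_i to the (1/N)-scaled objective
delta = loss(fit(w), x_test, y_test) - loss(theta_hat, x_test, y_test)
# delta / eps should be close to phi (first-order influence approximation)
```

Here the Hessian used in the influence formula includes the ridge term , since the fitted optimum minimizes the penalized objective .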
Inspired by this , many influence-based resampling approaches ( Ting & Brochu , 2018 ; Wang et al. , 2018 ; 2020b ) have proposed to discard or downweight the identified harmful samples to reduce the test loss . However , different from previous works , which focus on estimating the influence of each training sample on the test performance using Eq . ( 1 ) - ( 2 ) , we perform a fine-grained perturbation on a training sample ’ s label and evaluate its influence using Eq . ( 3 ) - ( 4 ) . Further , our work builds an end-to-end influence-based relabeling framework to reuse the harmful samples , with a theoretical guarantee on the final model performance for any classification task . To be specific , we demonstrate that harmful training instances , after being relabeled properly , do contribute to improving the final model performance , which provides a novel viewpoint on handling biased training data . | This paper proposes a data relabeling technique using influence function. The authors first derived the influence function for data relabeling as follows. $$ \eta_{\theta \delta}\left(z\_i, z\_j^{c}\right) \left. 
\triangleq \frac{d l\_j^c \left(\hat{\theta}\_{\epsilon\_i \delta\_j}\right)}{d \epsilon\_i}\right|_{\epsilon\_i=0} =-\nabla\_{\theta} l\_j^c(\hat{\theta}) H\_{\hat{\theta}}^{-1}\left(\nabla\_\theta l\_i\left(z\_{i \delta}, \hat{\theta}\right)-\nabla\_\theta l\_i(\hat{\theta})\right) $$ The authors then derived the specific expression for the cross-entropy loss: $$ \eta_{\theta R}\left(z_{i}, z_{j}^{c}\right)=-\nabla_{\theta} l_{j}(\hat{\theta}) H_{\hat{\theta}}^{-1} w\left(z_{i}, \hat{\theta}\right)=-\nabla_{\theta} l_{j}(\hat{\theta}) H_{\hat{\theta}}^{-1}\left(-\frac{\nabla_{\theta} l_{i}(\hat{\theta})}{1-\varphi\left(x_{i}, \hat{\theta}\right)}\right)=\frac{-\Phi_{\theta}\left(z_{i}, z_{j}^{c}\right)}{1-\varphi\left(x_{i}, \hat{\theta}\right)} $$ The authors experimentally demonstrated that data relabeling is more effective than the standard sampling/data reweighting-based approach for model improvement. | SP:508e62e7f8cd002f2ee01c2ba032ce79a1f3a469
Resolving Training Biases via Influence-based Data Relabeling | 1 INTRODUCTION . Training data plays a crucial role in determining the model ’ s final performance . It has been well recognized that the training bias issue can compromise model performance to a large extent ( Arpit et al. , 2017 ) . Specifically , there are two major scenarios where training biases show up . The first and most common scenario is that training samples involve corrupted labels that can originate at virtually any step of the data lifecycle ( Anderson & McGrew , 2017 ; Dolatshah et al. , 2018 ; Pei et al. , 2020 ; Yu & Qin , 2020 ) . The second scenario is that the training and test sets are sampled from the respective distributions Ptrain ( x , y ) and Ptest ( x , y ) , but Ptrain is different from Ptest ( Guo et al. , 2020 ; He & Garcia , 2009 ; Zou et al. , 2019 ) . Both corrupted labels and distribution mismatch hurt the generalization ability of a trained model ( Fang et al. , 2020 ; Zhang et al. , 2017 ) . We generally refer to training samples with corrupted labels or those inducing distribution mismatch as harmful samples . Data resampling is a widely used strategy to deal with harmful training samples . Existing resampling approaches ( Chawla et al. , 2002 ; Mahmoody et al. , 2016 ; Malach & Shalev-Shwartz , 2017 ; Ren et al. , 2018 ) propose to assign different weights to training samples , aiming to mitigate the negative impacts of harmful samples on the model ’ s generalization ability . Among them , most resampling approaches ( Arazo et al. , 2019 ; Han et al. , 2018 ; Li et al. , 2020 ; Wang et al. , 2020a ) rely on the training loss to identify harmful samples from the whole training set . They follow the insight that samples with higher training losses are very likely to have corrupted labels , and hence it is often beneficial to downweight them during model training .
However , such loss-based resampling methods have two limitations . First , they can only deal with the training biases caused by training samples with corrupted labels ( aka noisy samples ) . Second , the small-loss trick typically holds true for deep models but not for arbitrary predictive models ( Zhang et al. , 2017 ) . To address these limitations , one recent work ( Wang et al. , 2020b ) proposes a new resampling scheme based on influence functions ( Cook & Weisberg , 1980 ) . The idea is to estimate the influence of each training sample on the model ’ s predictions over the test set . Any training sample that would cause an increase in the test loss is considered harmful and is downweighted afterwards . It is worth mentioning that influence functions have been shown to deal with both forms of training biases effectively , and they are agnostic to the specific model or data type ( Koh & Liang , 2017 ; Koh et al. , 2019 ) . Inspired by the success of influence-based data resampling , in this paper , we ask the following question : what would happen if we relabeled harmful training data based on influence analysis results ? Our motivations for performing data relabeling via influence analysis are twofold . ( i ) Relabeling noisy samples prevents the model from memorizing the corrupted labels . ( ii ) Relabeling clean but biased samples helps improve the model ’ s robustness to harmful samples . Despite the potential benefits of data relabeling , it is still challenging to develop an influence-based relabeling approach that has a theoretical guarantee on the model ’ s performance improvement after training with relabeled data . To answer the question , we first follow ( Koh et al. , 2019 ) to measure the influence of each training sample on the model ’ s predictions and identify the harmful training samples that would cause an increase in the test loss .
Next , we investigate whether relabeling the identified harmful samples , rather than discarding them , can improve the test performance . To achieve this , we start from binary classification tasks , where relabeling a training sample means converting its binary label from y to 1 − y . We theoretically prove that relabeling harmful training data via influence analysis can achieve lower test loss than simply discarding them for binary classification . Furthermore , we design a novel relabeling function R for multi-class classification tasks and prove that the advantage of relabeling the identified harmful samples using R in reducing the model ’ s test loss still holds . Following the influence-based resampling approaches ( Wang et al. , 2018 ; Ting & Brochu , 2018 ; Ren et al. , 2020 ; Wang et al. , 2020b ) , we only use the test loss for theoretical analysis and empirically calculate the influence function with a small but unbiased validation set , assuming the validation set is sampled from the same distribution as the test set . In this way , using the validation loss to calculate the influence function yields an unbiased estimate of the true influence function . Otherwise , the problem would fall into the category of transfer learning , which is beyond the scope of this work . To summarize , this work makes the following contributions . First , we propose to combine influence functions with data relabeling for reducing training biases , and we develop an end-to-end influence-based relabeling framework named RDIA that reuses harmful training samples toward better model performance . Second , we design a novel relabeling function R and theoretically prove that applying R to harmful training samples identified by influence functions achieves lower test loss for any classification task using the cross-entropy loss function . Third , we conduct extensive experiments on real datasets in different domains .
The results demonstrate that ( i ) RDIA is effective in reusing harmful training samples toward higher model performance , surpassing the existing influence-based resampling approaches , and ( ii ) RDIA improves the model ’ s robustness to label noise , outperforming the current resampling methods by large margins . 2 BACKGROUND . Let $D = \{ ( x_i , y_i ) \in \mathcal{X} \times \mathcal{Y} \mid 1 \le i \le N \}$ be the training set , sampled from Ptrain ( x , y ) . Let $z_i = ( x_i , y_i )$ , where $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}^K$ . Let $\varphi ( x , \theta )$ be a model ’ s prediction for x , where $\theta \in \mathbb{R}^p$ is the parameter set of the model . We denote the loss of sample $z_i$ by $l ( z_i , \theta ) = L ( y_i , \varphi ( x_i , \theta ) )$ and use $l_i ( \theta )$ to represent $l ( z_i , \theta )$ . We consider standard empirical risk minimization ( ERM ) as the optimization objective . Formally , the empirical risk over D is defined as $\mathcal{L} ( D ; \theta ) = \frac{1}{N} \sum_{i=1}^{N} l_i ( \theta )$ . Since our relabeling function depends on the loss function , we focus on the most effective and versatile loss for classification tasks , i.e. , the cross-entropy loss . Influence functions . Influence functions , stemming from robust statistics ( Huber , 2004 ) , provide an efficient way to estimate how a small perturbation of a training sample would change the model ’ s predictions ( Koh & Liang , 2017 ; Koh et al. , 2019 ; Yu et al. , 2020 ) . Let $\hat{\theta} = \arg\min_{\theta} \frac{1}{N} \sum_{n=1}^{N} l_n ( \theta )$ be the optimal model parameters on convergence . When upweighting a training sample $z_i$ on its loss term by an infinitesimal step $\epsilon_i$ , we obtain the new optimal parameters $\hat{\theta}_{\epsilon_i}$ on convergence as $\hat{\theta}_{\epsilon_i} = \arg\min_{\theta} \frac{1}{N} \sum_{n=1}^{N} l_n ( \theta ) + \epsilon_i l_i ( \theta )$ .
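The derivative of the upweighted optimum just defined at $\epsilon_i = 0$ is given by the classical influence-function formula $- H^{-1} \nabla_{\theta} l_i ( \hat{\theta} )$ . For ridge-regularized linear least squares ( our choice here , because every quantity has a closed form ; the data and penalty below are made up ) this can be verified exactly :

```python
# Closed-form check (our sketch; ridge-regularized least squares is our choice
# because every quantity is available in closed form): the derivative of the
# upweighted optimum theta_hat(eps) at eps = 0 equals -H^{-1} grad l_i, the
# influence-function formula. All data and the penalty lam are made up.
import numpy as np

rng = np.random.default_rng(3)
N, d, lam = 30, 2, 0.5
X = rng.normal(size=(N, d))
y = X @ np.array([1.5, -0.5]) + 0.1 * rng.normal(size=N)

def solve(eps, i):
    """Minimize (1/N) sum_n l_n + eps * l_i + (lam/2)||theta||^2, l_n = 0.5 (x_n . theta - y_n)^2."""
    w = np.ones(N); w[i] += eps * N
    A = (X * w[:, None]).T @ X / N + lam * np.eye(d)
    b = X.T @ (w * y) / N
    return np.linalg.solve(A, b)

i = 0
theta_hat = solve(0.0, i)
H = X.T @ X / N + lam * np.eye(d)              # Hessian of the penalized objective
grad_i = (X[i] @ theta_hat - y[i]) * X[i]      # nabla_theta l_i at theta_hat
psi = -np.linalg.solve(H, grad_i)              # influence-function prediction

eps = 1e-6
fd = (solve(eps, i) - theta_hat) / eps         # finite-difference derivative
# fd matches psi up to O(eps)
```

The same first-order agreement is what the closed-form expressions in the next equations exploit for general losses .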
Based on influence functions ( Cook & Weisberg , 1980 ; Koh & Liang , 2017 ) , we have the following closed-form expression for the change in model parameters when upweighting $z_i$ by $\epsilon_i$ :
$$\psi_{\theta} ( z_i ) \triangleq \left. \frac{d \hat{\theta}_{\epsilon_i}}{d \epsilon_i} \right|_{\epsilon_i = 0} = - H_{\hat{\theta}}^{-1} \nabla_{\theta} l_i ( \hat{\theta} ) , \tag{1}$$
where $H_{\hat{\theta}} \triangleq \frac{1}{N} \sum_{n=1}^{N} \nabla_{\theta}^{2} l_n ( \hat{\theta} )$ is the Hessian matrix and $\nabla_{\theta}^{2} l_n ( \theta )$ is the second derivative of the loss at training point $z_n$ with respect to θ . Using the chain rule , we can estimate the change of the model ’ s prediction at a test point $z_j^c$ sampled from the given test distribution Ptest ( Koh & Liang , 2017 ) :
$$\Phi_{\theta} ( z_i , z_j^c ) \triangleq \left. \frac{d l_j ( \hat{\theta}_{\epsilon_i} )}{d \epsilon_i} \right|_{\epsilon_i = 0} = - \nabla_{\theta} l_j ( \hat{\theta} ) H_{\hat{\theta}}^{-1} \nabla_{\theta} l_i ( \hat{\theta} ) . \tag{2}$$
At a fine-grained level , we can measure the influence of perturbing training sample $z_i$ from $( x_i , y_i )$ to $( x_i , y_i + \delta )$ . Let $z_{i\delta} = ( x_i , y_i + \delta )$ and the new loss $l_i ( z_{i\delta} , \theta ) = L ( y_i + \delta , \varphi ( x_i , \theta ) )$ . According to ( Koh & Liang , 2017 ) , the optimal parameters $\hat{\theta}_{\epsilon_i \delta}$ after performing the perturbation on $z_i$ are $\hat{\theta}_{\epsilon_i \delta} = \arg\min_{\theta} \frac{1}{N} \sum_{n=1}^{N} l_n ( \theta ) + \epsilon_i l_i ( z_{i\delta} , \theta ) - \epsilon_i l_i ( \theta )$ . This allows us to estimate the change in model parameters after the fine-grained data perturbation using influence functions as :
$$\left. \frac{d \hat{\theta}_{\epsilon_i \delta}}{d \epsilon_i} \right|_{\epsilon_i = 0} = \psi_{\theta} ( z_{i\delta} ) - \psi_{\theta} ( z_i ) = - H_{\hat{\theta}}^{-1} \left( \nabla_{\theta} l_i ( z_{i\delta} , \hat{\theta} ) - \nabla_{\theta} l_i ( \hat{\theta} ) \right) . \tag{3}$$
Further , the influence of perturbing $z_i$ by $z_{i\delta}$ on the model ’ s prediction at test sample $z_j^c$ is the following :
$$\eta_{\theta \delta} ( z_i , z_j^c ) \triangleq \left. \frac{d l_j ( \hat{\theta}_{\epsilon_i \delta} )}{d \epsilon_i} \right|_{\epsilon_i = 0} = - \nabla_{\theta} l_j ( \hat{\theta} ) H_{\hat{\theta}}^{-1} \left( \nabla_{\theta} l_i ( z_{i\delta} , \hat{\theta} ) - \nabla_{\theta} l_i ( \hat{\theta} ) \right) . \tag{4}$$
It is important to notice that Eq . ( 4 ) holds for arbitrary δ as $\epsilon_i$ approaches 0 . This makes it feasible to measure how relabeling a training sample could influence the model ’ s predictions . Influence-based resampling approaches . Previous studies ( Koh & Liang , 2017 ; Wang et al. , 2020b ) have shown that influence functions have a strong ability to identify harmful samples from the whole training set , in a way that is agnostic to the specific model or data structure .
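For binary labels , the label perturbation entering Eqs . ( 3 ) - ( 4 ) has a simple closed form under the cross-entropy loss : since $\nabla_{\theta} l = ( p - y ) x$ for $p = \mathrm{sigmoid} ( x^{\top} \theta )$ , flipping y to 1 − y shifts the gradient by exactly $( 2y - 1 ) x$ . A quick numerical check of this identity ( our sketch ; the vectors below are arbitrary ) :

```python
# For binary cross-entropy l = -[y log p + (1 - y) log(1 - p)] with
# p = sigmoid(x . theta), the gradient is nabla_theta l = (p - y) x, so the
# relabeling perturbation y -> 1 - y entering Eqs. (3)-(4) shifts the gradient by
# ((p - (1 - y)) - (p - y)) x = (2y - 1) x. Quick numerical check (vectors made up):
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def grad_ce(theta, x, y):
    return (sigmoid(x @ theta) - y) * x

theta = np.array([0.3, -0.2, 0.5, 0.1])
x = np.array([1.0, 2.0, -1.0, 0.5])
for y in (0.0, 1.0):
    shift = grad_ce(theta, x, 1.0 - y) - grad_ce(theta, x, y)
    assert np.allclose(shift, (2.0 * y - 1.0) * x)
```

Because this perturbation is a scalar multiple of $x_i$ , the relabeling influence η differs from the upweighting influence Φ only by a sample-dependent scalar factor .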
Inspired by this , many influence-based resampling approaches ( Ting & Brochu , 2018 ; Wang et al. , 2018 ; 2020b ) have proposed to discard or downweight the identified harmful samples to reduce the test loss . However , different from previous works , which focus on estimating the influence of each training sample on the test performance using Eq . ( 1 ) - ( 2 ) , we perform a fine-grained perturbation on a training sample ’ s label and evaluate its influence using Eq . ( 3 ) - ( 4 ) . Further , our work builds an end-to-end influence-based relabeling framework to reuse the harmful samples , with a theoretical guarantee on the final model performance for any classification task . To be specific , we demonstrate that harmful training instances , after being relabeled properly , do contribute to improving the final model performance , which provides a novel viewpoint on handling biased training data . | The authors present an approach and framework to mitigate training biases by combining influence functions and data relabeling. The idea behind training biases is that part of the data that is used to train the model does not accurately represent the real data distribution seen in the test set. Thus, there is a mismatch between training and test data. This creates a generalization problem for the machine learning model. Other authors have used different resampling approaches to try and address this problem, relying on the training loss and then relabeling the data; or by using influence functions and changing the weight of the harmful examples so that the effect on the test loss is lower. The current authors combine both approaches, and present a framework that relabels harmful training data based on influence functions (on the test set). The results of their experiments show that they are able to reduce the test loss compared to other data resampling approaches. | SP:508e62e7f8cd002f2ee01c2ba032ce79a1f3a469
Learning Causal Relationships from Conditional Moment Restrictions by Importance Weighting | 1 INTRODUCTION . Consider learning the causal relationship between airline ticket prices and demand . As one might expect , prices and demand rise and fall through the seasons , being affected by other events like vacation periods , which are called confounders and may or may not be observable . Due to confounders , naively inferring from this pattern that higher ( lower ) prices increase ( decrease ) demand would be incorrect , and potentially detrimental . Thus , controlling for confounding effects is essential . This issue frequently arises in practice , especially when learning causal ( structural ) relationships is essential to answer counterfactual questions regarding policy intervention and outcome . One approach to deal with confounding effects ( like in the airline example above ) is the instrumental variable ( IV ) approach ( Wooldridge , 2002 ; Greene , 2003 ) . In the IV approach , the conditional moment restriction is defined such that the error of the causal model has zero conditional expectation given the IVs , thus conditioning out the confounding effect . The simplest representation of this idea is two-stage least squares ( 2SLS ) for linear models ( Wooldridge , 2002 ; Greene , 2003 ) . However , given the complex nature of the causal effect and the confounding effect ( and their relation ) , assuming a linear relation can be too strong . Thus , in this paper , we focus on nonparametric IV ( NPIV ) regression , which allows for much more flexible estimation ( Newey & Powell , 2003 ) . NPIV can be viewed as an instance of a more general framework of causal inference under conditional moment restrictions . In this light , the machinery for inference under conditional moment restrictions also applies to NPIV , as do its shortcomings .
One major issue with using conditional moment restrictions for causal inference is that one must approximate the conditional expectation , which is often difficult to do ( see , e.g. , Newey ( 1993 ) ; Donald et al . ( 2003 ) for parametric and Newey & Powell ( 2003 ) ; Ai & Chen ( 2003 ) for nonparametric IVs using sieves ) . For instance , Lewbel ( 2007 ) and Otsu ( 2007 ) estimate the conditional expectation by local kernel density estimation . However , local kernel density estimation suffers under high dimensionality . To address this problem , recent methods suggest the use of machine learning techniques , such as neural networks ( Hartford et al. , 2017 ) . In this paper , we propose transforming conditional moment restrictions into unconditional moment restrictions by importance weighting using the conditional density ratio , which is defined as the ratio of the conditional probability density , conditioned on the IVs , to the unconditional probability density . We show that the unconditional expectation of a random variable weighted by the conditional density ratio is equal to the conditional expectation . Further , we show that it is possible to estimate the conditional density ratio with the least-squares method using a neural network . Once the conditional density ratio is estimated , the usual method of moments , such as GMM , can be used straightforwardly . The contribution of this paper is as follows : ( i ) we propose a novel approach to convert conditional moment restrictions into unconditional moment restrictions by importance weighting ; ( ii ) using our proposed transformation , we develop methods for NPIV ; ( iii ) we analyze the estimation error . 2 SETUP AND NOTATION . Among the various problems of learning causal relationships from conditional moment restrictions , we focus on NPIV regression for ease of discussion . Note that our proposed method can be applied to more general settings , similar to Ai & Chen ( 2003 ) .
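The identity behind the proposed importance weighting , $\mathbb{E} [ r ( X , z ) g ( X ) ] = \mathbb{E} [ g ( X ) \mid Z = z ]$ with conditional density ratio $r ( x , z ) = p ( x \mid z ) / p ( x )$ , can be verified on a toy discrete distribution ( our illustration ; the joint table below is made up ) :

```python
# Check of the importance-weighting identity behind the proposal (our toy
# example; the joint table is made up): with conditional density ratio
# r(x, z) = p(x|z) / p(x), the unconditional expectation of r(X, z) g(X)
# equals the conditional expectation E[g(X) | Z = z].
import numpy as np

P = np.array([[0.10, 0.05],    # joint pmf P[x, z] over X in {0,1,2}, Z in {0,1}
              [0.20, 0.25],
              [0.15, 0.25]])   # entries sum to 1
p_x = P.sum(axis=1)            # marginal p(x)
p_z = P.sum(axis=0)            # marginal p(z)
g = np.array([1.0, 2.0, 5.0])  # an arbitrary function of X

z = 1
p_x_given_z = P[:, z] / p_z[z]
r = p_x_given_z / p_x          # conditional density ratio p(x|z) / p(x)

lhs = np.sum(p_x * r * g)      # E[ r(X, z) g(X) ]  (unconditional expectation)
rhs = np.sum(p_x_given_z * g)  # E[ g(X) | Z = z ]
# lhs == rhs up to floating-point error
```

In the continuous NPIV setting the ratio is unknown and must be estimated , which is where the paper's least-squares neural estimator comes in .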
Suppose that the observations { ( Yi , Xi , Zi ) } ni=1 are i.i.d. , where Yi ∈ Y ⊆ R is an observable scalar random variable , Xi ∈ X ⊆ RdX is a dX dimensional explanatory variable , Zi ∈ Z ⊆ RdZ is a dZ dimensional random variable , called an IV , and X and Z are compact with nonempty interior . We also assume that the probability densities of ( Yi , Xi , Zi ) , ( Yi , Xi ) , Zi exist and denote them by p ( y , x , z ) , p ( y , x ) , and p ( z ) , respectively . Let us define the causal relationships between Yi and Xi as Yi = f ∗ ( Xi ) + εi , where f∗ : X → Y is a structural function and εi is a sub-Gaussian error term with mean zero . To learn f∗ , suppose the IV Zi satisfies the following conditional moment restrictions : E [ εi|Zi ] = 0 ∀i ∈ { 1 , 2 , . . . , n } . ( 1 ) We also assume that , under the conditional moment restriction , f∗ is uniquely identified . Our goal is to learn f∗ from the conditional moment restrictions in ( 1 ) . If Zi = Xi , this problem boils down to the estimation of the conditional expectation ( regression function ) E [ Yi|Xi ] . However , when E [ εi|Xi ] ≠ 0 , E [ Yi|Xi ] is not equivalent to f∗ ( Xi ) ; that is , typical regression analysis , such as least squares , may not return the correct estimate of f∗ ( Xi ) . 3 PRELIMINARIES AND LITERATURE REVIEW . In this section , we briefly review causal inference under moment restrictions . 3.1 IV METHOD FOR LINEAR STRUCTURAL FUNCTIONS . One of the basic cases of using the IV is when f∗ is a linear model X > i θ ∗ with dX dimensional vector θ∗ and the error term εi is correlated with the explanatory variable Xi . In this case , the parameter θ∗ of the linear model can be estimated if there are IVs of dimension dX or more that satisfy the unconditional moment restrictions , E [ Ziεi ] = 0dZ , where 0d is a d dimensional zero vector . In the just-identified case ( dX = dZ ) , we can estimate θ∗ as θ̂IV = ( 1 n ∑n i=1 ZiX > i ) −1 1 n ∑n i=1 ZiYi .
In the over-identified case ( dX ≤ dZ ) , we can estimate it by the 2SLS . In the 2SLS , we first regress Xi by Zi ; then using a dX dimensional function ĝ ( Zi ) obtained from the first stage regression , we estimate θ∗ as θ̂2SLS = ( 1 n ∑n i=1 X̂iX̂ > i ) −1 1 n ∑n i=1 X̂iYi , where X̂d , i = Z > i ( 1 n ∑n j=1 ZjZ > j ) −1 1 n ∑n j=1 ZjXd , j and X̂i = ( X̂1 , i , . . . , X̂dX , i ) > . More generally , we can formulate the estimation of the linear structural function by unconditional moment restrictions defined using the IV ; that is , E [ m ( θ∗ ; Yi , Xi , Zi ) ] = 0dm , where θ∗ ∈ Θ is a parameter representing the causal effect , Θ is the parameter space , and m : Θ×R×X×Z → Rdm is a dm dimensional moment function . Then , the GMM estimator is defined as θ̂GMM = arg minθ∈Θ ( 1 n ∑n i=1m ( θ ; Yi , Xi , Zi ) ) > Wdm ( 1 n ∑n i=1m ( θ ; Yi , Xi , Zi ) ) , where Wdm is a dm × dm weight matrix . If Wdm is chosen as W ∗dm ∝ E [ m ( θ ∗ ; Yi , Xi , Zi ) > m ( θ∗ ; Yi , Xi , Zi ) ] , the estimator θ̂GMM is efficient under the posited moment conditions . Note that the GMM includes the 2SLS as a special case where εi has the same variance for all i ∈ { 1 , . . . , n } and zero covariance . Three methods have been proposed to obtain W ∗dm : two-step GMM , iterative GMM ( Hayashi , 2000 ) , and continuous updating GMM ( CU estimator ; CUE , Hansen et al. , 1996 ) . These estimators have the same asymptotic distribution , but different non-asymptotic properties . In particular , CUE is known as a special case of the generalized empirical likelihood ( GEL ) estimator ( Owen , 1988 ; Smith , 1997 ) , which plays an important role in estimation with many moment restrictions ( Newey & Smith , 2004 ) . 3.2 NPIV AND LEARNING FROM CONDITIONAL MOMENT RESTRICTIONS . In linear structural functions , zero covariance between the IV and error term suffices for identification .
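The just-identified IV estimator and the 2SLS estimator above can be sketched numerically. The data-generating process below is a made-up linear model with an unobserved confounder; it is an illustration of the estimators, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta_star = 200_000, 1.5

u = rng.normal(size=n)                      # unobserved confounder
z = rng.normal(size=n)                      # instrument: drives x, independent of eps
x = 0.8 * z + u + 0.3 * rng.normal(size=n)
eps = u + 0.3 * rng.normal(size=n)          # error correlated with x via u
y = theta_star * x + eps

theta_ols = (x @ y) / (x @ x)               # biased: E[eps | x] != 0
theta_iv = (z @ y) / (z @ x)                # just-identified IV estimator

# 2SLS written in its two-stage form (identical to IV when dX = dZ):
x_hat = z * ((z @ x) / (z @ z))             # first stage: project x onto z
theta_2sls = (x_hat @ y) / (x_hat @ x)

print(theta_ols, theta_iv, theta_2sls)
```

With this seed and sample size, the IV and 2SLS estimates land close to the true coefficient 1.5, while ordinary least squares is pushed away from it by the confounder.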
However , in nonparametric structural functions , we require a stronger restriction : the error term has conditional expectation zero given the IVs . Then , we can characterize the solution of NPIV as that of an integral equation K ( z ) = E [ Yi|Zi = z ] = E [ f∗ ( Xi ) |z ] = ∫ f∗ ( x ) dF ( x|z ) with F denoting the conditional c.d.f. of x given z . The identification also results in the uniqueness of the solution . Several estimators have been proposed . Newey & Powell ( 2003 ) proposes a nonparametric analogue of the 2SLS for linear structural functions . They first define the linear-in-parameter model as a model to approximate the nonparametric structural function f∗ , where they use a linear approximation with basis expansion , such as sieve ( series ) regression . Let us denote the approximation as f∗ ( Xi ) ≈ ψ ( Xi ) > θ∗ , where ψ : X → Rdψ , dψ ≤ n is a vector-valued function consisting of outputs of basis functions . Then , they conduct the 2SLS as follows : ( i ) they define a linear-in-parameter model as an approximation to the nonparametric model of Zi , and estimate E [ ψ ( Xi ) |Zi ] using that model ; ( ii ) they regress Yi by E [ ψ ( Xi ) |Zi ] to estimate θ∗ . In contrast , Ai & Chen ( 2003 ) proposes a nonparametric analogue of the GMM . Ai & Chen ( 2003 ) also approximates the nonparametric structural function f∗ by a linear-in-parameter model . However , unlike Newey & Powell ( 2003 ) , Ai & Chen ( 2003 ) estimates E [ ( Yi − f ( Xi ) ) |Zi ] , instead of estimating E [ ψ ( Xi ) |Zi ] , by another linear-in-parameter model . Using the estimator of E [ ( Yi−f ( Xi ) ) |Zi ] , Ai & Chen ( 2003 ) transforms conditional moment restrictions into unconditional ones and applies the GMM to estimate f∗ . In addition to these two typical methods , Darolles et al . ( 2011 ) and Carrasco et al . ( 2007 ) propose their own methods for NPIV .
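A minimal numerical sketch of the sieve-based two-stage procedure of Newey & Powell (2003) described above, using a polynomial basis of degree 2. The data-generating process (f*(x) = x², one instrument, one scalar confounder) is made up for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

u = rng.normal(size=n)                 # confounder
z = rng.normal(size=n)                 # instrument
x = z + 0.5 * u                        # endogenous regressor
y = x**2 + 0.5 * u                     # structural f*(x) = x^2, E[eps | z] = 0

def basis(v):                          # sieve basis psi(v) = (1, v, v^2)
    return np.column_stack([np.ones_like(v), v, v**2])

Psi, Phi = basis(x), basis(z)

# First stage: estimate E[psi(X) | Z] by regressing each basis column on Phi.
Pi_hat = np.linalg.lstsq(Phi, Psi, rcond=None)[0]
Psi_hat = Phi @ Pi_hat

# Second stage: regress Y on the fitted E[psi(X) | Z].
theta_hat = np.linalg.lstsq(Psi_hat, y, rcond=None)[0]
print(theta_hat)        # should be close to (0, 0, 1), i.e. f*(x) = x^2
```

Since f* lies in the span of the basis here, the recovered coefficients approximate (0, 0, 1); in general the basis only approximates f* and its dimension must grow with n.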
This is how the NPIV problem is typically cast into the framework of conditional moment restrictions , where a parameter representing the causal relationship satisfies E [ m ( θ∗ ; Yi , Xi , Zi ) |Zi ] = 0dm . As with NPIV , we often define the moment function as a function of the nonparametric function f instead of the parameter θ . We can estimate f∗ defined under conditional moment restrictions using variants of GMM or empirical likelihood , e.g. , Ai & Chen ( 2003 ) ; Domínguez & Lobato ( 2004 ) ; Otsu ( 2007 ) ; Lewbel ( 2007 ) , by transforming the conditional moment restrictions to unconditional ones . | The present paper proposes an importance weighting approach to address the issue of regression under conditional moment restrictions in the context of non-parametric instrumental variables. The authors provide error bounds on the learned structural function. They show that their approach has a convergence rate of $\mathcal{O}(1/\sqrt{n})$. The main contributions of this work are presented in Sections 4 and 5, where they first introduce their method based on re-weighting by an estimated conditional density ratio function and then provide an estimation error analysis. The theoretical results are complemented with two synthetic experiments which show that the proposed method can compete with and in certain cases improve upon the state of the art. | SP:0ec9a162aba8bbd0d7d0eb165271c07877ac6452 |
Learning Causal Relationships from Conditional Moment Restrictions by Importance Weighting | 1 INTRODUCTION . Consider learning the causal relationship between airline ticket prices and demand . As one might expect , prices and demand rise and fall through the seasons , being affected by other events like vacation periods , which are called confounders and may or may not be observable . Due to confounders , naively inferring from this pattern that higher ( lower ) prices increase ( decrease ) demand would be incorrect , and potentially detrimental . Thus , controlling for confounding effects is essential . This issue frequently arises in practice , especially when learning causal ( structural ) relationships is essential to answer counterfactual questions regarding policy intervention and outcome . One approach to deal with confounding effects ( like in the airline example above ) is the instrumental variable ( IV ) approach ( Wooldridge , 2002 ; Greene , 2003 ) . In the IV approach , the conditional moment restriction is defined such that the causal model satisfies the restriction of zero expected value given IVs , thus conditioning out the confounding effect . The simplest representation of this idea is the two-stage least squares ( 2SLS ) for linear models ( Wooldridge , 2002 ; Greene , 2003 ) . However , given the complex nature of the causal effect and confounding effect ( and their relation ) , assuming a linear relation can be too strong . Thus , in this paper , we focus on nonparametric IV ( NPIV ) regressions , allowing for much more flexible estimation ( Newey & Powell , 2003 ) . NPIV can be viewed as an instance of a more general framework of causal inference under conditional moment restrictions . In this light , the machinery for inference under conditional moment restrictions also applies to NPIV , and so do its shortcomings .
One major issue with using the conditional moment restrictions for causal inference is that one must approximate the conditional expectation , which is often difficult to do ( see , e.g. , Newey ( 1993 ) ; Donald et al . ( 2003 ) for parametric and Newey & Powell ( 2003 ) ; Ai & Chen ( 2003 ) for nonparametric IVs using sieves ) . For instance , Lewbel ( 2007 ) and Otsu ( 2007 ) estimate the conditional expectation by local kernel density estimation . However , local kernel density estimation suffers under high dimensionality . For this problem , recent methods suggest the use of machine learning methods , such as neural networks ( Hartford et al. , 2017 ) . In this paper , we propose transforming conditional moment restrictions into unconditional moment restrictions by importance weighting using the conditional density ratio , which is defined as the ratio of the conditional probability density , conditioned on the IVs , to the unconditional probability density . We show that the unconditional expectation of a random variable weighted by the conditional density ratio is equal to the conditional expectation . Further , we show that it is possible to estimate the conditional density ratio by least squares with a neural network . Once the conditional density ratio is estimated , the usual method of moments , such as GMM , can be used straightforwardly . The contributions of this paper are as follows : ( i ) we propose a novel approach to convert conditional moment restrictions to unconditional moment restrictions by importance weighting ; ( ii ) using our proposed transformation , we develop methods for NPIV ; ( iii ) we analyze the estimation error . 2 SETUP AND NOTATION . Among various problems of learning causal relationships from conditional moment restrictions , we focus on the NPIV regression for ease of discussion . Note that our proposed method can be applied to more general settings , similar to Ai & Chen ( 2003 ) .
Suppose that the observations { ( Yi , Xi , Zi ) } ni=1 are i.i.d. , where Yi ∈ Y ⊆ R is an observable scalar random variable , Xi ∈ X ⊆ RdX is a dX dimensional explanatory variable , Zi ∈ Z ⊆ RdZ is a dZ dimensional random variable , called an IV , and X and Z are compact with nonempty interior . We also assume that the probability densities of ( Yi , Xi , Zi ) , ( Yi , Xi ) , Zi exist and denote them by p ( y , x , z ) , p ( y , x ) , and p ( z ) , respectively . Let us define the causal relationships between Yi and Xi as Yi = f ∗ ( Xi ) + εi , where f∗ : X → Y is a structural function and εi is a sub-Gaussian error term with mean zero . To learn f∗ , suppose the IV Zi satisfies the following conditional moment restrictions : E [ εi|Zi ] = 0 ∀i ∈ { 1 , 2 , . . . , n } . ( 1 ) We also assume that , under the conditional moment restriction , f∗ is uniquely identified . Our goal is to learn f∗ from the conditional moment restrictions in ( 1 ) . If Zi = Xi , this problem boils down to the estimation of the conditional expectation ( regression function ) E [ Yi|Xi ] . However , when E [ εi|Xi ] ≠ 0 , E [ Yi|Xi ] is not equivalent to f∗ ( Xi ) ; that is , typical regression analysis , such as least squares , may not return the correct estimate of f∗ ( Xi ) . 3 PRELIMINARIES AND LITERATURE REVIEW . In this section , we briefly review causal inference under moment restrictions . 3.1 IV METHOD FOR LINEAR STRUCTURAL FUNCTIONS . One of the basic cases of using the IV is when f∗ is a linear model X > i θ ∗ with dX dimensional vector θ∗ and the error term εi is correlated with the explanatory variable Xi . In this case , the parameter θ∗ of the linear model can be estimated if there are IVs of dimension dX or more that satisfy the unconditional moment restrictions , E [ Ziεi ] = 0dZ , where 0d is a d dimensional zero vector . In the just-identified case ( dX = dZ ) , we can estimate θ∗ as θ̂IV = ( 1 n ∑n i=1 ZiX > i ) −1 1 n ∑n i=1 ZiYi .
In the over-identified case ( dX ≤ dZ ) , we can estimate it by the 2SLS . In the 2SLS , we first regress Xi by Zi ; then using a dX dimensional function ĝ ( Zi ) obtained from the first stage regression , we estimate θ∗ as θ̂2SLS = ( 1 n ∑n i=1 X̂iX̂ > i ) −1 1 n ∑n i=1 X̂iYi , where X̂d , i = Z > i ( 1 n ∑n j=1 ZjZ > j ) −1 1 n ∑n j=1 ZjXd , j and X̂i = ( X̂1 , i , . . . , X̂dX , i ) > . More generally , we can formulate the estimation of the linear structural function by unconditional moment restrictions defined using the IV ; that is , E [ m ( θ∗ ; Yi , Xi , Zi ) ] = 0dm , where θ∗ ∈ Θ is a parameter representing the causal effect , Θ is the parameter space , and m : Θ×R×X×Z → Rdm is a dm dimensional moment function . Then , the GMM estimator is defined as θ̂GMM = arg minθ∈Θ ( 1 n ∑n i=1m ( θ ; Yi , Xi , Zi ) ) > Wdm ( 1 n ∑n i=1m ( θ ; Yi , Xi , Zi ) ) , where Wdm is a dm × dm weight matrix . If Wdm is chosen as W ∗dm ∝ E [ m ( θ ∗ ; Yi , Xi , Zi ) > m ( θ∗ ; Yi , Xi , Zi ) ] , the estimator θ̂GMM is efficient under the posited moment conditions . Note that the GMM includes the 2SLS as a special case where εi has the same variance for all i ∈ { 1 , . . . , n } and zero covariance . Three methods have been proposed to obtain W ∗dm : two-step GMM , iterative GMM ( Hayashi , 2000 ) , and continuous updating GMM ( CU estimator ; CUE , Hansen et al. , 1996 ) . These estimators have the same asymptotic distribution , but different non-asymptotic properties . In particular , CUE is known as a special case of the generalized empirical likelihood ( GEL ) estimator ( Owen , 1988 ; Smith , 1997 ) , which plays an important role in estimation with many moment restrictions ( Newey & Smith , 2004 ) . 3.2 NPIV AND LEARNING FROM CONDITIONAL MOMENT RESTRICTIONS . In linear structural functions , zero covariance between the IV and error term suffices for identification .
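The two-step GMM procedure mentioned above can be sketched for the linear moment function m(θ; Y, X, Z) = Z(Y − Xθ) with two instruments (an over-identified case). The simulated design below is made up for illustration; in this linear case the GMM minimizer has a closed form for any weight matrix.

```python
import numpy as np

rng = np.random.default_rng(2)
n, theta_star = 100_000, 2.0

u = rng.normal(size=n)                               # unobserved confounder
Z = rng.normal(size=(n, 2))                          # two instruments, dZ > dX
x = 0.6 * Z[:, 0] + 0.4 * Z[:, 1] + u + 0.3 * rng.normal(size=n)
y = theta_star * x + u + 0.3 * rng.normal(size=n)

def moments(theta):
    """m(theta; Y, X, Z) = Z * (Y - X * theta), one row per observation."""
    return Z * (y - x * theta)[:, None]

def gmm(W):
    """Closed-form minimizer of gbar(theta)' W gbar(theta) for the linear moment."""
    a = (Z * x[:, None]).mean(axis=0)                # mean of Z X (moment slope)
    b = (Z * y[:, None]).mean(axis=0)                # mean of Z Y
    return (a @ W @ b) / (a @ W @ a)

theta1 = gmm(np.eye(2))                              # step 1: identity weight
M = moments(theta1)
W_opt = np.linalg.inv(M.T @ M / n)                   # step 2: efficient weight
theta2 = gmm(W_opt)
print(theta1, theta2)
```

Both steps recover the true coefficient here; the second step matters for efficiency rather than consistency.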
However , in nonparametric structural functions , we require a stronger restriction : the error term has conditional expectation zero given the IVs . Then , we can characterize the solution of NPIV as that of an integral equation K ( z ) = E [ Yi|Zi = z ] = E [ f∗ ( Xi ) |z ] = ∫ f∗ ( x ) dF ( x|z ) with F denoting the conditional c.d.f. of x given z . The identification also results in the uniqueness of the solution . Several estimators have been proposed . Newey & Powell ( 2003 ) proposes a nonparametric analogue of the 2SLS for linear structural functions . They first define the linear-in-parameter model as a model to approximate the nonparametric structural function f∗ , where they use a linear approximation with basis expansion , such as sieve ( series ) regression . Let us denote the approximation as f∗ ( Xi ) ≈ ψ ( Xi ) > θ∗ , where ψ : X → Rdψ , dψ ≤ n is a vector-valued function consisting of outputs of basis functions . Then , they conduct the 2SLS as follows : ( i ) they define a linear-in-parameter model as an approximation to the nonparametric model of Zi , and estimate E [ ψ ( Xi ) |Zi ] using that model ; ( ii ) they regress Yi by E [ ψ ( Xi ) |Zi ] to estimate θ∗ . In contrast , Ai & Chen ( 2003 ) proposes a nonparametric analogue of the GMM . Ai & Chen ( 2003 ) also approximates the nonparametric structural function f∗ by a linear-in-parameter model . However , unlike Newey & Powell ( 2003 ) , Ai & Chen ( 2003 ) estimates E [ ( Yi − f ( Xi ) ) |Zi ] , instead of estimating E [ ψ ( Xi ) |Zi ] , by another linear-in-parameter model . Using the estimator of E [ ( Yi−f ( Xi ) ) |Zi ] , Ai & Chen ( 2003 ) transforms conditional moment restrictions into unconditional ones and applies the GMM to estimate f∗ . In addition to these two typical methods , Darolles et al . ( 2011 ) and Carrasco et al . ( 2007 ) propose their own methods for NPIV .
This is how the NPIV problem is typically cast into the framework of conditional moment restrictions , where a parameter representing the causal relationship satisfies E [ m ( θ∗ ; Yi , Xi , Zi ) |Zi ] = 0dm . As with NPIV , we often define the moment function as a function of the nonparametric function f instead of the parameter θ . We can estimate f∗ defined under conditional moment restrictions using variants of GMM or empirical likelihood , e.g. , Ai & Chen ( 2003 ) ; Domínguez & Lobato ( 2004 ) ; Otsu ( 2007 ) ; Lewbel ( 2007 ) , by transforming the conditional moment restrictions to unconditional ones . | The paper presents a method for estimating causal effects (or more generally non/parametric functions) under conditional moment restrictions, focusing in this case on nonparametric IVs. The main idea is casting these conditional restrictions to an unconditional version through importance resampling using a conditional density estimator (e.g. based on a least-squares method with a NN). The paper also provides a theoretical analysis of the estimation error, providing an error bound (Thm 1). The method is evaluated based on three econometrics datasets from the literature, showing a smaller MSE than the compared methods. | SP:0ec9a162aba8bbd0d7d0eb165271c07877ac6452 |
POLAR: A Polynomial Arithmetic Framework for Verifying Neural-Network Controlled Systems | 1 INTRODUCTION . Neural networks have been increasingly used as the central decision makers in a variety of tasks Mnih et al . ( 2015 ) ; Lillicrap et al . ( 2016 ) ; Pan et al . ( 2018 ) . However , the use of neural-network controllers also gives rise to new challenges in verifying the correctness of the resulting closed-loop control systems , especially in safety-critical settings . In this paper , we consider the reachability verification problem of neural-network controlled systems ( NNCSs ) . The high-level architecture of a simple NNCS is shown in Figure 1 , in which the neural network senses the system state , i.e. , the value of x , at discrete time steps , and computes the corresponding control values u for updating the system dynamics , which is defined by an ordinary differential equation ( ODE ) over x and u . The bounded-time reachability analysis problem of an NNCS is to compute an ( over-approximated ) reachable set that contains all the trajectories starting from the initial set for a finite number of control steps . Figure 2 shows an illustration of reachable sets for 4 steps , where the orange regions are the reachable set , and two red arrowed curves are two exemplary trajectories starting from the initial set X0 ( blue ) . Reachability analysis of general NNCSs is notoriously difficult due to nonlinearity in both the neural-network controller and the plant , further exacerbated by the coupling of the controller and the plant . Since exact reachability of general nonlinear systems is undecidable Alur & Dill ( 1994 ) , current approaches for reachability analysis of nonlinear dynamical systems aim at computing a tight overapproximation of the reachable sets Dreossi et al . ( 2016 ) ; Lygeros et al . ( 1999 ) ; Yang et al . ( 2016 ) ; Prajna & Jadbabaie ( 2004 ) ; Huang et al . ( 2017a ) ; Frehse et al . ( 2011 ) ; Chen et al . ( 2013 ) ; Althoff ( 2015 ) .
Earlier works on NNCS verification drew on techniques for computing the output ranges of neural networks Huang et al . ( 2017b ) ; Katz et al . ( 2017 ) ; Dutta et al . ( 2018 ) ; Wang et al . ( 2018 ) ; Weng et al . ( 2018 ) ; Zhang et al . ( 2018 ) ; Singh et al . ( 2019 ) when integrating with the aforementioned reachability analysis . However , they have been shown to be ineffective for NNCS verification due to the lack of consideration of the interaction between the neural-network controller and the plant dynamics Dutta et al . ( 2019 ) ; Huang et al . ( 2019 ) . In particular , since their primary goal is to bound the output range of the neural network instead of approximating its input-output function , they cannot be used to track dependency across the closed-loop system and across multiple time steps in reachability analysis . ( Figure 1 : A typical NNCS model . ) Direct end-to-end approximation Dutta et al . ( 2019 ) ; Huang et al . ( 2019 ) ; Fan et al . ( 2020 ) and layer-by-layer propagation Ivanov et al . ( 2019 ; 2021b ; a ) are the two main categories of approaches in the NNCS verification literature . The former directly computes a function approximation of the neural network and suffers from efficiency problems , i.e. , it cannot handle systems with more than a few dimensions due to the need to sample in the input space . The latter approach tries to exploit the neural network structure and uses Taylor model ( TM ) arithmetic to more efficiently obtain a function approximation by propagating the TMs layer by layer . A Taylor model ( p , I ) consists of a polynomial p for point-wise approximation and an interval remainder I to bound the approximation error . However , due to the limitations of basic TM arithmetic , these approaches cannot handle non-differentiable activation functions and suffer from rapid growth of the interval remainders during propagation , which effectively degrades the analysis to an interval analysis .
In this paper , we tackle the challenge of dependency tracking by constructing a function overapproximation , specifically a Taylor model ( TM ) approximation , of the neural-network controller using a unified polynomial arithmetic framework ( POLAR ) that enables precise layer-by-layer propagation of TMs for general feed-forward neural networks . POLAR addresses the key challenges of applying basic TM arithmetic through a novel use of univariate Bernstein polynomial interpolation to handle non-differentiable activation functions and symbolic remainders to taper the growth of the interval remainders . For the former , we leverage an efficient sampling-based analysis to provide a sound overapproximation . The latter avoids the so-called wrapping effect Jaulin et al . ( 2001 ) that can lead to rapid growth of the interval remainders in linear mappings . In summary , our work makes the following novel contributions . • We propose a novel polynomial arithmetic framework for bounded-time reachability analysis of NNCSs , keeping track of state-wise dependency across the closed-loop system . Compared to existing propagation-based approaches , our framework has the advantage of being able to handle NN controllers with general and heterogeneous activation functions . • We propose neuron-wise Bernstein polynomial interpolation and show that it can be seamlessly integrated with Taylor model approximations . In addition , we present the first application of symbolic remainders to tightening the overapproximation of neural network behaviors . • We conduct a comprehensive comparison of our approach with state-of-the-art techniques , including an evaluation on the full set of benchmarks published in these works , showing the overwhelming advantage of our proposed approach . 2 PRELIMINARIES . A Neural-Network Controlled System ( NNCS ) is a continuous plant governed by a neural network controller . 
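The closed-loop execution semantics of an NNCS can be illustrated with a plain simulation (no overapproximation). The plant ẋ = −x + u, the one-hidden-layer tanh controller, and all weights below are made up; the point is only the sample-and-hold interaction between controller and ODE.

```python
import numpy as np

# Hypothetical toy NNCS: plant xdot = -x + u, controller kappa = a tiny tanh net.
W1, b1 = np.array([[1.0], [-0.5]]), np.array([0.1, 0.2])   # made-up weights
W2, b2 = np.array([0.5, -1.0]), -0.3

def kappa(x):
    """Feed-forward NN controller: R -> R."""
    h = np.tanh(W1 @ np.array([x]) + b1)
    return float(W2 @ h + b2)

def f(x, u):
    """Plant dynamics xdot = f(x, u)."""
    return -x + u

def simulate(x0, steps=4, dc=0.2, substeps=50):
    """Sample-and-hold execution: u is refreshed only every control step dc."""
    x, traj = x0, [x0]
    for _ in range(steps):               # K control steps
        u = kappa(x)                     # sensed at t = j * dc, held constant
        for _ in range(substeps):        # Euler integration within the step
            x = x + (dc / substeps) * f(x, u)
        traj.append(x)
    return traj

print(simulate(1.0))
```

Reachability analysis replaces this single-trajectory loop with set-valued propagation: all of X0 at once, and an overapproximation of κ instead of a point evaluation.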
The plant dynamics is defined by an ODE of the form ẋ = f ( x , u ) wherein the state variables and control inputs are denoted by the vectors x and u respectively . We assume that the function f is at least locally Lipschitz continuous such that its solution w.r.t . an initial state and constant control inputs is unique Meiss ( 2007 ) . We denote the input-output mapping of the neural network controller as κ , and the controller is triggered every δc time , which is also called the control stepsize ; then a system execution ( trajectory ) is produced in the following way . Starting from an initial state x ( 0 ) , the controller senses the system state at the beginning of every control step t=jδc for j=0 , 1 , . . . , and updates the control inputs to vj=κ ( x ( jδc ) ) . The system ’ s dynamics in that control step is governed by the ODE ẋ=f ( x , vj ) . Given an initial state set X0 ⊂ Rn , all executions from a state in it can be formally defined by a flowmap function ϕN : X0 × R≥0 → Rn , such that the system state at any time t ≥ 0 from any initial state x0 ∈ X0 is ϕN ( x0 , t ) . We call a state x′ ∈ Rn reachable if there exists x0 ∈ X0 and t ≥ 0 such that x′ = ϕN ( x0 , t ) . The reachability problem on NNCS is to decide whether a state is reachable in a given NNCS , and it is undecidable because NNCSs are more expressive than the two-counter machines , for which the reachability problem is already undecidable Alur & Dill ( 1994 ) . Many formal verification problems can be reduced to the reachability problem . For example , the safety verification problem can be reduced to the reachability problem to an unsafe state . In the paper , we focus on computing the reachable set for an NNCS over a bounded number K of control steps . Since flowmap ϕN often does not have a closed form due to the nonlinear ODEs , we seek to compute state-wise overapproximations for it over time segments , that is , in each control step [ ( j − 1 ) δc , jδc ] for j = 1 , . . .
, K , the reachable set is overapproximated by a group of flowpipes F1 ( x0 , τ ) , . . . , FN ( x0 , τ ) over the N uniformly subdivided time segments of the time interval , such that Fi ( x0 , τ ) is a state-wise overapproximation of ϕN ( x0 , ( j − 1 ) δc + ( i− 1 ) δ + τ ) for τ ∈ [ 0 , δc/N ] , i.e. , Fi ( x0 , τ ) contains the exact reachable state from any initial state x0 in the i-th time segment of the j-th control step . Here , τ is the local time variable which is independent in each flowpipe . A high-level flowpipe construction algorithm is presented as follows , in which X̂0 = X0 and δ = δc/N which is called the flowpipe step or time step . 1 : for j = 1 to K do 2 : Computing an overapproximation Ûj−1 for the control input range κ ( X̂j−1 ) ; 3 : Computing the flowpipes F1 ( x0 , τ ) , . . . , FN ( x0 , τ ) for the continuous dynamics ẋ = f ( x , u ) , u̇ = 0 from the initial set x ( 0 ) ∈ X̂j−1 , u ( 0 ) ∈ Ûj−1 ; 4 : R ← R∪ { ( F1 ( x0 , τ ) , . . . , FN ( x0 , τ ) } ; 5 : X̂j ← FN ( z , δ ) ; 6 : end for Notice that x ( 0 ) denotes the local initial set for the ODE used in the current control step , while the variables x0 in a flowpipe are the symbolic representation of an initial state in X0 . Intuitively , a flowpipe overapproximates not only the reachable set in a time step , but also the dependency from an initial state to its reachable state at a particular time . In the paper , we use Taylor models to represent the flowpipes . Taylor models were originally proposed to compute higher-order overapproximations for the ranges of continuous functions ( see Berz & Makino ( 1998 ) ) . They can be viewed as a higher-order extension of intervals Moore et al . ( 2009 ) , which are sets of real numbers between lower and upper real bounds , e.g. , the interval [ a , b ] wherein a ≤ b represents the set { x | a ≤ x ≤ b } . A Taylor model ( TM ) is a pair ( p , I ) wherein p is a polynomial of degree k over a finite group of variables x1 , . . .
, xn ranging in an interval domain D ⊂ Rn , and I is the remainder interval . The range of a TM is the Minkowski sum of the range of its polynomial and the remainder interval . Thereby we sometimes intuitively denote a TM ( p , I ) by p + I in the paper . TMs are often used as overapproximations for smooth functions . Given a smooth function f ( x ) with x ∈ D , its order k TM overapproximation , or simply TM , can be obtained as ( pf ( x ) , If ) such that pf is the order k Taylor approximation of f w.r.t . a point in D , and If is a remainder interval which ensures that ∀x ∈ D. ( f ( x ) ∈ pf ( x ) + If ) . Since pf is also over the same variables x , the overestimation in a TM can be measured by the width of If , i.e. , if the remainder is zero , then pf defines the same mapping as f . At this point , a TM serves more like an overapproximate function than just a range overapproximation such as intervals or polyhedra . TM arithmetic . TMs are closed under operations such as addition , multiplication , and integration ( see Makino & Berz ( 2003 ) ) . Given functions f , g that are overapproximated by TMs ( pf , If ) and ( pg , Ig ) respectively , a TM for f + g can be computed as ( pf + pg , If + Ig ) , and an order k TM for f · g can be computed as ( pf · pg − rk , If · B ( pg ) + B ( pf ) · Ig + If · Ig + B ( rk ) ) wherein B ( p ) denotes an interval enclosure of the range of p , and the truncated part rk consists of the terms in pf · pg of degrees > k. Similar to reals and intervals , TMs can also be organized as vectors and matrices to overapproximate the functions whose ranges are multidimensional . For a vector of TMs ( ( p1 , I1 ) , . . . , ( pn , In ) ) T such that p1 , . . . , pn are over the same variables , we collectively denote it by ( p , I ) such that p is the polynomial vector ( p1 , . . . , pn ) T and I is the interval vector ( I1 , . . . , In ) T .
As an example , consider the TMs ( 1− 0.5x2 , [ −0.1 , 0.1 ] ) and ( x+ 0.1x4 , [ −0.2 , 0.2 ] ) over the domain x ∈ [ −1 , 1 ] . The order 4 TM for the sum is ( 1 + x − 0.5x2 + 0.1x4 , [ −0.3 , 0.3 ] ) , and the order 4 TM for the product is ( x − 0.5x3 + 0.1x4 , [ −0.38 , 0.38 ] ) . Although the technique of Taylor model flowpipe construction can be used to generate state-wise overapproximate flowpipes for ODEs ( see Berz & Makino ( 1998 ) ; Chen ( 2015 ) ) , it is still a challenge how to compute and represent the set Ûj−1 by a TM which overapproximates the dependency from an initial state in X0 to the actual control input range Uj−1 = { κ ( ϕN ( x0 , ( j−1 ) δc ) ) |x0 ∈ X0 } at the beginning of that control step . We illustrate this dependency in Figure 3 . Our approach aims at solving this problem . | This paper proposes POLAR, a reachability analysis framework for neural network-controlled systems. POLAR improves the previous Taylor model overapproximation approach in two aspects: 1) it uses Bernstein polynomials to overapproximate activation functions; 2) it uses symbolic remainders to mitigate the so-called wrapping effect, that is, instead of computing an overapproximated interval at each step, it keeps track of linear transformation matrices to delay overapproximation as late as possible. The experimental evaluation compares POLAR with three other recent works and shows that POLAR achieves the best performance on almost all tasks. | SP:abd06d2f55908d07477bc45cfd4f5bd474fb84c8 |
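The TM addition and multiplication rules above can be implemented directly for univariate TMs over [−1, 1]. This sketch uses one crude choice of interval enclosure B(p) (bounding each monomial separately), so the product remainder it computes may be slightly tighter or looser than the one reported in the example; the soundness property, that the product TM encloses f·g whenever the input TMs enclose f and g, holds by construction of the remainder formula.

```python
import numpy as np

# A univariate Taylor model over x in [-1, 1]: (coeffs, (lo, hi)),
# where coeffs[k] is the coefficient of x^k and (lo, hi) is the remainder I.

def poly_bound(c):
    """Crude interval enclosure B(p) of sum_k c[k] x^k over [-1, 1]:
    x^k ranges over [0, 1] for even k >= 2 and over [-1, 1] for odd k."""
    lo = hi = c[0]
    for k, ck in enumerate(c[1:], start=1):
        xk_lo = 0.0 if k % 2 == 0 else -1.0
        lo += min(ck * xk_lo, ck * 1.0)
        hi += max(ck * xk_lo, ck * 1.0)
    return lo, hi

def i_mul(a, b):
    """Interval product."""
    prods = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return min(prods), max(prods)

def tm_add(tm1, tm2):
    (p1, I1), (p2, I2) = tm1, tm2
    p = np.zeros(max(len(p1), len(p2)))
    p[:len(p1)] += p1
    p[:len(p2)] += p2
    return p, (I1[0] + I2[0], I1[1] + I2[1])

def tm_mul(tm1, tm2, order):
    (p1, I1), (p2, I2) = tm1, tm2
    full = np.convolve(p1, p2)                       # exact product polynomial
    p = full[:order + 1]                             # keep degrees <= order
    rk = np.r_[np.zeros(order + 1), full[order + 1:]]  # truncated terms
    I = (0.0, 0.0)
    for J in (i_mul(I1, poly_bound(p2)), i_mul(poly_bound(p1), I2),
              i_mul(I1, I2), poly_bound(rk)):
        I = (I[0] + J[0], I[1] + J[1])
    return p, I

# The worked example: (1 - 0.5 x^2, [-0.1, 0.1]) and (x + 0.1 x^4, [-0.2, 0.2]).
f = (np.array([1.0, 0.0, -0.5]), (-0.1, 0.1))
g = (np.array([0.0, 1.0, 0.0, 0.0, 0.1]), (-0.2, 0.2))

s = tm_add(f, g)           # expected: (1 + x - 0.5 x^2 + 0.1 x^4, [-0.3, 0.3])
m = tm_mul(f, g, order=4)
print(s, m)
```

Running this reproduces the sum TM above exactly; the product polynomial matches x − 0.5x³ + 0.1x⁴, with a remainder of roughly [−0.38, 0.33], slightly tighter on top than the symmetric interval in the text because the even-power bound is used for the truncated x⁶ term.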
POLAR: A Polynomial Arithmetic Framework for Verifying Neural-Network Controlled Systems | 1 INTRODUCTION . Neural networks have been increasingly used as the central decision makers in a variety of tasks Mnih et al . ( 2015 ) ; Lillicrap et al . ( 2016 ) ; Pan et al . ( 2018 ) . However , the use of neural-network controllers also gives rise to new challenges in verifying the correctness of the resulting closed-loop control systems , especially in safety-critical settings . In this paper , we consider the reachability verification problem of neural-network controlled systems ( NNCSs ) . The high-level architecture of a simple NNCS is shown in Figure 1 , in which the neural network senses the system state , i.e. , the value of x , at discrete time steps , and computes the corresponding control values u for updating the system dynamics , which is defined by an ordinary differential equation ( ODE ) over x and u . The bounded-time reachability analysis problem of an NNCS is to compute an ( over-approximated ) reachable set that contains all the trajectories starting from the initial set for a finite number of control steps . Figure 2 shows an illustration of reachable sets for 4 steps , where the orange regions are the reachable set , and two red arrowed curves are two exemplary trajectories starting from the initial set X0 ( blue ) . Reachability analysis of general NNCSs is notoriously difficult due to nonlinearity in both the neural-network controller and the plant , further exacerbated by the coupling of the controller and the plant . Since exact reachability of general nonlinear systems is undecidable Alur & Dill ( 1994 ) , current approaches for reachability analysis of nonlinear dynamical systems aim at computing a tight overapproximation of the reachable sets Dreossi et al . ( 2016 ) ; Lygeros et al . ( 1999 ) ; Yang et al . ( 2016 ) ; Prajna & Jadbabaie ( 2004 ) ; Huang et al . ( 2017a ) ; Frehse et al . ( 2011 ) ; Chen et al . ( 2013 ) ; Althoff ( 2015 ) .
Earlier works on NNCS verification drew on techniques for computing the output ranges of neural networks Huang et al . ( 2017b ) ; Katz et al . ( 2017 ) ; Dutta et al . ( 2018 ) ; Wang et al . ( 2018 ) ; Weng et al . ( 2018 ) ; Zhang et al . ( 2018 ) ; Singh et al . ( 2019 ) when integrating with the aforementioned reachability analysis . However , they have been shown to be ineffective for NNCS verification due to the lack of consideration of the interaction between the neural-network controller and the plant dynamics Dutta et al . ( 2019 ) ; Huang et al . ( 2019 ) . In particular , since their primary goal is to bound the output range of the neural network instead of approximating its input-output function , they cannot be used to track dependency across the closed-loop system and across multiple time steps in reachability analysis ( Figure 1 : A typical NNCS model ) . Direct end-to-end approximation Dutta et al . ( 2019 ) ; Huang et al . ( 2019 ) ; Fan et al . ( 2020 ) and layer-by-layer propagation Ivanov et al . ( 2019 ; 2021b ; a ) are the two main categories of approaches in the NNCS verification literature . The former directly computes a function approximation of the neural network and suffers from efficiency problems , i.e . it cannot handle systems with more than a few dimensions due to the need to sample in the input space . The latter approach tries to exploit the neural network structure and uses Taylor model ( TM ) arithmetic to more efficiently obtain a function approximation by propagating the TMs layer by layer . A Taylor model ( p , I ) consists of a polynomial p for point-wise approximation , and an interval remainder I to bound the approximation error . However , due to the limitations of basic TM arithmetic , these approaches cannot handle non-differentiable activation functions and suffer from rapid growth of the interval remainders during propagation , which effectively degrades the analysis to an interval analysis .
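A tiny illustration ( ours , not from the paper ) of why pure range bounds lose the dependency that functional ( TM-style ) propagation keeps : subtracting two copies of the same quantity gives a spuriously wide interval , while a symbolic polynomial representation cancels exactly .

```python
# Tiny illustration (not from the paper) of the dependency problem: over
# x in [0, 1], evaluate x - x once as ranges and once symbolically.

lo, hi = 0.0, 1.0

# Range-only view: the two copies of x are treated as independent intervals,
# so [0,1] - [0,1] = [-1, 1] even though the true difference is always 0.
diff_interval = (lo - hi, hi - lo)

# Symbolic (TM-like) view: keep x - x as a polynomial in x; the dependency
# is preserved and the difference cancels to the zero polynomial exactly.
poly_x = [0.0, 1.0]                                   # p(x) = x
diff_poly = [a - b for a, b in zip(poly_x, poly_x)]   # p(x) - p(x) = 0
```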
In this paper , we tackle the challenge of dependency tracking by constructing a function overapproximation , specifically a Taylor model ( TM ) approximation , of the neural-network controller using a unified polynomial arithmetic framework ( POLAR ) that enables precise layer-by-layer propagation of TMs for general feed-forward neural networks . POLAR addresses the key challenges of applying basic TM arithmetic through a novel use of univariate Bernstein polynomial interpolation to handle non-differentiable activation functions and symbolic remainders to taper the growth of the interval remainders . For the former , we leverage an efficient sampling-based analysis to provide a sound overapproximation . The latter avoids the so-called wrapping effect Jaulin et al . ( 2001 ) that can lead to rapid growth of the interval remainders in linear mappings . In summary , our work makes the following novel contributions . • We propose a novel polynomial arithmetic framework for bounded-time reachability analysis of NNCSs , keeping track of state-wise dependency across the closed-loop system . Compared to existing propagation-based approaches , our framework has the advantage of being able to handle NN controllers with general and heterogeneous activation functions . • We propose neuron-wise Bernstein polynomial interpolation and show that it can be seamlessly integrated with Taylor model approximations . In addition , we present the first application of symbolic remainders to tightening the overapproximation of neural network behaviors . • We conduct a comprehensive comparison of our approach with state-of-the-art techniques , including an evaluation on the full set of benchmarks published in these works , showing the overwhelming advantage of our proposed approach . 2 PRELIMINARIES . A Neural-Network Controlled System ( NNCS ) is a continuous plant governed by a neural network controller . 
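As a rough illustration of the Bernstein-interpolation idea above , the sketch below builds the degree-n Bernstein polynomial of ReLU on an interval and estimates the approximation error by dense sampling . POLAR 's actual construction derives a sound remainder bound via a sampling-based analysis ; the grid-based `max_err` here is only an empirical estimate , and all names are ours :

```python
import math

# Rough sketch of univariate Bernstein interpolation for a
# non-differentiable activation (names are ours; POLAR derives a sound
# remainder bound, whereas max_err below is only an empirical estimate
# on a dense grid).

def bernstein(f, n, a, b):
    """Degree-n Bernstein polynomial of f on [a, b], as a callable."""
    nodes = [f(a + (b - a) * k / n) for k in range(n + 1)]
    def B(x):
        t = (x - a) / (b - a)
        return sum(nodes[k] * math.comb(n, k) * t**k * (1 - t)**(n - k)
                   for k in range(n + 1))
    return B

relu = lambda x: max(x, 0.0)
B = bernstein(relu, 10, -1.0, 1.0)   # Bernstein polynomials match f at a, b

grid = [-1.0 + 2.0 * i / 2000 for i in range(2001)]
max_err = max(abs(relu(x) - B(x)) for x in grid)   # roughly 0.12 at degree 10
```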
The plant dynamics is defined by an ODE of the form ẋ = f ( x , u ) wherein the state variables and control inputs are denoted by the vectors x and u respectively . We assume that the function f is at least locally Lipschitz continuous such that its solution w.r.t . an initial state and constant control inputs is unique Meiss ( 2007 ) . We denote the input-output mapping of the neural network controller as κ , and the controller is triggered every δc time units , where δc is also called the control stepsize . A system execution ( trajectory ) is then produced in the following way . Starting from an initial state x ( 0 ) , the controller senses the system state at the beginning of every control step t = jδc for j = 0 , 1 , . . . , and updates the control inputs to vj = κ ( x ( jδc ) ) . The system ’ s dynamics in that control step is governed by the ODE ẋ = f ( x , vj ) . Given an initial state set X0 ⊂ Rn , all executions from a state in it can be formally defined by a flowmap function ϕN : X0 × R≥0 → Rn , such that the system state at any time t ≥ 0 from any initial state x0 ∈ X0 is ϕN ( x0 , t ) . We call a state x′ ∈ Rn reachable if there exists x0 ∈ X0 and t ≥ 0 such that x′ = ϕN ( x0 , t ) . The reachability problem on NNCSs is to decide whether a state is reachable in a given NNCS ; it is undecidable since NNCSs are more expressive than the two-counter machines for which the reachability problem is already undecidable Alur & Dill ( 1994 ) . Many formal verification problems can be reduced to the reachability problem . For example , the safety verification problem can be reduced to the reachability problem for an unsafe state . In the paper , we focus on computing the reachable set for an NNCS over a bounded number K of control steps . Since the flowmap ϕN often does not have a closed form due to the nonlinear ODEs , we seek to compute state-wise overapproximations for it over time segments , that is , in each control step [ ( j − 1 ) δc , jδc ] for j = 1 , . . .
, K , the reachable set is overapproximated by a group of flowpipes F1 ( x0 , τ ) , . . . , FN ( x0 , τ ) over the N uniformly subdivided time segments of the time interval , such that Fi ( x0 , τ ) is a state-wise overapproximation of ϕN ( x0 , ( j − 1 ) δc + ( i− 1 ) δ + τ ) for τ ∈ [ 0 , δc/N ] , i.e. , Fi ( x0 , τ ) contains the exact reachable state from any initial state x0 in the i-th time segment of the j-th control step . Here , τ is the local time variable which is independent in each flowpipe . A high-level flowpipe construction algorithm is presented as follows , in which X̂0 = X0 and δ = δc/N which is called the flowpipe step or time step . 1 : for j = 1 to K do 2 : Compute an overapproximation Ûj−1 for the control input range κ ( X̂j−1 ) ; 3 : Compute the flowpipes F1 ( x0 , τ ) , . . . , FN ( x0 , τ ) for the continuous dynamics ẋ = f ( x , u ) , u̇ = 0 from the initial set x ( 0 ) ∈ X̂j−1 , u ( 0 ) ∈ Ûj−1 ; 4 : R ← R∪ { ( F1 ( x0 , τ ) , . . . , FN ( x0 , τ ) ) } ; 5 : X̂j ← FN ( z , δ ) ; 6 : end for Notice that x ( 0 ) denotes the local initial set for the ODE used in the current control step , while the variables x0 in a flowpipe are the symbolic representation of an initial state in X0 . Intuitively , a flowpipe overapproximates not only the reachable set in a time step , but also the dependency from an initial state to its reachable state at a particular time . In the paper , we use Taylor models to represent the flowpipes . Taylor models were originally proposed to compute higher-order overapproximations for the ranges of continuous functions ( see Berz & Makino ( 1998 ) ) . They can be viewed as a higher-order extension of intervals Moore et al . ( 2009 ) , which are sets of real numbers between lower and upper real bounds , e.g. , the interval [ a , b ] wherein a ≤ b represents the set { x | a ≤ x ≤ b } . A Taylor model ( TM ) is a pair ( p , I ) wherein p is a polynomial of degree k over a finite group of variables x1 , . . .
, xn ranging in an interval domain D ⊂ Rn , and I is the remainder interval . The range of a TM is the Minkowski sum of the range of its polynomial and the remainder interval . Thereby we sometimes intuitively denote a TM ( p , I ) by p + I in the paper . TMs are often used as overapproximations for smooth functions . Given a smooth function f ( x ) with x ∈ D , its order k TM overapproximation , or simply TM , can be obtained as ( pf ( x ) , If ) such that pf is the order k Taylor approximation of f w.r.t . a point in D , and If is a remainder interval which ensures that ∀x ∈ D. ( f ( x ) ∈ pf ( x ) + If ) . Since pf is over the same variables x , the overestimation in a TM can be measured solely by the width of If , i.e. , if the remainder is zero , then pf defines the same mapping as f . At this point , a TM serves more like an overapproximate function than just a range overapproximation such as intervals or polyhedra . TM arithmetic . TMs are closed under operations such as addition , multiplication , and integration ( see Makino & Berz ( 2003 ) ) . Given functions f , g that are overapproximated by TMs ( pf , If ) and ( pg , Ig ) respectively , a TM for f + g can be computed as ( pf + pg , If + Ig ) , and an order k TM for f · g can be computed as ( pf · pg − rk , If · B ( pg ) + B ( pf ) · Ig + If · Ig + B ( rk ) ) wherein B ( p ) denotes an interval enclosure of the range of p , and the truncated part rk consists of the terms in pf · pg of degrees > k. Similar to reals and intervals , TMs can also be organized as vectors and matrices to overapproximate functions whose ranges are multidimensional . For a vector of TMs ( ( p1 , I1 ) , . . . , ( pn , In ) ) T such that p1 , . . . , pn are over the same variables , we collectively denote it by ( p , I ) such that p is the polynomial vector ( p1 , . . . , pn ) T and I is the interval vector ( I1 , . . . , In ) T .
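The addition and multiplication rules above can be sketched directly , using the paper 's example : the sum and product of ( 1 − 0.5x2 , [ −0.1 , 0.1 ] ) and ( x + 0.1x4 , [ −0.2 , 0.2 ] ) on [ −1 , 1 ] . The naive interval enclosure `ibound` is an illustrative stand-in for a real TM library 's verified range bounder , and is only valid on the domain [ −1 , 1 ] :

```python
# Sketch of the TM addition/multiplication rules above, checked on the
# paper's example. The naive enclosure ibound is an illustrative stand-in
# for a verified range bounder; it is only valid on the domain [-1, 1].

def ipow(n):                        # interval enclosure of x**n on [-1, 1]
    if n == 0:
        return (1.0, 1.0)
    return (0.0, 1.0) if n % 2 == 0 else (-1.0, 1.0)

def imul(I, J):
    prods = [a * b for a in I for b in J]
    return (min(prods), max(prods))

def iadd(I, J):
    return (I[0] + J[0], I[1] + J[1])

def ibound(p):                      # enclosure of sum_k p[k] x**k on [-1, 1]
    out = (0.0, 0.0)
    for k, c in enumerate(p):
        out = iadd(out, imul((c, c), ipow(k)))
    return out

def tm_add(tf, tg):                 # (pf + pg, If + Ig)
    (pf, If), (pg, Ig) = tf, tg
    n = max(len(pf), len(pg))
    p = [(pf[k] if k < len(pf) else 0.0) + (pg[k] if k < len(pg) else 0.0)
         for k in range(n)]
    return p, iadd(If, Ig)

def tm_mul(tf, tg, order):  # (pf*pg - rk, If*B(pg) + B(pf)*Ig + If*Ig + B(rk))
    (pf, If), (pg, Ig) = tf, tg
    full = [0.0] * (len(pf) + len(pg) - 1)
    for i, a in enumerate(pf):
        for j, b in enumerate(pg):
            full[i + j] += a * b
    p = full[:order + 1]
    rk = [0.0] * (order + 1) + full[order + 1:]   # terms of degree > order
    I = iadd(iadd(imul(If, ibound(pg)), imul(ibound(pf), Ig)),
             iadd(imul(If, Ig), ibound(rk)))
    return p, I

f = ([1.0, 0.0, -0.5], (-0.1, 0.1))           # (1 - 0.5x^2, [-0.1, 0.1])
g = ([0.0, 1.0, 0.0, 0.0, 0.1], (-0.2, 0.2))  # (x + 0.1x^4, [-0.2, 0.2])
s_poly, s_rem = tm_add(f, g)       # 1 + x - 0.5x^2 + 0.1x^4, rem [-0.3, 0.3]
p_poly, p_rem = tm_mul(f, g, 4)    # x - 0.5x^3 + 0.1x^4, rem in [-0.38, 0.38]
```

With these rules the sum TM matches the example , and the computed product remainder is contained in the symmetrized [ −0.38 , 0.38 ] reported there .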
As an example , consider the TMs ( 1− 0.5x2 , [ −0.1 , 0.1 ] ) and ( x+ 0.1x4 , [ −0.2 , 0.2 ] ) over the domain x ∈ [ −1 , 1 ] . The order 4 TM for the sum is ( 1 + x − 0.5x2 + 0.1x4 , [ −0.3 , 0.3 ] ) , and the order 4 TM for the product is ( x − 0.5x3 + 0.1x4 , [ −0.38 , 0.38 ] ) . Although the technique of Taylor model flowpipe construction can be used to generate state-wise overapproximate flowpipes for ODEs ( see Berz & Makino ( 1998 ) ; Chen ( 2015 ) ) , it remains a challenge how to compute and represent the set Ûj−1 by a TM which overapproximates the dependency from an initial state in X0 to the actual control input range Uj−1 = { κ ( ϕN ( x0 , ( j−1 ) δc ) ) | x0 ∈ X0 } at the beginning of that control step . We illustrate this dependency in Figure 3 . Our approach aims at solving this problem . | The authors attack a significant problem of bounded-time reachability analysis of neural-network controlled systems (NNCSs). The method allows for verification of decision-making procedures applied by trained neural networks to systems with a known continuous model (continuous plants). The claimed main novelty of the paper is the application of basic Taylor Model arithmetic in combination with univariate Bernstein polynomial interpolation to handle non-differentiable NN activation functions. The algorithm is tested on a set of benchmarks introduced in other related work. The main theoretical result states that the TM flowpipes computed using the POLAR approach are state-wise overapproximations of the reachable sets. | SP:abd06d2f55908d07477bc45cfd4f5bd474fb84c8
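The high-level flowpipe construction loop described earlier can be sketched with plain intervals standing in for Taylor models ( a deliberate simplification : intervals drop the dependency on the initial state x0 that TM flowpipes retain ) . The toy plant ẋ = −x + u , the linear stand-in controller , and all constants are illustrative assumptions :

```python
# Interval sketch of the high-level flowpipe loop (illustrative only:
# intervals stand in for Taylor models and therefore drop the dependency
# on the initial state that TM flowpipes retain). Toy plant x' = -x + u
# with a linear stand-in controller kappa(x) = -0.5 x.

def iscale(I, c):                    # c * [lo, hi] for a scalar c
    a, b = c * I[0], c * I[1]
    return (min(a, b), max(a, b))

def iadd(I, J):
    return (I[0] + J[0], I[1] + J[1])

def reach(X0, delta_c=0.1, K=20, N=5):
    X, flowpipes = X0, []
    h = delta_c / N                  # flowpipe (time) step: delta = delta_c/N
    for _ in range(K):               # one iteration per control step
        U = iscale(X, -0.5)          # step 2: overapprox. of the control range
        for _ in range(N):           # step 3: Euler flowpipes within the step
            X = iadd(iscale(X, 1.0 - h), iscale(U, h))   # X + h*(-X + U)
            flowpipes.append(X)      # step 4: collect the flowpipes
        # step 5: the end-of-step box X seeds the next control step
    return X, flowpipes

X_final, pipes = reach((0.9, 1.1))   # a narrow interval near 0 after K steps
```

By construction every point trajectory of the same discretized toy system stays inside the propagated boxes, which is the soundness property the real (TM-based) loop provides with far less overapproximation.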
A Free Lunch from the Noise: Provable and Practical Exploration for Representation Learning | 1 INTRODUCTION . Reinforcement learning ( RL ) is dedicated to solving the sequential decision making problem , where an agent interacts with an unknown environment to find the best policy that maximizes the expected cumulative rewards ( Sutton & Barto , 2018 ) . It is known that tabular algorithms operating directly on the original states and actions achieve the minimax-optimal regret depending on the cardinality of the state and action space ( Jaksch et al. , 2010 ; Osband & Van Roy , 2016 ; Azar et al. , 2017 ; Jin et al. , 2018 ) . However , these algorithms become intractable for real-world problems with an enormous number of states , due to the curse of dimensionality . Learning with function approximation upon a good representation is a natural idea to tackle the curse and serves as a key to the success of deep learning ( Bengio et al. , 2013 ) . In fact , representation learning lies at the heart of the empirical successes of deep RL in video games ( Mnih et al. , 2013 ) , robotics ( Levine et al. , 2016 ) , Go ( Silver et al. , 2017 ) , and dialogue systems ( Jiang et al. , 2021 ) , to name a few . Meanwhile , the importance and benefits of representation in RL are rigorously justified ( Jin et al. , 2020 ; Yang & Wang , 2020 ) , which quantify the regret in terms of the dimension of a known representation for a subclass of Markov decision processes ( MDPs ) ( Puterman , 2014 ) . A natural question arises : How to design a provably efficient and practical algorithm for representation learning in RL ? Here , by “ provably efficient ” we mean the sample complexity of the algorithm can be rigorously characterized only in terms of the complexity of the representation class , without explicit dependency on the number of states and actions , while by “ practical ” we mean the algorithm can be implemented and deployed for real-world applications .
Therefore , we not only require that the learned representation be expressive enough for handling complex practical environments , but also require the operations in the algorithm to be tractable and computation/memory efficient . The major difficulty of this question is two-fold : i ) the trade-off between expressiveness and tractability in the design of the representations ; ii ) the learning of the representation is intimately coupled with exploration . Specifically , a desired representation should be sufficiently expressive1 to capture the dynamic system , while still tractable in practice . ( 1For a formal definition of expressiveness , see ( Agarwal et al. , 2020a ) . ) However , in general , an expressive representation leads to complicated optimization in learning . For example , the representation in the linear MDP is exponentially stronger than that of latent variable MDPs in terms of expressiveness ( Agarwal et al. , 2020a ) . However , its representation learning either depends on an MLE oracle that is computationally intractable due to the constraint on the regularity of the conditional density ( Agarwal et al. , 2020a ) , or requires a complicated constrained min-max-min-max optimization ( Modi et al. , 2021 ) . On the other hand , Misra et al . ( 2020 ) consider the representation introduced by an encoder in the block MDP ( Du et al. , 2019 ) , in which the learning problem can be completed by a regression , but at the cost that the representations in block MDPs are even weaker than those of latent variable MDPs ( Agarwal et al. , 2020a ) . Meanwhile , the coupling of representation learning and exploration induces difficulty in practical algorithm design and analysis . One cannot learn a precise representation without enough experience from comprehensive exploration , while the exploration depends on a reliable estimate of the representation . Most of the known results depend on a policy-cover-based exploration ( Du et al. , 2019 ; Misra et al.
, 2020 ; Agarwal et al. , 2020a ; Modi et al. , 2021 ) , which maintains and samples a set of policies during training for systematic exploration , and thus significantly increases the computation and memory cost in implementation . In this work , we propose Spectral Dynamics Embedding ( SPEDE ) , bypassing the aforementioned difficulties and answering the question affirmatively . SPEDE is established on an observation that connects stochastic control dynamics ( Osband & Van Roy , 2014 ; Kakade et al. , 2020 ) with linear MDPs in Section 3 . Specifically , by exploiting the property of the noise in the stochastic control dynamics , we can recover the factorization of its corresponding Markov transition operator in closed form without extra computation . This equivalence immediately overcomes the computational intractability of the model estimation in Agarwal et al . ( 2020a ) , breaking the trade-off between expressiveness and tractability . More importantly , the connection unifies two highly related models that are commonly regarded as different , i.e. , stochastic nonlinear control models ( Kakade et al. , 2020 ) and linear MDPs ( Jin et al. , 2020 ) , and therefore provides the opportunity to share benefits from both sides : i ) it sheds light on exploiting optimistic control for exploratory representation learning , instead of expensive policy-cover-based exploration ; and ii ) it introduces a linear sufficient feature from the spectral space of the Markov operator , in which planning can be completed efficiently . We rigorously characterize the statistical property of SPEDE in terms of regret w.r.t . the complexity of the representation class in Section 4 , without explicit dependence on the raw features of states and actions . With the established unified view , our results generalize online control ( Kakade et al. , 2020 ) and linear MDPs ( Jin et al. , 2020 ) beyond known features . We finally demonstrate the superiority of SPEDE on the MuJoCo benchmarks in Section 5 .
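The factorization observation can be illustrated with a standard construction : when s′ = f ( s , a ) + ε with Gaussian ε , the transition density is ( up to normalization ) an RBF kernel between f ( s , a ) and s′ , and such a kernel admits a feature factorization k ( x , y ) ≈ ⟨φ ( x ) , φ ( y ) ⟩ . The sketch below uses random Fourier features as one such factorization ; SPEDE 's actual embedding differs in details , and all constants here are ours :

```python
import math, random

# Hedged sketch of the key observation: with Gaussian noise
# s' = f(s,a) + eps, the transition density is (up to normalization) an RBF
# kernel between f(s,a) and s', and an RBF kernel factorizes into features,
# k(x, y) ~= <phi(x), phi(y)>. Random Fourier features give one such
# factorization; SPEDE's actual embedding differs, and all constants are ours.

random.seed(0)
sigma, d, D = 0.5, 3, 20000          # noise scale, state dim, feature dim

W = [[random.gauss(0.0, 1.0 / sigma) for _ in range(d)] for _ in range(D)]
b = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

def phi(x):                          # phi(x) = sqrt(2/D) * cos(Wx + b)
    s = math.sqrt(2.0 / D)
    return [s * math.cos(sum(wi * xi for wi, xi in zip(w, x)) + bk)
            for w, bk in zip(W, b)]

x, y = [0.3, -0.1, 0.5], [0.5, 0.0, 0.4]
dist2 = sum((a - c) ** 2 for a, c in zip(x, y))
exact = math.exp(-dist2 / (2.0 * sigma ** 2))      # RBF kernel value
approx = sum(px * py for px, py in zip(phi(x), phi(y)))
# approx tracks exact up to O(1/sqrt(D)) Monte Carlo error
```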
It significantly outperforms the empirical state-of-the-art RL algorithms . To our knowledge , SPEDE is the first representation learning algorithm achieving statistical , computational , and memory efficiency with sufficient expressiveness . 1.1 RELATED WORK . There have been many attempts to learn a variety of algorithmic representations in RL designed for different purposes , e.g. , bisimulation ( Ferns et al. , 2004 ; Gelada et al. , 2019 ; Castro , 2020 ) and reconstruction ( Watter et al. , 2015 ; Hafner et al. , 2019 ) . Recently , there are also several works considering spectral features based on decomposing different variants of the transition operator , including successor features ( Dayan , 1993 ; Kulkarni et al. , 2016 ) , proto-value functions ( Mahadevan & Maggioni , 2007 ; Wu et al. , 2018 ) , spectral state-aggregation ( Duan et al. , 2018 ; Zhang & Wang , 2019 ) , and contrastive Fourier features ( Nachum & Yang , 2021 ) . These works are highly related to the proposed SPEDE . Besides the fact that these features focus on state-only representations , the major differences between SPEDE and these spectral features lie in : i ) the target operators in existing spectral features are state-state transitions , which cancel the effect of actions ; ii ) the target operators are estimated based on empirical data from a fixed behavior policy under the implicit assumption that the estimated operator is uniformly accurate , ignoring the major difficulty in exploration , while SPEDE carefully designs systematic exploration with a theoretical guarantee ; iii ) most of the existing spectral features rely on explicit decomposition of the operators , while SPEDE obtains the spectral representation for free . Turning to rigorously justified representation learning , a large body of effort focuses on policy-cover-based methods in an “ explore-then-commit ” strategy ( Du et al. , 2019 ; Misra et al. , 2020 ; Agarwal et al. , 2020a ; Modi et al. , 2021 ) .
These algorithms learn a uniformly accurate model/representation through a reward-free exploration , which decouples the learning from the exploration . Then , the representation from the learned model can be used to seek the optimal policy for the particular reward . The major difficulty that impedes their practical application is the computation and memory cost : the policy-cover-based exploration requires a set of exploratory policies to be maintained and sampled from during training , which can be extremely expensive . Another two related lines of research are model-based RL and online control , which are commonly regarded as overlapping but separate communities considering different formulations of the dynamics . Our finding establishes the equivalence among the models , thereby bridging these communities . Osband & Van Roy ( 2014 ) and Kakade et al . ( 2020 ) are the most related to our work in each community . These models generalize their corresponding linear models , i.e. , Jin et al . ( 2020 ) and Cohen et al . ( 2019 ) , with a general nonlinear model and a kernel function within a known RKHS , respectively . The regret of the optimistic ( pessimistic ) algorithm has been carefully characterized for these models . However , both of the proposed algorithms in Osband & Van Roy ( 2014 ) and Kakade et al . ( 2020 ) require a planning oracle to seek the optimal policy , which might be computationally intractable . In SPEDE , this is easily handled in the equivalent linear MDP . 2 PRELIMINARIES . A Markov Decision Process ( MDP ) is one of the most standard models studied in reinforcement learning , denoted by the tuple M = ( S , A , r , T , µ , H ) , where S is the state space , A is the action space , r : S × A → R+ is the reward function2 ( where R+ denotes the set of nonnegative real numbers ) , T : S × A → ∆ ( S ) is the transition , µ is an initial state distribution , and H is the horizon3 ( i.e . the length of each episode ) .
A ( potentially non-stationary ) policy π can be defined as { πh } h∈ [ H ] where πh : S → A , ∀h ∈ [ H ] . Following the standard notation , we define the value function V πh ( s ) : = ET , π [ ∑H−1 t=h r ( st , at ) | sh = s ] and the action-value function ( i.e . the Q function ) Qπh ( s , a ) = ET , π [ ∑H−1 t=h r ( st , at ) | sh = s , ah = a ] , which are the expected cumulative rewards under transition T when executing policy π starting from s and ( s , a ) . With these two definitions at hand , it is straightforward to show the following Bellman equation : Qπh ( sh , ah ) = r ( sh , ah ) + Esh+1∼T ( ·|sh , ah ) V πh+1 ( sh+1 ) . Reinforcement learning aims at finding the optimal policy π∗ = arg maxπ Es∼µV π0 ( s ) . It is well known that in the tabular setting , when the state space and action space are finite , we can provably identify the optimal policy with both sample-efficient and computationally efficient optimism-based methods ( e.g . Azar et al. , 2017 ; Jin et al. , 2018 ; Zhang et al. , 2021 ) with complexity proportional to |S||A| . However , in practice , the cardinality of the state and action space can be large or even infinite . Hence , we need to incorporate function approximation into the learning algorithm when we deal with such cases . The linear MDP ( Jin et al. , 2020 ) or low-rank MDP ( Agarwal et al. , 2020a ; Modi et al. , 2021 ) is the most well-known reinforcement learning model that can incorporate linear function approximation with theoretical guarantees , thanks to the following assumption on the transition : T ( s′|s , a ) = 〈φ ( s , a ) , µ ( s′ ) 〉H , ( 1 ) where φ : S × A → H , µ : S → H are two feature maps and H is a Hilbert space . The most essential observation is that Qπh ( s , a ) for any policy π is linear w.r.t . φ ( s , a ) , due to the following identity ( Jin et al. , 2020 ) : ∫ V πh+1 ( sh+1 ) T ( sh+1|sh , ah ) dsh+1 = 〈 φ ( sh , ah ) , ∫ V πh+1 ( sh+1 ) µ ( sh+1 ) dsh+1 〉 H .
( 2 ) Therefore , φ serves as a sufficient representation for the estimation of Qπh , which can provide uncertainty estimation with standard linear model analysis and eventually lead to sample-efficient learning when φ is fixed and known to the agent ( see Theorem 3.1 in Jin et al. , 2020 ) . However , in general we do not have such a representation in advance4 , and we need to learn the representation from data , which constrains the applicability of algorithms derived with a fixed and known representation . | This paper studies the single-reward episodic MDP problem with a model-based TS algorithm. Assuming the given model class satisfies realizability, the regularization property, bounded Eluder dimension, a low covering number, and that the true dynamics is the stochastic control model with Gaussian noise transitions (eq 5), the authors show a polynomial Bayesian regret guarantee. In addition, experiments on OpenAI MuJoCo are conducted. | SP:8afa601f36fdecbcca39ca4b514c896cc73893cf
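Equations ( 1 ) and ( 2 ) can be checked on a toy finite MDP ( our illustrative construction , not from the paper ) : taking φ ( s , a ) = T ( · | s , a ) and µ ( s′ ) = es′ makes the transition factorize trivially , and the Bellman backup is then exactly the inner product ⟨φ ( s , a ) , V πh+1⟩ , i.e. , Qπh − r is linear in φ with weights V πh+1 :

```python
import random

# Toy finite check of the linear-MDP identity (our illustrative
# construction, not from the paper): with phi(s, a) = T(.|s, a) and
# mu(s') = e_{s'}, eq (1) holds trivially, and the Bellman backup shows
# Q_h(s, a) = r(s, a) + <phi(s, a), V_{h+1}>, i.e. Q_h - r is linear in phi.

random.seed(1)
S, A, H = 4, 2, 3

def rand_dist(n):                    # a random probability vector of length n
    w = [random.random() for _ in range(n)]
    t = sum(w)
    return [x / t for x in w]

T = {(s, a): rand_dist(S) for s in range(S) for a in range(A)}   # rows = phi(s, a)
r = {(s, a): random.random() for s in range(S) for a in range(A)}
pi = {(h, s): 0 for h in range(H) for s in range(S)}             # a fixed policy

V = [0.0] * S                        # V_H = 0; backward induction for Q^pi, V^pi
for h in reversed(range(H)):
    theta = V[:]                     # the linear weights are exactly V_{h+1}
    Q = {(s, a): r[(s, a)] + sum(T[(s, a)][s2] * theta[s2] for s2 in range(S))
         for s in range(S) for a in range(A)}
    V = [Q[(s, pi[(h, s)])] for s in range(S)]
```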
A Free Lunch from the Noise: Provable and Practical Exploration for Representation Learning | 1 INTRODUCTION . Reinforcement learning ( RL ) dedicates to solve the sequential decision making problem , where an agent is interacting with an unknown environment to find the best policy that maximizes the expected cumulative rewards ( Sutton & Barto , 2018 ) . It is known that the tabular algorithms direct controlling over the original state and action achieve the minimax-optimal regret depending on the cardinality of the state and action space ( Jaksch et al. , 2010 ; Osband & Van Roy , 2016 ; Azar et al. , 2017 ; Jin et al. , 2018 ) . However , these algorithms become intractable for the real-world problems with an enormous number of states , due to the curse of dimensionality . Learning with function approximation upon good representation is a natural idea to tackle the curse and serving as the key for the success of deep learning ( Bengio et al. , 2013 ) . In fact , representation learning lies at the heart of the empirical successes of deep RL in video games ( Mnih et al. , 2013 ) , robotics ( Levine et al. , 2016 ) , Go ( Silver et al. , 2017 ) , dialogue systems ( Jiang et al. , 2021 ) to name a few . Meanwhile , the importance and benefits of the representation in RL is rigorously justified ( Jin et al. , 2020 ; Yang & Wang , 2020 ) , which quantifies the regret in terms of the dimension of the known representation based on a subclass in Markov decision processes ( MDPs ) ( Puterman , 2014 ) . A natural question raises : How to design provably efficient and practical algorithm for representation learning in RL ? Here , by “ provably efficient ” we mean the sample complexity of the algorithm can be rigorously characterized only in terms of the complexity of representation class , without explicit dependency on the number of states and actions , while by “ practical ” we mean the algorithm can be implemented and deployed for the real-world applications . 
Therefore , we not only require the representation learned is expressive enough for handling complex practical environments , but also require the operations in the algorithm tractable and computation/memory efficient . The major difficulty of this question lies in two-fold : i ) The trade-off between the expressiveness and the tractability in the design of the representations ; ii ) The learning of representation is intimately coupled with exploration . Specifically , a desired representation should be sufficiently expressive1 to capture the dynamic system , while still tractable in practice . However , in general , expressive representation leads to complicated optimization in learning . For example , the representation in the linear MDP is exponential 1For a formal definition of expressiveness , see ( Agarwal et al. , 2020a ) . stronger than the latent variable MDPs in terms of expressiveness ( Agarwal et al. , 2020a ) . However , its representation learning is either depending on a MLE oracle that is computational intractable due to the constraint on the regularity of conditional density ( Agarwal et al. , 2020a ) , or a complicated constrained min-max-min-max optimization ( Modi et al. , 2021 ) . On the other hand , Misra et al . ( 2020 ) considers the representation introduced by an encoder in block MDP ( Du et al. , 2019 ) , in which the learning problem can be completed by a regression , but with the payoff that the representations in block MDP is even weaker than latent variable MDP ( Agarwal et al. , 2020a ) . Meanwhile , the coupling of the learning of representation and exploration induces the difficulty in practical algorithm design and analysis . One can not learn a precise representation without enough experiences from a comprehensive exploration , while the exploration depends on an reliable estimation of the representation . Most of the known results depends on a policy-cover-based exploration ( Du et al. , 2019 ; Misra et al. 
, 2020 ; Agarwal et al. , 2020a ; Modi et al. , 2021 ) , which maintains and samples a set of policies during training for systematic exploration , that significant increases the computation and memory cost in implementation . In this work , we propose Spectral Dynamics Embedding ( SPEDE ) , bypassing the aforementioned difficulties and answering the question affirmatively . SPEDE is established on an observation that connects the stochastic control dynamics ( Osband & Van Roy , 2014 ; Kakade et al. , 2020 ) with linear MDPs in Section 3 . Specifically , by exploiting the property of the noise in the stochastic control dynamics , we can recover the factorization of its corresponding Markov transition operator in closed-form without extra computation . This equivalency immediately overcome the computational intractability in the model estimation in Agarwal et al . ( 2020a ) , which breaks the trade-off between expressiveness and tractability . More importantly , the connection unifies two highly-related but commonly-known different models , i.e. , stochastic nonlinear control models ( Kakade et al. , 2020 ) and linear MDPs ( Jin et al. , 2020 ) , therefore , provides the opportunity to share benefits from both sides : i ) , it sheds the light on exploiting optimistic control for exploratory representation learning , instead of expensive policycover-based exploration ; and ii ) , it introduces the linear sufficient feature from the spectral space of Markov operator , in which the planning can be completed efficiently . We rigorously characterize the statistical property of SPEDE in terms of regret w.r.t . the complexity of representation class in Section 4 , without explicit dependence on the raw feature of state and action . With the established unified view , our results generalize online control ( Kakade et al. , 2020 ) and linear MDP ( Jin et al. , 2020 ) beyond known features . We finally demonstrate the superior of SPEDE on the MuJoCo benchmarks in Section 5 . 
It significantly outperforms the empirical stateof-the-art RL algorithms . To our knowledge , SPEDE is the first representation learning algorithm achieving statistical , computational , and memory efficiency with sufficient expressiveness . 1.1 RELATED WORK . There have been many great attempts to learn a variety of algorithmic representation learning in RL designed for different purposes , e.g. , bisimulation ( Ferns et al. , 2004 ; Gelada et al. , 2019 ; Castro , 2020 ) , reconstruction ( Watter et al. , 2015 ; Hafner et al. , 2019 ) . Recently , there are also several works considering the spectral features based on decomposing different variants of the transition operator , including successor features ( Dayan , 1993 ; Kulkarni et al. , 2016 ) , proto-value functions ( Mahadevan & Maggioni , 2007 ; Wu et al. , 2018 ) , spectral state-aggregation ( Duan et al. , 2018 ; Zhang & Wang , 2019 ) , and contrastive fourier features ( Nachum & Yang , 2021 ) . These works are highly-related to the proposed SPEDE . Besides these features focus on state-only representation , the major difference between SPEDE and these spectral features lies in i ) , the target operators in existing spectral features are state-state transition , which cancel the effect of action ; ii ) , the target operators are estimated based on empirical data from a fixed behavior policy under the implicit assumption that the estimated operator is uniformly accurate , ignoring the major difficulty in exploration , while SPEDE carefully designed the systematic exploration with theoretical guarantee ; iii ) , most of the existing spectral features rely on explicitly decomposion of the operators , while SPEDE obtains the spectral for free . Turning to the rigorously-justified representation learning , a large body of effort focuses on policy-cover-based methods in an “ explore-then-commit ” strategy ( Du et al. , 2019 ; Misra et al. , 2020 ; Agarwal et al. , 2020a ; Modi et al. , 2021 ) . 
These algorithms learn a uniformly accurate model/representation through reward-free exploration, which decouples learning from exploration. Then, the representation from the learned model can be used to seek the optimal policy for a particular reward. The major difficulty impeding their practical application is the computation and memory cost: the policy-cover-based exploration requires a set of exploratory policies to be maintained and sampled from during training, which can be extremely expensive. Two other related lines of research are model-based RL and online control, which are commonly viewed as overlapping but separate communities considering different formulations of the dynamics. Our finding establishes an equivalence among these models, and thereby bridges these communities. Osband & Van Roy (2014) and Kakade et al. (2020) are the most related to our work in each community. These models generalize their corresponding linear models, i.e., Jin et al. (2020) and Cohen et al. (2019), to general nonlinear models and kernel functions within a known RKHS, respectively. The regret of the optimistic (pessimistic) algorithm has been carefully characterized for these models. However, both of the proposed algorithms in Osband & Van Roy (2014) and Kakade et al. (2020) require a planning oracle to seek the optimal policy, which might be computationally intractable. In SPEDE, this is easily handled in the equivalent linear MDP. 2 PRELIMINARIES. A Markov Decision Process (MDP) is one of the most standard models studied in reinforcement learning. It can be denoted by the tuple $M = (\mathcal{S}, \mathcal{A}, r, T, \mu, H)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $r: \mathcal{S} \times \mathcal{A} \to \mathbb{R}_+$ is the reward function (where $\mathbb{R}_+$ denotes the set of nonnegative real numbers), $T: \mathcal{S} \times \mathcal{A} \to \Delta(\mathcal{S})$ is the transition, $\mu$ is an initial state distribution, and $H$ is the horizon (i.e., the length of each episode).
A (potentially non-stationary) policy $\pi$ can be defined as $\{\pi_h\}_{h \in [H]}$ where $\pi_h: \mathcal{S} \to \mathcal{A}, \forall h \in [H]$. Following standard notation, we define the value function $V^\pi_h(s) := \mathbb{E}_{T,\pi}\big[\sum_{t=h}^{H-1} r(s_t, a_t) \,\big|\, s_h = s\big]$ and the action-value function (i.e., the Q function) $Q^\pi_h(s, a) := \mathbb{E}_{T,\pi}\big[\sum_{t=h}^{H-1} r(s_t, a_t) \,\big|\, s_h = s, a_h = a\big]$, which are the expected cumulative rewards under transition $T$ when executing policy $\pi$ starting from $s_h$ and $(s_h, a_h)$, respectively. With these two definitions at hand, it is straightforward to show the following Bellman equation: $Q^\pi_h(s_h, a_h) = r(s_h, a_h) + \mathbb{E}_{s_{h+1} \sim T(\cdot|s_h, a_h)} V^\pi_{h+1}(s_{h+1})$. Reinforcement learning aims at finding the optimal policy $\pi^* = \arg\max_\pi \mathbb{E}_{s \sim \mu} V^\pi_0(s)$. It is well known that in the tabular setting, when the state space and action space are finite, we can provably identify the optimal policy with sample- and computationally-efficient optimism-based methods (e.g., Azar et al., 2017; Jin et al., 2018; Zhang et al., 2021) with complexity proportional to $|\mathcal{S}||\mathcal{A}|$. However, in practice, the cardinality of the state and action spaces can be large or even infinite. Hence, we need to incorporate function approximation into the learning algorithm when we deal with such cases. The linear MDP (Jin et al., 2020) or low-rank MDP (Agarwal et al., 2020a; Modi et al., 2021) is the most well-known reinforcement learning model that can incorporate linear function approximation with theoretical guarantees, thanks to the following assumption on the transition: $T(s'|s, a) = \langle \phi(s, a), \mu(s') \rangle_{\mathcal{H}}$, (1) where $\phi: \mathcal{S} \times \mathcal{A} \to \mathcal{H}$ and $\mu: \mathcal{S} \to \mathcal{H}$ are two feature maps and $\mathcal{H}$ is a Hilbert space. The most essential observation is that $Q^\pi_h(s, a)$ for any policy $\pi$ is linear w.r.t. $\phi(s_h, a_h)$, due to the following observation (Jin et al., 2020): $\int V^\pi_{h+1}(s_{h+1}) T(s_{h+1}|s_h, a_h) \, \mathrm{d}s_{h+1} = \big\langle \phi(s_h, a_h), \int V^\pi_{h+1}(s_{h+1}) \mu(s_{h+1}) \, \mathrm{d}s_{h+1} \big\rangle_{\mathcal{H}}$.
(2) Therefore, $\phi$ serves as a sufficient representation for the estimation of $Q^\pi_h$: it can provide uncertainty estimation with standard linear model analysis and eventually leads to sample-efficient learning when $\phi$ is fixed and known to the agent (see Theorem 3.1 in Jin et al., 2020). However, we in general do not have such a representation in advance and need to learn the representation from data, which constrains the applicability of algorithms derived with a fixed and known representation. | This paper studies the problem of learning representations for RL. On the theory side, the paper considers the setting where the state transition is a nonlinear function of the past state-action pair plus additive noise, and develops a no-regret algorithm. On the empirical side, the paper shows that an adaptation of this algorithm on real-world RL tasks could perform better than existing model-based algorithms. | SP:8afa601f36fdecbcca39ca4b514c896cc73893cf |
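The linear MDP factorization above can be checked numerically. The following is a minimal numpy sketch (not from the paper; all sizes and variable names are illustrative): it builds a toy transition kernel of the form T(s'|s,a) = ⟨φ(s,a), μ(s')⟩ and verifies that the one-step Bellman backup E_{s'~T}[V(s')] is linear in the d-dimensional feature φ(s,a).

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, d = 20, 4, 5  # toy numbers of states, actions, latent feature dimensions

# phi(s,a): a distribution over d latent factors (rows on the simplex)
phi = rng.random((S * A, d))
phi /= phi.sum(axis=1, keepdims=True)

# mu: each latent factor emits a distribution over next states
mu = rng.random((d, S))
mu /= mu.sum(axis=1, keepdims=True)

# T(s'|s,a) = <phi(s,a), mu(s')> is then a valid transition kernel (a mixture)
T = phi @ mu                      # shape (S*A, S); each row sums to 1
assert np.allclose(T.sum(axis=1), 1.0)

# For any value function V, the backup E_{s'~T}[V(s')] is linear in phi(s,a):
V = rng.standard_normal(S)        # arbitrary next-step value function
backup = T @ V                    # direct expectation over next states
w = mu @ V                        # w_j = sum_{s'} mu_j(s') V(s')
assert np.allclose(backup, phi @ w)   # linearity in the d-dim feature
```

This mixture construction is just one convenient way to make the factorized kernel a proper probability distribution; it is not how SPEDE recovers the factorization.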
MetaMorph: Learning Universal Controllers with Transformers | Multiple domains like vision, natural language, and audio are witnessing tremendous progress by leveraging Transformers for large scale pre-training followed by task specific fine tuning. In contrast, in robotics we primarily train a single robot for a single task. However, modular robot systems now allow for the flexible combination of general-purpose building blocks into task optimized morphologies. Yet, given the exponentially large number of possible robot morphologies, training a controller for each new design is impractical. In this work, we propose MetaMorph, a Transformer based approach to learn a universal controller over a modular robot design space. MetaMorph is based on the insight that robot morphology is just another modality on which we can condition the output of a Transformer. Through extensive experiments we demonstrate that large scale pre-training on a variety of robot morphologies results in policies with combinatorial generalization capabilities, including zero shot generalization to unseen robot morphologies. We further demonstrate that our pre-trained policy can be used for sample-efficient transfer to completely new robot morphologies and tasks. 1 INTRODUCTION The field of embodied intelligence posits that intelligent behaviours can be rapidly learned by agents whose morphologies are well adapted to their environment (Brooks, 1991; Pfeifer & Scheier, 2001; Bongard, 2014; Gupta et al., 2021). Based on this insight, a robot designer is faced with a predicament: should the robot design be task specific or general? However, the sample inefficiency of tabula rasa deep reinforcement learning and the challenge of designing a single robot which can perform a wide variety of tasks has led to the current dominant paradigm of 'one robot one task'. In stark contrast, domains like vision (Girshick et al.
, 2014; He et al., 2020) and language (Dai & Le, 2015; Radford et al., 2018), which are not plagued by the challenges of physical embodiment, have witnessed tremendous progress especially by leveraging large scale pre-training followed by transfer learning to many tasks through limited task-specific fine-tuning. Moreover, multiple domains are witnessing a confluence, with domain specific architectures being replaced by Transformers (Vaswani et al., 2017), a general-purpose architecture with no domain-specific inductive biases. How can we bring to bear the advances in large-scale pre-training, transfer learning and general-purpose Transformer architectures, to the field of robotics? We believe that modular robot systems provide a natural opportunity by affording the flexibility of combining a small set of general-purpose building blocks into a task-optimized morphology. Indeed, modularity at the level of hardware is a motif which is extensively utilized by evolution in biological systems (Hartwell et al., 1999; Kashtan & Alon, 2005) and by humans in many modern engineered systems. However, prior works (Wang et al., 2018; Chen et al., 2018; Sanchez-Gonzalez et al., 2018) on learning policies that can generalize across different robot morphologies have been limited to: (1) manually constructed variations of a single or few base morphologies, i.e., little diversity in the kinematic structure; (2) low complexity of control (≤ 7 degrees of freedom); (3) using Graph Neural Networks (Scarselli et al., 2008) based on the assumption that the kinematic structure of the robot is the correct inductive bias. In this work, we take a step towards a more challenging setting (Fig. 1) of learning a universal controller for a modular robot design space which has the following properties: (a) generalization to unseen variations in dynamics (e.g.
joint damping, armature, module mass) and kinematics (e.g., degree of freedom, morphology, module shape parameters) and (b) sample-efficient transfer to new morphologies and tasks. We instantiate the exploration of this general setting in the UNIMAL design space introduced by Gupta et al. (2021). We choose the UNIMAL design space as it contains a challenging (15−20 DoFs) distribution of robots that can learn locomotion and mobile manipulation in complex stochastic environments. Learning a single universal controller for a huge variety of robot morphologies is difficult due to: (1) differences in action space, sensory input, morphology, dynamics, etc.; (2) given a modular design space, not all robots are equally adept at learning a task, e.g., some robots might inherently be less sample-efficient (Gupta et al., 2021). To this end, we propose MetaMorph, a method to learn a universal controller for a modular robot design space. MetaMorph is based on the insight that robot morphology is just another modality on which we can condition the output of a Transformer. MetaMorph tackles the challenge of differences in embodiment by leveraging a Transformer based architecture which takes as input a sequence of tokens corresponding to the number of modules in the robot. Each input token is created by combining proprioceptive and morphology information at the level of constituent modules. The combination of proprioceptive and embodiment modalities and large scale joint pre-training leads to policies which exhibit zero-shot generalization to unseen variations in dynamics and kinematics parameters and sample-efficient transfer to new morphologies and tasks.
Finally, to tackle the differences in learning speeds of different robots, we propose dynamic replay buffer balancing to dynamically balance the amount of experience collection for a robot based on its performance. In sum, our key contributions are: (1) we introduce MetaMorph to learn a universal controller for a modular design space consisting of robots with high control complexity for challenging 3D locomotion tasks in stochastic environments; (2) we showcase that our learned policy is able to zero-shot generalize to unseen variations in dynamics, kinematics, new morphologies and tasks, which is particularly useful in real-world settings where controllers need to be robust to hardware failures; (3) we analyze the learned attention mask and discover the emergence of motor synergies (Bernstein, 1966), which partially explains how MetaMorph is able to control a large number of robots. 2 RELATED WORK Prior works on learning control policies which can generalize across robot morphologies have primarily focused on parametric variations of a single (Chen et al., 2018) or few (2−3) robot types (Wang et al., 2018; Sanchez-Gonzalez et al., 2018; Huang et al., 2020; Kurin et al., 2021). For generalizing across parametric variations of a single morphology, various approaches have been proposed like using a learned hardware embedding (Chen et al., 2018), meta-learning for policy adaptation (Al-Shedivat et al., 2017; Ghadirzadeh et al., 2021), kinematics randomization (Exarchos et al., 2020), and dynamics randomization (Peng et al., 2018). In case of multiple different morphologies, one approach to tackle the challenge of differences in action and state spaces is to leverage Graph Neural Networks (Scarselli et al., 2008; Kipf & Welling, 2017; Battaglia et al., 2018). Wang et al. (2018); Huang et al.
(2020) use GNNs to learn joint controllers for planar agents (≤ 7 DoFs). Blake et al. (2021) propose freezing selected parts of networks to enable training GNNs for a single morphology but with higher control complexity. The usage of GNNs is based on the assumption that the robot morphology is a good inductive bias to incorporate into neural controllers, which can be naturally modelled by GNNs. Recently, Kurin et al. (2021) also proposed using Transformers for training planar agents. Our work differs from Kurin et al. (2021) in the diversity and scale of training robots, complexity of the environments, conditioning the Transformer on morphological information, and showcasing strong generalization to unseen morphologies and tasks (see § B.1). Another closely related line of work is the design of modular robot design spaces and developing algorithms for co-optimizing morphology and control (Sims, 1994) within a design space to find task-optimized combinations of controller and robot morphology. When the control complexity is low, evolutionary strategies have been successfully applied to find diverse morphologies in expressive soft robot design spaces (Cheney et al., 2014; 2018). In the case of rigid bodies, Ha (2019); Schaff et al. (2019); Liao et al. (2019) have proposed using RL for finding optimal module parameters of a fixed hand-designed morphology. For more expressive design spaces, GNNs have been leveraged to share controller parameters (Wang et al., 2019) across generations or to develop novel heuristic search methods for efficient exploration of the design space (Zhao et al., 2020). In contrast to task specific morphology optimization, III et al. (2021) propose evolving morphologies without any task or reward specification. Finally, for self reconfigurable modular robots (Fukuda & Nakagawa, 1988; Yim et al.
, 2007), modular control has been utilized in both real (Rubenstein et al., 2014; Mathews et al., 2017) and simulated (Pathak et al., 2019) systems. 3 LEARNING A UNIVERSAL CONTROLLER We begin by reviewing the UNIMAL design space and formulating the problem of learning a universal controller for a modular robot design space as a multi-task reinforcement learning problem. 3.1 THE UNIMAL DESIGN SPACE An agent morphology can be naturally represented as a kinematic tree, or a directed acyclic graph, corresponding to a hierarchy of articulated 3D rigid parts connected via motor actuated hinge joints. The graph $G := (V, E)$ consists of vertices $V = \{v_1, \ldots, v_n\}$ corresponding to modules of the design space, and edges $e_{ij} \in E$ corresponding to joints between $v_i$ and $v_j$. Concretely, in the UNIMAL (Gupta et al., 2021) design space, each node represents a component which can be one of two types: (1) a sphere parameterized by radius and density to represent the head of the agent and form the root of the tree; (2) cylinders parameterized by length, radius, and density to represent the limbs of the robot. Two nodes of the graph can be connected via at most two motor-actuated hinge joints (i.e., $G$ is a multi-graph), parameterized by joint axis, joint limits and a motor gear ratio. 3.2 JOINT POLICY OPTIMIZATION The problem of learning a universal controller for a set of $K$ robots drawn from a modular robot design space is a multi-task RL problem. Specifically, the control problem for each robot is an infinite-horizon discounted Markov decision process (MDP) represented by a tuple $(\mathcal{S}, \mathcal{A}, T, R, H, \gamma)$, where $\mathcal{S}$ represents the set of states, $\mathcal{A}$ represents the set of available actions, $T(s_{t+1}|s_t, a_t)$ represents the transition dynamics, $R(s, a)$ is a reward function, $H$ is the horizon and $\gamma$ is the discount factor.
At each time step, robot $k$ receives an observation $s^k_t$, takes an action $a^k_t$, and is given a reward $r^k_t$. A policy $\pi_\theta(a^k_t|s^k_t)$ models the conditional distribution over action $a^k_t \in \mathcal{A}$ given state $s^k_t \in \mathcal{S}$. The goal is to find policy parameters $\theta$ which maximize the average expected return across all tasks: $R = \frac{1}{K}\sum_{k=1}^{K}\sum_{t=0}^{H} \gamma^t r^k_t$. We use Proximal Policy Optimization (PPO) (Schulman et al., 2017), a popular policy gradient (Williams, 1992) method, for optimizing this objective. 4 METAMORPH Progress in model-free reinforcement learning algorithms has made it possible to train locomotion policies for complex high-dimensional agents from scratch, albeit with tedious hyperparameter tuning. However, this approach is not suitable for modular design spaces containing exponentially many robot morphologies. Indeed, Gupta et al. (2021) estimate that the UNIMAL design space contains more than $10^{18}$ robots. Hence, learning a separate policy for each robot is infeasible. However, the modular nature of the design space implies that while each robot morphology is unique, it is still constructed from the same set of modules and potentially shares subgraphs of the kinematic tree with other morphologies. We describe how MetaMorph exploits this structure to meet the challenge of learning a universal controller for different morphologies. 4.1 FUSING PROPRIOCEPTIVE STATES AND MORPHOLOGY REPRESENTATIONS To learn policies that can generalize across morphologies, we must encode not only proprioceptive states essential for controlling a single robot, but also morphological information. From a multi-task RL perspective, this information can be viewed as a task identifier, where each task corresponds to a different robot morphology, all drawn from the same modular design space.
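The multi-task objective above, the average discounted return over K robots, is easy to compute directly. A minimal numpy sketch (toy rewards and function name are illustrative, not from the paper):

```python
import numpy as np

def average_return(rewards, gamma):
    """Average discounted return over K robots.

    rewards: list of K per-robot reward sequences (one episode each).
    Implements R = (1/K) * sum_k sum_t gamma^t * r_t^k.
    """
    K = len(rewards)
    total = 0.0
    for r in rewards:
        discounts = gamma ** np.arange(len(r))  # [1, gamma, gamma^2, ...]
        total += float(np.dot(discounts, r))
    return total / K

# Two toy robots with 3-step episodes, gamma = 0.5:
# robot 1: 1 + 0.5 + 0.25 = 1.75; robot 2: 0 + 1.0 + 0 = 1.0; mean = 1.375
R = average_return([[1.0, 1.0, 1.0], [0.0, 2.0, 0.0]], gamma=0.5)
assert abs(R - 1.375) < 1e-9
```

In practice PPO maximizes a surrogate of this objective from sampled trajectories rather than evaluating it in closed form.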
Hence, instead of learning a policy which is agnostic to the robot morphology, we need to learn a policy conditioned on the robot morphology. Consequently, at each time step $t$ (we drop the time subscript for brevity), the robot receives an observation $s^k = (s^k_m, s^k_p, s^k_g)$ which is composed of the morphology representation ($s^k_m$), the proprioceptive states ($s^k_p$), and additional global sensory information ($s^k_g$). See § A.1 for a detailed description of each observation type. 4.2 MORPHOLOGY AWARE TRANSFORMER The robot chooses its action via a stochastic policy $\pi_\theta(a^k_t|s^k_t)$ where $\theta$ are the parameters of a pair of deep neural networks: a policy network that produces an action distribution (Fig. 2), and a critic network that predicts discounted future returns. We use Transformers (Vaswani et al., 2017) to parametrize both policy and critic networks as described in detail below. Encode. We make a distinction between how we process local and global state information. Concretely, let $s^k = (s^k_l, s^k_g)$ where $s^k_l = (s^k_m, s^k_p)$. Since the number of joints between two modules can vary, we zero pad $s^k_{l_i}$ to ensure that input observation vectors are of the same size, i.e., $s^k_l \in \mathbb{R}^{N \times M}$. In order to provide an arbitrary robot morphology as input to the Transformer, we first create a 1D sequence of local observation vectors by traversing the kinematic tree in depth first order starting at the root node (the torso in case of the UNIMAL design space). We then apply a single layer MLP independently to each $s^k_{l_i}$ to create a $D$ dimensional module embedding. We also add learnable 1D position embeddings to the module embeddings to automatically learn positional information: $m_0 = [\phi(s^k_{l_1}; W_e); \cdots; \phi(s^k_{l_N}; W_e)] + W_{pos}$, (1) where $\phi(\cdot)$ is the embedding function, $W_e \in \mathbb{R}^{M \times D}$ are the embedding weights, and $W_{pos} \in \mathbb{R}^{N \times D}$ are the learned positional embeddings.
Note that in practice we zero pad the input sequence of local observation vectors for efficient batch processing of multiple morphologies. Process. From the module embeddings described above, we obtain the output feature vectors as: $m'_\ell = \mathrm{MSA}(\mathrm{LN}(m_{\ell-1})) + m_{\ell-1}$, $\ell = 1 \ldots L$ (2); $m_\ell = \mathrm{MLP}(\mathrm{LN}(m'_\ell)) + m'_\ell$, $\ell = 1 \ldots L$ (3), where MSA is multi-head self-attention (Vaswani et al., 2017) and LN is Layernorm (Lei Ba et al., 2016). Decode. We integrate the global state information $s^k_g$ consisting of high-dimensional sensory input from camera or depth sensors. Naively concatenating $s^k_g$ and $s^k_l$ in the encoder has two downsides: (1) it dilutes the importance of low-dimensional local sensory and morphological information; (2) it increases the number of Transformer parameters due to an increase in the dimensionality of the input embedding ($D$). Instead, we obtain the outputs of the policy network as follows: $g = \gamma(s^k_g; W_g)$, $\mu(s^k_i) = \phi(m_{L_i}, g; W_d)$, $\pi_\theta(a^k|s^k) = \mathcal{N}(\{\mu(s^k_i)\}_{i}^{N}, \Sigma)$, (4) where $\gamma(\cdot)$ is a 2-layer MLP with parameters $W_g$, and $\phi(\cdot)$ is an embedding function with $W_d$ as the embedding weights. The action distribution is modeled as a Gaussian distribution with a state-dependent mean $\mu(s^k_i)$ and a fixed diagonal covariance matrix $\Sigma$. Similarly, for the critic network, we estimate the value for the whole morphology by averaging the value per limb. 4.3 DYNAMIC REPLAY BUFFER BALANCING Joint training of diverse morphologies is challenging as different morphologies are adept at performing different tasks. Consequently, some morphologies might be inherently better suited for the pre-training task. Let us consider two robots: (A) Robot A locomotes in a falling forward manner, i.e., robot A is not passively stable; (B) Robot B is passively stable.
Especially with early termination, robot A will keep falling during the initial phases of training, which results in shorter episode lengths, whereas robot B will have longer episode lengths. Hence, more data will be collected for robot B in the earlier phases of training, which in turn will lead to robot B learning even faster, thereby resulting in a 'rich gets richer' training dynamic. However, our goal is to ensure that all morphologies have a similar level of performance at the end of training, as we want to generalize across the entire distribution of morphologies. To address this issue, we propose a simple dynamic replay buffer balancing scheme. On-policy algorithms like PPO (Schulman et al., 2017) proceed in iterations that alternate between experience collection and policy parameter updates. Let $E_k$ be any performance metric of choice, e.g., normalized reward, episode length, success ratio, etc. Let $\tau$ be the training iteration number. At each iteration, we sample the $k$th robot with probability $P_k$, given by: $P_k = \frac{E_k^\beta}{\sum_i E_i^\beta}$, $E^\tau_k = \alpha E^\tau_k + (1-\alpha) E^{\tau-1}_k$, (5) where $\alpha \in [0, 1]$ is the discount factor and the exponent $\beta$ determines the degree of dynamic prioritization, with $\beta = 0$ corresponding to the uniform case. In practice, we use episode length as our performance metric.
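The sampling rule of eq. (5) is a softmax-free prioritization and can be sketched in a few lines. A minimal numpy illustration (function names and toy numbers are ours, not the paper's; the metric is episode length inverted against the 1000-step maximum so that struggling robots are sampled more):

```python
import numpy as np

def sampling_probs(E, beta):
    """P_k proportional to E_k^beta (beta = 0 recovers uniform sampling)."""
    w = np.asarray(E, dtype=float) ** beta
    return w / w.sum()

def ema_metric(E_prev, E_new, alpha):
    """Exponentially averaged per-robot performance metric across iterations."""
    return alpha * np.asarray(E_new, float) + (1 - alpha) * np.asarray(E_prev, float)

# Episode lengths for 3 toy robots; invert so SHORT episodes get sampled MORE.
ep_len = np.array([200.0, 500.0, 1000.0])
E = 1000.0 / ep_len                       # metric used to set priorities
P = sampling_probs(E, beta=1.0)
assert P[0] > P[1] > P[2]                 # struggling robots get more experience
assert abs(P.sum() - 1.0) < 1e-9
assert np.allclose(sampling_probs(E, beta=0.0), 1 / 3)  # beta=0 -> uniform
```

At each PPO iteration one would update the metric with `ema_metric` and draw robots for experience collection with probabilities `P`.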
We determine $P_k$ by replacing $E_i$ with $1000/E_i$, where the numerator is the maximum episode length. 5 EXPERIMENTS In this section, we evaluate our method MetaMorph in different environments, perform extensive ablation studies of different design choices, test zero-shot generalization to variations in dynamics and kinematics parameters, and demonstrate sample efficient transfer to new morphologies and tasks. For qualitative results, please refer to the video on our project website (https://metamorph-iclr.github.io/site/). 5.1 EXPERIMENTAL SETUP We create a training set of 100 robots from the UNIMAL design space (Gupta et al., 2021) (see § A.2). We evaluate MetaMorph on three different environments (Fig. 3) using the MuJoCo simulator (Todorov et al., 2012). In all 3 environments the goal of the agent is to maximize forward displacement over the course of an episode which lasts 1000 timesteps. The 3 environments are: (1) Flat terrain (FT); (2) Variable terrain (VT): VT is an extremely challenging environment as during each episode a new terrain is generated by randomly sampling a sequence of terrain types and interleaving them with flat terrain. We consider 3 types of terrains: hills, steps, and rubble; (3) Obstacles: cuboid shaped obstacles of varying sizes on flat terrain. We use a dense morphology independent reward function for all our tasks, as it is not feasible to design a reward function tailored to each morphology. In all tasks, our reward function promotes forward movement using small joint torques (the latter obtained via a small energy usage penalty). In addition, as described in § 4.3, we use early termination across all environments when we detect a fall (i.e.
if the torso height drops below 50% (FT, Obstacles) or 30% (VT) of its initial height). 5.2 BASELINES AND ABLATIONS Baselines: We compare against the following baselines: (1) GNN: We modify the NerveNet model proposed by Wang et al. (2018) to learn control policies for complex 3D morphologies with a variable number of joints. Specifically, we replace how the NerveNet model receives input and produces output by our encode and decode steps respectively (§ 4.2). In addition, the model receives the same observation as MetaMorph, i.e., $s^k = (s^k_m, s^k_p, s^k_g)$, and is trained with dynamic replay buffer sampling. Thus, the only difference is in the process step. This helps test if the domain-specific inductive bias of the robot kinematic tree in the GNN is necessary. (2) MLP: We train all 100 robots separately with a 2-layer MLP and report the average performance. This baseline serves as an approximate upper bound for our method. Ablations: We also do an ablation study of different design choices involved in our method. We refer to our full method as MetaMorph and consider the following ablations: (1) MetaMorph-NPE: no learned position embeddings; (2) MetaMorph-NM: we only provide $s^k = (s^k_p, s^k_g)$ as inputs, i.e., the model does not have access to information about the robot morphology. Fig. 4 shows learning curves across 3 environments for training 100 morphologies. In all environments MetaMorph can successfully match the average reward achieved via the per-morphology MLP baseline on both the FT and obstacle environments. While MetaMorph performance in VT is slightly below the MLP baseline, we note that it has not saturated and we stopped training at $10^8$ iterations across all three environments. Moreover, MetaMorph is significantly more sample-efficient ($5\times$) than training independent MLPs ($5 \times 10^6$ iterations per robot).
The GNN baseline saturates at a level 2 to 3 times below MetaMorph. In GNN based models, locality and neighborhood connectivity is explicitly baked into the model. Interestingly, just like ViT (Dosovitskiy et al., 2021) sparingly utilizes the 2D neighborhood structure of images at the beginning of the model by cutting the image into patches, MetaMorph uses the graph structure of the robot in the beginning by creating a 1D sequence corresponding to the kinematic tree by traversing the graph in depth first order. Moreover, the position embeddings carry no information about the graph structure and are learned from scratch. We highlight that the learned position embeddings significantly improve the performance of MetaMorph, just as they do in Transformer based image classification. Finally, without access to the morphological information, MetaMorph-NM fails to learn a policy that can control diverse robot morphologies. All of this substantiates our central claim that morphological state information is necessary to learn successful control policies, although the kinematic graph need not be explicitly baked into neural architectures to learn policies capable of controlling diverse robot morphologies. Finally, we test the importance of dynamic replay buffer balancing in Fig. 5, and find that balancing is necessary to learn a good control policy in 10−15% of robots across all 3 environments. 5.3 ZERO-SHOT GENERALIZATION Our focus in this work is to learn policies that can generalize to unseen robots drawn from a modular robot design space. In this section, we demonstrate that MetaMorph shows favorable generalization properties across many different kinematic and dynamic variations. Experimental Setup.
For each of the 100 training robots, we create a dedicated test set to test zero-shot transfer performance across two types of variations: dynamics (armature, density, damping, and motor gear) and kinematics (module shape parameters like radius and length of cylindrical limbs, and joint angles). For each training robot, we randomly create 4 different variants for each property, i.e., 400 robots with armature variations, and so on. While creating a new variant, we change the relevant property of all modules or joints. See Table 2 for sampling ranges. We then compare zero-shot performance averaged over 10 trials. Figure 8: Fine tuning: New robot morphologies and tasks. Left: Test environments. Right: Comparison of reward progression of 100 test robots averaged over 3 runs for pre-trained MetaMorph (VT → Escape, Obstacles → Obstacles (cylinders)) vs from scratch. Shaded regions denote standard deviation. Across all environments pre-training leads to strong zero-shot performance and 2−3× savings in training iterations to achieve the same level of average reward. Generalization: Dynamics. First, we consider generalization to different dynamics (Fig. 6). We find consistently that MetaMorph performs significantly better than MetaMorph-NM and GNN across all types of dynamic variations and all environments.
In fact, the difference is more pronounced for harder tasks like VT and Obstacles. We note that this result is surprising as we do not do dynamics randomization during training, i.e., all robots in the training set have the same armature and damping parameters. Despite this, we see strong generalization performance. Generalization: Kinematics. Next, we consider generalization to different kinematics parameters (Fig. 6). This is a significantly more challenging setting as the model has to generalize to unseen variations in module shape parameters and changes to joint angle ranges. In fact, changes to joint angles can significantly alter the range of possible motion and might necessitate a different gait for successful locomotion. Consequently, we find that even though MetaMorph exhibits strong generalization performance compared to MetaMorph-NM and GNN in all 3 environments, there is indeed a performance drop for the challenging setting of variations in joint angles. However, the zero-shot performance is encouraging and motivates our next set of experiments on transfer learning. 5.4 SAMPLE EFFICIENT TRANSFER LEARNING Experimental Setup. Here we create 100 robots from the UNIMAL design space which were not part of the training set, and we demonstrate that MetaMorph shows favorable sample efficient transfer to unseen morphologies, and even unseen morphologies performing novel tasks. Different morphologies. We first consider sample efficient learning of controllers for new morphologies on the same task by taking a pre-trained MetaMorph model on an environment, and then fine tuning it to learn to control morphologies in the test set. In Fig. 7 we compare the number of training iterations required to achieve a particular performance level when we fine tune MetaMorph vs training from scratch on the test set.
Across all 3 environments we not only find strong zero-shot performance, but also 2 to 3 times higher sample efficiency compared to training from scratch.
Different morphologies and novel tasks. Finally, we consider the most realistic and general setting mimicking the promise of modular robots, where we are faced with a novel task and want to use a new robot morphology which may be suited for this task. We consider the same set of 100 test robots on two new tasks (Fig. 8): (1) Escape: the agent starts at the center of a bowl-shaped terrain surrounded by small hills (bumps), and has to maximize the geodesic distance from the start location (escape the hilly region). (2) Obstacles (cylinders): cylinder-shaped obstacles of varying sizes (the size distribution also differs from the train task, which uses cuboid shapes). We transfer the learned policy from VT and Obstacles to Escape and Obstacles (cylinders) respectively. Again, we find strong zero-shot performance across all 3 environments, and fine-tuning is 2 to 3 times more sample efficient than training from scratch.
5.5 EMERGENT MOTOR SYNERGIES
We next search for a potential explanation for how MetaMorph can coordinate the large number of DoFs (∼16 × 100) across several agents. Our hypothesis is inspired by Bernstein (1966), who proposed the existence of muscle synergies as a neural strategy to simplify the control of multiple DoFs by the central nervous system. A muscle synergy corresponds to the constrained movement of sets of joints (or muscles) through co-activation by a single neural signal.
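A standard dimensionality-reduction measure for detecting such coupled control is the stable rank of a matrix, sr(A) = ‖A‖_F² / ‖A‖₂², which is used as a synergy signature in the analysis that follows. A minimal numpy sketch (illustrative, not the paper's code):

```python
import numpy as np

def stable_rank(A: np.ndarray) -> float:
    """Stable rank sr(A) = ||A||_F^2 / ||A||_2^2 = sum(sigma_i^2) / sigma_max^2."""
    s = np.linalg.svd(A, compute_uv=False)  # singular values, in descending order
    return float(np.sum(s ** 2) / s[0] ** 2)

# A rank-1 matrix has stable rank 1: every column is a multiple of a single
# vector, the extreme case of one "signal" driving all outputs.
A = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 4.0))
print(round(stable_rank(A), 6))  # -> 1.0
```

A low, slowly varying stable rank of the attention matrix thus indicates that attention mass is concentrated on a few shared column patterns, i.e., groups of limbs attended to jointly.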
Such synergies obviate the need to control all joints independently, by coupling sets of joints into adaptive functional groups. Although definitions of synergies vary in the literature (Bruton & O'Dwyer, 2018), dimensionality reduction is generally accepted as a signature of synergistic control (Todorov & Ghahramani, 2004; Todorov, 2004). Consequently, to test this hypothesis, in Fig. 9 we plot the stable rank of the attention matrix. For an attention matrix A_l ∈ R^{m×m} of layer l, the stable rank is defined as sr(A_l) = ‖A_l‖_F^2 / ‖A_l‖_2^2 = Σ_i σ_i^2 / σ_max^2, where σ_i are the singular values of A_l. We note that sr(A_l) is small and oscillates between two values which correspond to attention maps where groups of limbs are activated simultaneously (denoted by dark columns), a characteristic signature of motor synergies. Hence, MetaMorph simplifies the control problem by learning to activate different motor synergies depending on both s^k_m and s^k_p.
6 CONCLUSION
In this work, we explored how we can learn a universal controller for a large modular robot design space. To this end, we proposed MetaMorph, a Transformer approach based on the insight that robot morphology is just another modality on which we can condition the output of a Transformer. We showcased that pre-training on a large collection of diverse morphologies leads to policies which can generalize to unseen variations in kinematics, dynamics, new morphologies, and tasks. We hope that our work serves as a step towards realizing the potential of large-scale pre-training and fine-tuning in the field of robotics, a paradigm that has seen tremendous success in vision and language.
REFERENCES
Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor Mordatch, and Pieter Abbeel. Continuous adaptation via meta-learning in nonstationary and competitive environments.
arXiv preprint arXiv:1710.03641, 2017.
Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
Nikolai Bernstein. The co-ordination and regulation of movements. 1966.
Charlie Blake, Vitaly Kurin, Maximilian Igl, and Shimon Whiteson. Snowflake: Scaling GNNs to high-dimensional continuous control via parameter freezing. arXiv preprint arXiv:2103.01009, 2021.
Josh Bongard. Why morphology matters. The horizons of evolutionary robotics, 6:125–152, 2014.
Rodney A Brooks. New approaches to robotics. Science, 253(5025):1227–1232, 1991.
Michaela Bruton and Nicholas O'Dwyer. Synergies in coordination: A comprehensive overview of neural, computational, and behavioral approaches. Journal of Neurophysiology, 120(6):2761–2774, 2018.
Tao Chen, Adithyavairavan Murali, and Abhinav Gupta. Hardware conditioned policies for multi-robot transfer learning. In NIPS, 2018.
Nick Cheney, Robert MacCurdy, Jeff Clune, and Hod Lipson. Unshackling evolution: Evolving soft robots with multiple materials and a powerful generative encoding. SIGEVOlution, 7(1):11–23, August 2014. doi: 10.1145/2661735.2661737. URL https://doi.org/10.1145/2661735.2661737.
Nick Cheney, Josh Bongard, Vytas SunSpiral, and Hod Lipson. Scalable co-optimization of morphology and control in embodied machines. Journal of The Royal Society Interface, 15(143):20170937, 2018.
Andrew M Dai and Quoc V Le. Semi-supervised sequence learning.
Advances in Neural Information Processing Systems, 28:3079–3087, 2015.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
Ioannis Exarchos, Yifeng Jiang, Wenhao Yu, and C Karen Liu. Policy transfer via kinematic domain randomization and adaptation. arXiv preprint arXiv:2011.01891, 2020.
Toshio Fukuda and Seiya Nakagawa. Dynamically reconfigurable robotic system. In ICRA, pp. 1581–1586. IEEE, 1988.
Ali Ghadirzadeh, Xi Chen, Petra Poklukar, Chelsea Finn, Mårten Björkman, and Danica Kragic. Bayesian meta-learning for few-shot policy adaptation across robotic platforms. arXiv preprint arXiv:2103.03697, 2021.
Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587, 2014.
Agrim Gupta, Silvio Savarese, Surya Ganguli, and Li Fei-Fei. Embodied intelligence via learning and evolution. Nature Communications, 12(1):5721, 2021.
David Ha. Reinforcement learning for improving agent design. Artificial Life, 25(4):352–365, 2019.
Leland H Hartwell, John J Hopfield, Stanislas Leibler, and Andrew W Murray. From molecular to modular cell biology. Nature, 402(6761):C47–C52, 1999.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, pp. 9729–9738, 2020.
Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters.
In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
Wenlong Huang, Igor Mordatch, and Deepak Pathak. One policy to control them all: Shared modular policies for agent-agnostic control. In ICML, pp. 4455–4464. PMLR, 2020.
Donald Joseph Hejna III, Pieter Abbeel, and Lerrel Pinto. Task-agnostic morphology evolution. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=CGQ6ENUMX6.
Nadav Kashtan and Uri Alon. Spontaneous evolution of modularity and network motifs. Proceedings of the National Academy of Sciences, 102(39):13773–13778, 2005.
Thomas Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2017.
Ashish Kumar, Zipeng Fu, Deepak Pathak, and Jitendra Malik. RMA: Rapid motor adaptation for legged robots. arXiv preprint arXiv:2107.04034, 2021.
Vitaly Kurin, Maximilian Igl, Tim Rocktäschel, Wendelin Boehmer, and Shimon Whiteson. My body is a cage: the role of morphology in graph-based incompatible control. In ICLR, 2021.
Joonho Lee, Jemin Hwangbo, Lorenz Wellhausen, Vladlen Koltun, and Marco Hutter. Learning quadrupedal locomotion over challenging terrain. Science Robotics, 5(47), 2020.
Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
T. Liao, G. Wang, B. Yang, R. Lee, K. Pister, S. Levine, and R. Calandra. Data-efficient learning of morphology and controller for a microrobot. In 2019 International Conference on Robotics and Automation (ICRA), pp. 2488–2494, 2019. doi: 10.1109/ICRA.2019.8793802.
Ajay Mandlekar, Danfei Xu, Josiah Wong, Soroush Nasiriany, Chen Wang, Rohun Kulkarni, Li Fei-Fei, Silvio Savarese, Yuke Zhu, and Roberto Martín-Martín.
What matters in learning from offline human demonstrations for robot manipulation. In 5th Annual Conference on Robot Learning, 2021. URL https://openreview.net/forum?id=JrsfBJtDFdI.
Nithin Mathews, Anders Lyhne Christensen, Rehan O'Grady, Francesco Mondada, and Marco Dorigo. Mergeable nervous systems for robots. Nature Communications, 8(1):1–7, 2017.
Deepak Pathak, Christopher Lu, Trevor Darrell, Phillip Isola, and Alexei A Efros. Learning to control self-assembling morphologies: A study of generalization via modularity. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 3803–3810. IEEE, 2018.
Rolf Pfeifer and Christian Scheier. Understanding intelligence. MIT Press, 2001.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
Michael Rubenstein, Alejandro Cornejo, and Radhika Nagpal. Programmable self-assembly in a thousand-robot swarm. Science, 345(6198):795–799, 2014.
Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin Riedmiller, Raia Hadsell, and Peter Battaglia. Graph networks as learnable physics engines for inference and control. In ICML, pp. 4470–4479. PMLR, 2018.
Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.
Charles Schaff, David Yunis, Ayan Chakrabarti, and Matthew R Walter.
Jointly learning to construct and control agents using deep reinforcement learning. In ICRA, pp. 9798–9805. IEEE, 2019.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Karl Sims. Evolving 3D morphology and behavior by competition. Artificial Life, 1(4):353–372, 1994.
Emanuel Todorov. Optimality principles in sensorimotor control. Nature Neuroscience, 7(9):907–915, 2004.
Emanuel Todorov and Zoubin Ghahramani. Analysis of the synergies underlying complex hand manipulation. In The 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, volume 2, pp. 4637–4640. IEEE, 2004.
Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017.
Tingwu Wang, Renjie Liao, Jimmy Ba, and Sanja Fidler. NerveNet: Learning structured policy with graph neural networks. In ICLR, 2018.
Tingwu Wang, Yuhao Zhou, Sanja Fidler, and Jimmy Ba. Neural graph evolution: Automatic robot design. In ICLR, 2019.
Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
Mark Yim, Wei-Min Shen, Behnam Salemi, Daniela Rus, Mark Moll, Hod Lipson, Eric Klavins, and Gregory S Chirikjian.
Modular self-reconfigurable robot systems [grand challenges of robotics]. IEEE Robotics & Automation Magazine, 14(1):43–52, 2007.
Allan Zhao, Jie Xu, Mina Konaković-Luković, Josephine Hughes, Andrew Spielberg, Daniela Rus, and Wojciech Matusik. RoboGrammar: graph grammar for terrain-optimized robot design. ACM Transactions on Graphics (TOG), 39(6):1–16, 2020.
A IMPLEMENTATION DETAILS
In this section, we provide additional implementation details for our method.
A.1 INPUT OBSERVATIONS
In our setup, at each time step t (we drop the time subscript for brevity) the robot receives an observation s^k = (s^k_m, s^k_p, s^k_g), which is composed of the morphology representation (s^k_m), the proprioceptive states (s^k_p), and additional global sensory information (s^k_g). We note that prior work (Kurin et al., 2021; Huang et al., 2020) has defined morphology as the connectivity of the limbs. However, connectivity is only one aspect of morphology, and in general a substantial amount of additional information may be required to adequately describe the morphology. Consider a modular robot composed of n ∈ {1, ..., N^k} modules. For each module, s^k_mi consists of: (1) Module parameters: module shape parameters (e.g. radius, height), material information (e.g. density), and the local geometric orientation and position of the child module with respect to the parent module. (2) Joint parameters: information about the joint type (e.g. hinge) and its properties (e.g. joint range and axis), and the actuator type (e.g. motor) and its properties (e.g. gear). All this information can be found, for example, in the Universal Robot Description Format (URDF) or in simulator-specific kinematic trees (e.g. MuJoCo XML (Todorov et al.
, 2012)).
Similarly, for each module, s^k_pi consists of the instantaneous state of the system: (1) Module proprioception: 3D Cartesian position, 4D quaternion orientation, 3D linear velocity, and 3D angular velocity. (2) Joint proprioception: position and velocity in the generalized coordinates. Except for the root node, each module is connected to its parent module via a set of joints. We adopt the convention of providing joint parameters and proprioception information to the child node. We note that the importance of proprioceptive observations has also been observed in learning good locomotion (Lee et al., 2020; Kumar et al., 2021) and manipulation policies (Mandlekar et al., 2021).
In general, additional global sensory information (s^k_g) can be camera or depth sensor images. To save computation, we provide the information about the terrain as a 2D heightmap sampled on a non-uniform grid. The grid is created by decreasing the sampling density as the distance from the root of the body increases. All heights are expressed relative to the height of the ground immediately under the root of the agent. The sampling points range from 1m behind the agent to 4m ahead of it along the direction of motion, as well as 4m to the left and right.
A.2 ENVIRONMENTS
All our training and testing environments are implemented in MuJoCo (Todorov et al., 2012). For a detailed description of the environments and reward functions please refer to Gupta et al. (2021). Gupta et al. (2021) evolved UNIMALs in 3 different environments: flat terrain, variable terrain, and manipulation in variable terrain. For each environment, at the end of evolution they had 100 task-optimized robots. Out of these 300, we choose a subset of 100 UNIMALs as our train set. We ensure that no robot has the exact same kinematic tree or kinematic parameters.
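The per-module observation layout described in § A.1 can be sketched roughly as follows. This is an illustrative mock-up, not the paper's implementation: the field contents, dimensions, and the zero-padding-plus-mask scheme for handling robots with different module counts are assumptions.

```python
import numpy as np

def module_token(module_params, joint_params, proprio):
    """Concatenate morphology (s_m) and proprioceptive (s_p) features for one module."""
    return np.concatenate([module_params, joint_params, proprio])

def robot_observation(modules, max_modules=12):
    """Build a fixed-length, zero-padded token sequence plus a padding mask, so that
    robots with different numbers of modules share one Transformer input shape."""
    tokens = [module_token(*m) for m in modules]
    seq = np.zeros((max_modules, len(tokens[0])))
    mask = np.zeros(max_modules, dtype=bool)  # True = real module, False = padding
    for i, t in enumerate(tokens):
        seq[i] = t
        mask[i] = True
    return seq, mask

# Hypothetical 2-module robot: (shape/material params, joint range + gear, proprioception).
modules = [
    (np.array([0.05, 0.3, 1.0]), np.array([-0.5, 0.5, 150.0]), np.array([0.0, 0.0, 0.4])),
    (np.array([0.04, 0.2, 1.0]), np.array([-0.8, 0.8, 100.0]), np.array([0.1, 0.0, 0.3])),
]
seq, mask = robot_observation(modules)
print(seq.shape, int(mask.sum()))  # -> (12, 9) 2
```

The global sensory vector s^k_g (e.g. the terrain heightmap) would be encoded separately and combined with these per-module tokens downstream.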
Similarly, we created a test set of 100 UNIMALs. Finally, for zero-shot evaluation experiments we created the variants as described in § 5.3. For creating kinematic variations for joint angles, we randomly selected a joint range for each joint from Table 2 which had at least 50% overlap with the original joint angle range. This helped prevent robot variants which could not be trained.
A.3 TRAINING HYPERPARAMETERS
We use Transformers (Vaswani et al., 2017) to parametrize both policy and critic networks. Global sensory information is encoded using a 2-layer MLP with hidden dimensions [64, 64]. We use Proximal Policy Optimization (Schulman et al., 2017) for joint training of agents in all environments. All hyperparameters for the Transformer and PPO are listed in Table 1. In addition, we use dynamic replay buffer sampling with α = 0.1 and β = 1.0. For all our ablations and baselines, we use the same hyperparameters. For the MLP baseline we use a 2-layer MLP with hidden dimensions [64, 64], and for the GNN baseline we use a 5-layer MLP with hidden dimension 512. The details of the joint training of modular robots are shown in Algorithm 1. We ensure that all baselines and ablations have approximately the same number of parameters (∼3.3 million).
A.4 EVALUATION METHODOLOGY
In general, the performance of RL algorithms is known to be strongly dependent on the choice of seed for random number generators (Henderson et al., 2018). Hence, we run all our baseline and ablation experiments for 3 random seeds. We found that the training curves were robust to the choice of seeds, as indicated by the small standard deviation (Fig. 4, 10). Consequently, we evaluate the zero-shot performance using the pre-trained model corresponding to a single seed (Fig. 6). For our transfer learning experiments (Fig.
7, 8), we fine-tune the model with the random seed corresponding to the one used for pre-training (for all 3 seeds).
B ADDITIONAL EXPERIMENTS
B.1 BASELINES AND ABLATIONS
Baselines: We compare against the following baselines:
(1) MetaMorph-AO: A direct comparison to Amorpheus (Kurin et al., 2021) is not feasible for the following reasons: (1) it does not work with 3D morphologies with a variable number of joints between limbs; (2) it does not incorporate exteroceptive observations; (3) Amorpheus ensures a balanced collection of experience by sequentially collecting data on each morphology and maintaining a separate replay buffer per morphology. Further, updates to policy parameters are also performed sequentially. This sequential nature of the algorithm is not amenable to scaling, and would require ∼30 GPU days to train for 100 million iterations on an Nvidia RTX 2080, while our method only needs 1.5 GPU days (∼20× training speedup).
Hence, we compare with MetaMorph-AO (Amorpheus Observations) as the closest variant to Amorpheus, where we provide the same input observations as described in Kurin et al. (2021). Specifically, we provide the following input observations: a one-hot encoding of a unique limb ID, 3D Cartesian position, 4D quaternion orientation, 3D linear velocity, 3D angular velocity, position in generalized coordinates, and normalized joint angle ranges. Note that although both MetaMorph and Amorpheus don't explicitly incorporate the graph structure as input to the policy, our input observations include additional morphology information (see § B.1). In contrast, except for joint angle ranges, Amorpheus does not incorporate morphology information.
(2) Multi-Task MLP: We train a 6-layer MLP with a hidden dimension of 512.
The MLP receives the same observations as MetaMorph, and has the same number of parameters.
Ablations: We perform additional ablation studies to understand the importance of learnt position embeddings and morphology information in the input observation. We refer to our full method as MetaMorph and consider the following additional ablations: (1) MetaMorph-NMT (No Morphology information + Task encoding): we only provide s^k = (s^k_p, s^k_g) as inputs, i.e. the model does not have access to information about the robot morphology. In addition, we provide a context token with a binary encoding to identify the morphology; (2) MetaMorph-HPE (Hand-designed Position Encoding): we provide s^k = (s^k_m, s^k_p, s^k_g) as inputs, with s^k_m containing a per-limb unique ID.
Fig. 10 shows the learning curves for joint training in the flat terrain environment for 100 training morphologies. The multi-task MLP baseline struggles to learn a good policy, which we attribute to the difficulty of the task due to the rich diversity of morphologies. We next investigate the role of learnt position embeddings and morphological information. MetaMorph-HPE performs slightly better than MetaMorph-NPE, indicating that adding unique limb IDs improves performance. However, there is still a large gap between MetaMorph and MetaMorph-HPE, suggesting that learnt position embeddings capture additional information. Although MetaMorph-AO performs better than MetaMorph-NM due to the addition of unique limb IDs and joint angle ranges, it is still significantly worse than MetaMorph as it does not incorporate morphological information. Finally, MetaMorph-NMT also performs poorly due to the lack of morphological information.
B.2 POSITION EMBEDDING
To better understand the information captured by position embeddings, we visualize their cosine similarity.
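This diagnostic can be sketched as follows; the embedding table here is a random stand-in for the learned position embeddings, so only the computation, not the data, reflects the paper's analysis.

```python
import numpy as np

def cosine_similarity_matrix(E: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between the rows of an embedding table E
    of shape (n_positions, embedding_dim)."""
    En = E / np.linalg.norm(E, axis=1, keepdims=True)  # L2-normalize each row
    return En @ En.T

rng = np.random.default_rng(0)
E = rng.normal(size=(12, 64))  # stand-in for 12 learned position embeddings
S = cosine_similarity_matrix(E)

# The diagonal is exactly 1 (self-similarity); a strong diagonal band with low
# off-diagonal values would indicate near-unique per-position codes.
print(np.allclose(np.diag(S), 1.0))  # -> True
```

Plotting S as a heatmap (e.g. with matplotlib's imshow) reproduces the kind of visualization referenced in Fig. 11.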
We found that across all seeds and in all environments (we only show flat terrain for brevity) a diagonal pattern emerged (Fig. 11). Although this might suggest that learnt position embeddings capture a unique position identifier, we found that manually specifying a unique limb identifier performed much worse (see MetaMorph-HPE in Fig. 10), indicating that these embeddings capture additional useful information. We believe that further investigation is needed to better understand the information captured by position embeddings and defer it to future work.
B.3 KINEMATIC TREE TRAVERSAL ORDER
We first process an arbitrary robot by creating a 1D sequence of tokens corresponding to the depth-first traversal of its kinematic tree. We note that the DFS ordering is not unique, and for nodes with the same depth, we adopted the convention of visiting the nodes which come first in the MuJoCo XML. This convention is also adopted by MuJoCo when parsing the XML. We found that zero-shot transfer of the learnt policy was not robust to an opposite ordering of nodes, and performance dropped by ∼75%. However, we found that the policy could be made robust to variations in the DFS ordering by training the model with a simple data augmentation strategy of randomizing the visitation order of nodes at the same depth.
C LIMITATIONS AND FUTURE WORK
In this work, we explored how we can learn a universal controller for a large modular robot design space. We make a key assumption that we already have access to a large collection of robots which were optimized for the task of locomotion. Hence, we currently require a two-stage pipeline where we first create robots and then pre-train them jointly. An important direction of future work would be to combine these two phases in an efficient manner.
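The depth-first traversal with randomized sibling order described in § B.3 can be sketched as follows. The adjacency-dict tree format is an illustrative assumption, not MuJoCo's parser; only the visitation logic mirrors the augmentation strategy.

```python
import random

def dfs_order(tree, root, rng=None):
    """Depth-first traversal of a kinematic tree given as {parent: [children]}.
    If rng is provided, siblings at the same depth are visited in random order
    (the data augmentation from Section B.3); otherwise they are visited in
    declaration (XML) order."""
    order = []
    def visit(node):
        order.append(node)
        children = list(tree.get(node, []))
        if rng is not None:
            rng.shuffle(children)
        for c in children:
            visit(c)
    visit(root)
    return order

# Hypothetical kinematic tree: a torso with two limbs, each limb with one child link.
tree = {"torso": ["limb_a", "limb_b"], "limb_a": ["foot_a"], "limb_b": ["foot_b"]}
print(dfs_order(tree, "torso"))  # -> ['torso', 'limb_a', 'foot_a', 'limb_b', 'foot_b']
print(dfs_order(tree, "torso", rng=random.Random(0)))  # randomized sibling order
```

Note that both orderings keep every parent before its children, so each traversal remains a valid token sequence for the same robot.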
Moreover, although we found that MetaMorph has strong zero-shot generalization to unseen variations in kinematics and dynamics, there is indeed a performance drop on zero-shot generalization to new morphologies. Hence, an important avenue of exploration is creating algorithms for sample-efficient transfer learning. Finally, our current suite of tasks focuses primarily on locomotion skills. An important line of future work will involve designing general-purpose controllers which can perform multiple skills.
Algorithm 1 MetaMorph: Joint Training of Modular Robots
1: Input: π_θ: policy function; V_φ: value function; R: replay buffer; K: training pool of robots; P_k: probability of sampling robot k
2: Initialize: learnable parameters θ for π_θ and φ for V_φ; P uniform distribution
3: for i = 1, 2, ..., N_iter do
4:   # Collect robot experience in parallel
5:   for j = 1, 2, ..., N_workers do
6:     Sample a robot k ∈ K according to Equation 5
7:     # Collect one episode of experience
8:     τ_j ≡ {s_t, a_t, s_{t+1}, r_{t+1}}_j ∼ π_θ
9:     # Add data to the buffer
10:    R ← R ∪ τ_j
11:  end for
12:  # Update policy and value functions
13:  for j = 1, 2, ..., N_epochs do
14:    Sample a minibatch of data r from R
15:    π_θ ← PPOUpdate(π_θ, r)
16:    V_φ ← PPOUpdate(V_φ, r)
17:  end for
18:  # Sample all robots uniformly at random initially
19:  if i >= N_warmup then
20:    P ← samplingProbUpdate(R), according to Equation 5
21:  end if
22: end for | This paper presents MetaMorph, a Transformer-based universal controller to learn behaviors across different robot morphologies. The Transformer module takes a sequence of tokens as input, corresponding to the number of modules in the robot. Each input token comprises the proprioceptive and morphology information for one of the robot's constituent modules. MetaMorph is trained using a standard model-free Proximal Policy Optimization (PPO) method.
However, to ensure that all robots get to explore the given environments and gather a similar amount of experience, the paper introduces a dynamic replay buffer which prioritizes robots' selection for experience collection based on their previous iteration's episode lengths. The results are compared against MLP-based experts trained for all individual robots, a Graph Neural Network-based universal controller, and ablated modules of MetaMorph to highlight design choices. Moreover, the results also demonstrate the generalization capacity of MetaMorph to different robots through zero-shot transfer and fine-tuning. | SP:516b27b0867e288ba30da79602356fe581d59d8a |
MetaMorph: Learning Universal Controllers with Transformers | Multiple domains like vision, natural language, and audio are witnessing tremendous progress by leveraging Transformers for large-scale pre-training followed by task-specific fine-tuning. In contrast, in robotics we primarily train a single robot for a single task. However, modular robot systems now allow for the flexible combination of general-purpose building blocks into task-optimized morphologies. Given the exponentially large number of possible robot morphologies, training a controller for each new design is impractical. In this work, we propose MetaMorph, a Transformer-based approach to learn a universal controller over a modular robot design space. MetaMorph is based on the insight that robot morphology is just another modality on which we can condition the output of a Transformer. Through extensive experiments we demonstrate that large-scale pre-training on a variety of robot morphologies results in policies with combinatorial generalization capabilities, including zero-shot generalization to unseen robot morphologies. We further demonstrate that our pre-trained policy can be used for sample-efficient transfer to completely new robot morphologies and tasks.
1 INTRODUCTION
The field of embodied intelligence posits that intelligent behaviours can be rapidly learned by agents whose morphologies are well adapted to their environment (Brooks, 1991; Pfeifer & Scheier, 2001; Bongard, 2014; Gupta et al., 2021). Based on this insight, a robot designer is faced with a predicament: should the robot design be task specific or general? The sample inefficiency of tabula rasa deep reinforcement learning and the challenge of designing a single robot which can perform a wide variety of tasks have led to the current dominant paradigm of 'one robot one task'. In stark contrast, domains like vision (Girshick et al.
, 2014; He et al., 2020) and language (Dai & Le, 2015; Radford et al., 2018), which are not plagued by the challenges of physical embodiment, have witnessed tremendous progress, especially by leveraging large-scale pre-training followed by transfer learning to many tasks through limited task-specific fine-tuning. Moreover, multiple domains are witnessing a confluence, with domain-specific architectures being replaced by Transformers (Vaswani et al., 2017), a general-purpose architecture with no domain-specific inductive biases.
How can we bring to bear the advances in large-scale pre-training, transfer learning, and general-purpose Transformer architectures to the field of robotics? We believe that modular robot systems provide a natural opportunity by affording the flexibility of combining a small set of general-purpose building blocks into a task-optimized morphology. Indeed, modularity at the level of hardware is a motif which is extensively utilized by evolution in biological systems (Hartwell et al., 1999; Kashtan & Alon, 2005) and by humans in many modern engineered systems. However, prior works (Wang et al., 2018; Chen et al., 2018; Sanchez-Gonzalez et al., 2018) on learning policies that can generalize across different robot morphologies have been limited to: (1) manually constructed variations of a single or few base morphologies, i.e. little diversity in the kinematic structure; (2) low complexity of control (≤ 7 degrees of freedom); (3) using Graph Neural Networks (Scarselli et al., 2008) based on the assumption that the kinematic structure of the robot is the correct inductive bias.
In this work, we take a step towards the more challenging setting (Fig. 1) of learning a universal controller for a modular robot design space which has the following properties: (a) generalization to unseen variations in dynamics (e.g.
joint damping, armature, module mass) and kinematics (e.g. degrees of freedom, morphology, module shape parameters), and (b) sample-efficient transfer to new morphologies and tasks. We instantiate the exploration of this general setting in the UNIMAL design space introduced by Gupta et al. (2021). We choose the UNIMAL design space as it contains a challenging (15-20 DoFs) distribution of robots that can learn locomotion and mobile manipulation in complex stochastic environments. Learning a single universal controller for a huge variety of robot morphologies is difficult due to: (1) differences in action space, sensory input, morphology, dynamics, etc.; (2) given a modular design space, not all robots are equally adept at learning a task, e.g. some robots might inherently be less sample-efficient (Gupta et al., 2021).
To this end, we propose MetaMorph, a method to learn a universal controller for a modular robot design space. MetaMorph is based on the insight that robot morphology is just another modality on which we can condition the output of a Transformer. MetaMorph tackles the challenge of differences in embodiment by leveraging a Transformer-based architecture which takes as input a sequence of tokens corresponding to the number of modules in the robot. Each input token is created by combining proprioceptive and morphology information at the level of constituent modules. The combination of proprioceptive and embodiment modalities and large-scale joint pre-training leads to policies which exhibit zero-shot generalization to unseen variations in dynamics and kinematics parameters and sample-efficient transfer to new morphologies and tasks. Finally, to tackle the
Finally, to tackle the differences in learning speeds of different robots, we propose dynamic replay buffer balancing to dynamically balance the amount of experience collection for a robot based on its performance.

In sum, our key contributions are: (1) we introduce MetaMorph to learn a universal controller for a modular design space consisting of robots with high control complexity for challenging 3D locomotion tasks in stochastic environments; (2) we showcase that our learned policy is able to zero-shot generalize to unseen variations in dynamics, kinematics, new morphologies and tasks, which is particularly useful in real-world settings where controllers need to be robust to hardware failures; (3) we analyze the learned attention mask and discover the emergence of motor synergies (Bernstein, 1966), which partially explains how MetaMorph is able to control a large number of robots.

2 RELATED WORK

Prior works on learning control policies which can generalize across robot morphologies have primarily focused on parametric variations of a single (Chen et al., 2018) or few (2–3) robot types (Wang et al., 2018; Sanchez-Gonzalez et al., 2018; Huang et al., 2020; Kurin et al., 2021). For generalizing across parametric variations of a single morphology, various approaches have been proposed, like using a learned hardware embedding (Chen et al., 2018), meta-learning for policy adaptation (Al-Shedivat et al., 2017; Ghadirzadeh et al., 2021), kinematics randomization (Exarchos et al., 2020), and dynamics randomization (Peng et al., 2018). In the case of multiple different morphologies, one approach to tackle the challenge of differences in action and state spaces is to leverage Graph Neural Networks (Scarselli et al., 2008; Kipf & Welling, 2017; Battaglia et al., 2018). Wang et al. (2018); Huang et al.
(2020) use GNNs to learn joint controllers for planar agents (≤ 7 DoFs). Blake et al. (2021) propose freezing selected parts of networks to enable training GNNs for a single morphology but with higher control complexity. The usage of GNNs is based on the assumption that the robot morphology is a good inductive bias to incorporate into neural controllers, which can be naturally modelled by GNNs. Recently, Kurin et al. (2021) also proposed using Transformers for training planar agents. Our work differs from Kurin et al. (2021) in the diversity and scale of training robots, the complexity of the environments, conditioning the Transformer on morphological information, and showcasing strong generalization to unseen morphologies and tasks (see § B.1).

Another closely related line of work is the design of modular robot design spaces and the development of algorithms for co-optimizing morphology and control (Sims, 1994) within a design space to find task-optimized combinations of controller and robot morphology. When the control complexity is low, evolutionary strategies have been successfully applied to find diverse morphologies in expressive soft robot design spaces (Cheney et al., 2014; 2018). In the case of rigid bodies, Ha (2019); Schaff et al. (2019); Liao et al. (2019) have proposed using RL for finding optimal module parameters of a fixed hand-designed morphology. For more expressive design spaces, GNNs have been leveraged to share controller parameters (Wang et al., 2019) across generations, or novel heuristic search methods have been developed for efficient exploration of the design space (Zhao et al., 2020). In contrast to task-specific morphology optimization, Hejna III et al. (2021) propose evolving morphologies without any task or reward specification. Finally, for self-reconfigurable modular robots (Fukuda &
Nakagawa, 1988; Yim et al., 2007), modular control has been utilized in both real (Rubenstein et al., 2014; Mathews et al., 2017) and simulated (Pathak et al., 2019) systems.

3 LEARNING A UNIVERSAL CONTROLLER

We begin by reviewing the UNIMAL design space and formulating the problem of learning a universal controller for a modular robot design space as a multi-task reinforcement learning problem.

3.1 THE UNIMAL DESIGN SPACE

An agent morphology can be naturally represented as a kinematic tree, or a directed acyclic graph, corresponding to a hierarchy of articulated 3D rigid parts connected via motor-actuated hinge joints. The graph G := (V, E) consists of vertices V = {v_1, ..., v_n} corresponding to modules of the design space, and edges e_ij ∈ E corresponding to joints between v_i and v_j. Concretely, in the UNIMAL (Gupta et al., 2021) design space, each node represents a component which can be one of two types: (1) a sphere parameterized by radius and density to represent the head of the agent and form the root of the tree; (2) cylinders parameterized by length, radius, and density to represent the limbs of the robot. Two nodes of the graph can be connected via at most two motor-actuated hinge joints (i.e. G is a multi-graph), parameterized by joint axis, joint limits and a motor gear ratio.

3.2 JOINT POLICY OPTIMIZATION

The problem of learning a universal controller for a set of K robots drawn from a modular robot design space is a multi-task RL problem. Specifically, the control problem for each robot is an infinite-horizon discounted Markov decision process (MDP) represented by a tuple (S, A, T, R, H, γ), where S represents the set of states, A represents the set of available actions, T(s_{t+1} | s_t, a_t) represents the transition dynamics, R(s, a) is a reward function, H is the horizon and γ is the discount factor.
At each time step, robot k receives an observation s^k_t, takes an action a^k_t, and is given a reward r^k_t. A policy π_θ(a^k_t | s^k_t) models the conditional distribution over actions a^k_t ∈ A given state s^k_t ∈ S. The goal is to find policy parameters θ which maximize the average expected return across all tasks: R = (1/K) Σ_{k=1}^{K} Σ_{t=0}^{H} γ^t r^k_t. We use Proximal Policy Optimization (PPO) (Schulman et al., 2017), a popular policy gradient (Williams, 1992) method, for optimizing this objective.

4 METAMORPH

Progress in model-free reinforcement learning algorithms has made it possible to train locomotion policies for complex high-dimensional agents from scratch, albeit with tedious hyperparameter tuning. However, this approach is not suitable for modular design spaces containing exponentially many robot morphologies. Indeed, Gupta et al. (2021) estimate that the UNIMAL design space contains more than 10^18 robots. Hence, learning a separate policy for each robot is infeasible. However, the modular nature of the design space implies that while each robot morphology is unique, it is still constructed from the same set of modules and potentially shares subgraphs of the kinematic tree with other morphologies. We describe how MetaMorph exploits this structure to meet the challenge of learning a universal controller for different morphologies.

4.1 FUSING PROPRIOCEPTIVE STATES AND MORPHOLOGY REPRESENTATIONS

To learn policies that can generalize across morphologies, we must encode not only the proprioceptive states essential for controlling a single robot, but also morphological information. From a multi-task RL perspective, this information can be viewed as a task identifier, where each task corresponds to a different robot morphology, all drawn from the same modular design space.
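As a concrete reading of the multi-task objective from Section 3.2, the sketch below computes the discounted return averaged over a set of robots. The reward sequences, episode lengths, and γ are made up for illustration and are not the paper's values.

```python
import numpy as np

gamma = 0.99  # illustrative discount factor

def discounted_return(rewards, gamma=gamma):
    """Sum_t gamma^t * r_t for one robot's episode."""
    total, g = 0.0, 1.0
    for r in rewards:
        total += g * r
        g *= gamma
    return total

def multi_task_return(reward_seqs, gamma=gamma):
    """R = (1/K) * Sum_k Sum_t gamma^t * r_t^k, averaged over K robots."""
    return float(np.mean([discounted_return(r, gamma) for r in reward_seqs]))

# Two hypothetical robots: A terminates early (short episode), B does not.
rewards_A = [1.0] * 10
rewards_B = [1.0] * 100
R = multi_task_return([rewards_A, rewards_B])
```

Note that robots with longer episodes contribute larger per-robot returns, which foreshadows the data-imbalance issue addressed by the replay buffer balancing in §4.3.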
Hence, instead of learning a policy which is agnostic to the robot morphology, we need to learn a policy conditioned on the robot morphology. Consequently, at each time step t (we drop the time subscript for brevity), the robot receives an observation s^k = (s^k_m, s^k_p, s^k_g), which is composed of the morphology representation (s^k_m), the proprioceptive states (s^k_p), and additional global sensory information (s^k_g). See § A.1 for a detailed description of each observation type.

4.2 MORPHOLOGY AWARE TRANSFORMER

The robot chooses its action via a stochastic policy π_θ(a^k_t | s^k_t), where θ are the parameters of a pair of deep neural networks: a policy network that produces an action distribution (Fig. 2), and a critic network that predicts discounted future returns. We use Transformers (Vaswani et al., 2017) to parametrize both the policy and critic networks, as described in detail below.

Encode. We make a distinction between how we process local and global state information. Concretely, let s^k = (s^k_l, s^k_g) where s^k_l = (s^k_m, s^k_p). Since the number of joints between two modules can vary, we zero-pad s^k_{l_i} to ensure that input observation vectors are of the same size, i.e. s^k_l ∈ R^{N×M}. In order to provide an arbitrary robot morphology as input to the Transformer, we first create a 1D sequence of local observation vectors by traversing the kinematic tree in depth-first order starting at the root node (the torso in the case of the UNIMAL design space). We then apply a single-layer MLP independently to each s^k_{l_i} to create a D-dimensional module embedding. We also add learnable 1D position embeddings to the module embeddings to automatically learn positional information:

m_0 = [ φ(s^k_{l_1}; W_e); · · · ; φ(s^k_{l_N}; W_e) ] + W_pos,    (1)

where φ(·) is the embedding function, W_e ∈ R^{M×D} are the embedding weights, and W_pos ∈ R^{N×D} are the learned positional embeddings.
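The encode step above can be sketched in NumPy as follows. The kinematic tree, per-module observation sizes, and all dimensions below are hypothetical, and the learned weights W_e and W_pos are stood in for by random matrices.

```python
import numpy as np

# Hypothetical sizes: N modules, M features per module after zero-padding,
# D-dimensional module embeddings.
N, M, D = 5, 13, 16
rng = np.random.default_rng(0)

def dfs_order(children, root=0):
    """Depth-first traversal of a kinematic tree given as a child-list dict."""
    order, stack = [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(reversed(children.get(v, [])))  # preserve child order
    return order

# Toy kinematic tree: torso (0) with two limbs; each limb has one child limb.
children = {0: [1, 2], 1: [3], 2: [4]}

# Per-module local observations of varying length (<= M), zero-padded to M,
# laid out as a 1D sequence following the depth-first traversal.
raw_obs = {v: rng.normal(size=int(rng.integers(8, M + 1))) for v in range(N)}
s_l = np.zeros((N, M))
for row, v in enumerate(dfs_order(children)):
    o = raw_obs[v]
    s_l[row, :len(o)] = o

# Single-layer embedding plus learned position embeddings (Eq. 1):
# m0 = [phi(s_l1; We); ...; phi(s_lN; We)] + Wpos.
W_e = rng.normal(size=(M, D)) / np.sqrt(M)
W_pos = rng.normal(size=(N, D)) * 0.02  # learned in practice; random here
m0 = s_l @ W_e + W_pos
```

The resulting m0 ∈ R^{N×D} is the token sequence consumed by the Transformer layers in the process step.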
Note that in practice we zero-pad the input sequence of local observation vectors for efficient batch processing of multiple morphologies.

Process. From the module embeddings described above, we obtain the output feature vectors as:

m'_ℓ = MSA(LN(m_{ℓ−1})) + m_{ℓ−1},    ℓ = 1 ... L    (2)
m_ℓ = MLP(LN(m'_ℓ)) + m'_ℓ,    ℓ = 1 ... L    (3)

where MSA is multi-head self-attention (Vaswani et al., 2017) and LN is Layernorm (Lei Ba et al., 2016).

Decode. We integrate the global state information s^k_g, consisting of high-dimensional sensory input from camera or depth sensors. Naively concatenating s^k_g and s^k_l in the encoder has two downsides: (1) it dilutes the importance of low-dimensional local sensory and morphological information; (2) it increases the number of Transformer parameters due to an increase in the dimensionality of the input embedding (D). Instead, we obtain the outputs of the policy network as follows:

g = γ(s^k_g; W_g),    μ(s^k_i) = φ(m_{L_i}, g; W_d),    π_θ(a^k | s^k) = N({μ(s^k_i)}^N_{i=1}, Σ),    (4)

where γ(·) is a 2-layer MLP with parameters W_g, and φ(·) is an embedding function with W_d as the embedding weights. The action distribution is modeled as a Gaussian distribution with a state-dependent mean μ(s^k_i) and a fixed diagonal covariance matrix Σ. Similarly, for the critic network, we estimate the value of the whole morphology by averaging the value per limb.

4.3 DYNAMIC REPLAY BUFFER BALANCING

Joint training of diverse morphologies is challenging as different morphologies are adept at performing different tasks. Consequently, some morphologies might be inherently better suited for the pre-training task. Let us consider two robots: (A) robot A locomotes in a falling-forward manner, i.e., robot A is not passively stable; (B) robot B is passively stable.
Especially with early termination, robot A will keep falling during the initial phases of training, which results in shorter episode lengths, whereas robot B will have longer episode lengths. Hence, more data will be collected for robot B in the earlier phases of training, which in turn will lead to robot B learning even faster, thereby resulting in a 'rich gets richer' training dynamic. However, our goal is to ensure that all morphologies have a similar level of performance at the end of training, as we want to generalize across the entire distribution of morphologies.

To address this issue, we propose a simple dynamic replay buffer balancing scheme. On-policy algorithms like PPO (Schulman et al., 2017) proceed in iterations that alternate between experience collection and policy parameter updates. Let E_k be any performance metric of choice, e.g. normalized reward, episode length, success ratio, etc. Let τ be the training iteration number. At each iteration, we sample the k-th robot with probability P_k, given by:

P_k = E_k^β / Σ_i E_i^β,    E_k^τ = α E_k^τ + (1 − α) E_k^{τ−1},    (5)

where α ∈ [0, 1] is the discount factor and the exponent β determines the degree of dynamic prioritization, with β = 0 corresponding to the uniform case. In practice, we use episode length as our performance metric.
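A minimal sketch of this balancing scheme (Eq. 5) follows. The values of α, β, and the episode lengths are illustrative, not the paper's; inverting the episode length mirrors the paper's substitution of E_i by the maximum episode length divided by E_i, so that struggling robots are sampled more often.

```python
import numpy as np

alpha, beta = 0.9, 1.0  # illustrative hyperparameters

def update_metric(E_prev, E_new, alpha=alpha):
    """Exponential moving average of the per-robot performance metric."""
    return alpha * E_new + (1 - alpha) * E_prev

def sampling_probs(E, beta=beta):
    """P_k proportional to E_k^beta; beta = 0 recovers uniform sampling."""
    w = np.power(np.asarray(E, dtype=float), beta)
    return w / w.sum()

# Robot 0 falls early (short episodes); robot 1 is passively stable.
max_ep_len = 1000.0
ep_len = np.array([150.0, 900.0])
E = max_ep_len / ep_len          # inverse episode length as the metric
P = sampling_probs(E)            # robot 0 now gets the larger share
```

At each PPO iteration, the k-th robot would be drawn for experience collection with probability P[k], e.g. via `np.random.default_rng().choice(len(P), p=P)`.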
We determine P_k by replacing E_i with 1000/E_i, where the numerator is the maximum episode length.

5 EXPERIMENTS

In this section, we evaluate our method MetaMorph in different environments, perform extensive ablation studies of different design choices, test zero-shot generalization to variations in dynamics and kinematics parameters, and demonstrate sample-efficient transfer to new morphologies and tasks. For qualitative results, please refer to the video on our project website (https://metamorph-iclr.github.io/site/).

5.1 EXPERIMENTAL SETUP

We create a training set of 100 robots from the UNIMAL design space (Gupta et al., 2021) (see § A.2). We evaluate MetaMorph on three different environments (Fig. 3) using the MuJoCo simulator (Todorov et al., 2012). In all 3 environments the goal of the agent is to maximize forward displacement over the course of an episode which lasts 1000 timesteps. The 3 environments are: (1) Flat terrain (FT); (2) Variable terrain (VT): VT is an extremely challenging environment, as during each episode a new terrain is generated by randomly sampling a sequence of terrain types and interleaving them with flat terrain. We consider 3 types of terrains: hills, steps, and rubble; (3) Obstacles: cuboid-shaped obstacles of varying sizes on flat terrain.

We use a dense morphology-independent reward function for all our tasks, as it is not feasible to design a reward function tailored to each morphology. In all tasks, our reward function promotes forward movement using small joint torques (the latter obtained via a small energy usage penalty). In addition, as described in §4.3, we use early termination across all environments when we detect a fall (i.e.
if the torso height drops below 50% (FT, Obstacles) or 30% (VT) of its initial height).

5.2 BASELINES AND ABLATIONS

Baselines: We compare against the following baselines: (1) GNN: We modify the NerveNet model proposed by Wang et al. (2018) to learn control policies for complex 3D morphologies with a variable number of joints. Specifically, we replace how the NerveNet model receives input and produces output by our encode and decode steps respectively (§4.2). In addition, the model receives the same observation as MetaMorph, i.e. s^k = (s^k_m, s^k_p, s^k_g), and is trained with dynamic replay buffer sampling. Thus, the only difference is in the process step. This helps test whether the domain-specific inductive bias of the robot kinematic tree in the GNN is necessary. (2) MLP: We train all 100 robots separately with a 2-layer MLP and report the average performance. This baseline serves as an approximate upper bound for our method.

Ablations: We also do an ablation study of the different design choices involved in our method. We refer to our full method as MetaMorph and consider the following ablations: (1) MetaMorph-NPE: no learned position embeddings; (2) MetaMorph-NM: we only provide s^k = (s^k_p, s^k_g) as inputs, i.e. the model does not have access to information about the robot morphology.

Fig. 4 shows learning curves across 3 environments for training 100 morphologies. In all environments MetaMorph can successfully match the average reward achieved by the per-morphology MLP baseline on both the FT and Obstacles environments. While MetaMorph's performance in VT is slightly below the MLP baseline, we note that it has not saturated and we stopped training at 10^8 iterations across all three environments. Moreover, MetaMorph is significantly more sample-efficient (5×) than training independent MLPs (5 × 10^6 iterations per robot).
The GNN baseline saturates at a level 2 to 3 times below MetaMorph. In GNN-based models, locality and neighborhood connectivity are explicitly baked into the model. Interestingly, just as ViT (Dosovitskiy et al., 2021) sparingly utilizes the 2D neighborhood structure of images only at the beginning of the model by cutting the image into patches, MetaMorph uses the graph structure of the robot only at the beginning, by creating a 1D sequence corresponding to the kinematic tree via a depth-first traversal of the graph. Moreover, the position embeddings carry no information about the graph structure and are learned from scratch. We highlight that the learned position embeddings significantly improve the performance of MetaMorph, just as they do in Transformer-based image classification. Finally, without access to the morphological information, MetaMorph-NM fails to learn a policy that can control diverse robot morphologies. All of this substantiates our central claim that morphological state information is necessary to learn successful control policies, although the kinematic graph need not be explicitly baked into neural architectures to learn policies capable of controlling diverse robot morphologies. Finally, we test the importance of dynamic replay buffer balancing in Fig. 5, and find that balancing is necessary to learn a good control policy in 10–15% of robots across all 3 environments.

5.3 ZERO-SHOT GENERALIZATION

Our focus in this work is to learn policies that can generalize to unseen robots drawn from a modular robot design space. In this section, we demonstrate that MetaMorph shows favorable generalization properties across many different kinematic and dynamic variations.

Experimental Setup.
For each of the 100 training robots, we create a dedicated test set to test zero-shot transfer performance across two types of variations: dynamics (armature, density, damping, and motor gear) and kinematics (module shape parameters like radius and length of cylindrical limbs, and joint angles). For each training robot, we randomly create 4 different variants for each property, i.e. 400 robots with armature variations, and so on. While creating a new variant, we change the relevant property of all modules or joints. See Table 2 for sampling ranges. We then compare zero-shot performance averaged over 10 trials.

Figure 8: Fine-tuning: New robot morphologies and tasks. Left: Test environments. Right: Comparison of reward progression of 100 test robots averaged over 3 runs for pre-trained MetaMorph (VT → Escape, Obstacles → Obstacles (cylinders)) vs from scratch. Shaded regions denote standard deviation. Across all environments pre-training leads to strong zero-shot performance and 2–3× savings in training iterations to achieve the same level of average reward.

Generalization: Dynamics. First, we consider generalization to different dynamics (Fig. 6). We find consistently that MetaMorph performs significantly better than MetaMorph-NM and GNN across all types of dynamic variations and all environments.
In fact, the difference is more pronounced for harder tasks like VT and Obstacles. We note that this result is surprising, as we do not do dynamics randomization during training, i.e., all robots in the training set have the same armature and damping parameters. Despite this, we see strong generalization performance.

Generalization: Kinematics. Next, we consider generalization to different kinematics parameters (Fig. 6). This is a significantly more challenging setting, as the model has to generalize to unseen variations in module shape parameters and changes to joint angle ranges. In fact, changes to joint angles can significantly alter the range of possible motion and might necessitate a different gait for successful locomotion. Consequently, we find that even though MetaMorph exhibits strong generalization performance compared to MetaMorph-NM and GNN in all 3 environments, there is indeed a performance drop for the challenging setting of variations in joint angles. However, the zero-shot performance is encouraging and motivates our next set of experiments on transfer learning.

5.4 SAMPLE EFFICIENT TRANSFER LEARNING

Experimental Setup. Here we create 100 robots from the UNIMAL design space which were not part of the training set, and we demonstrate that MetaMorph shows favorable sample-efficient transfer to unseen morphologies, and even unseen morphologies performing novel tasks.

Different morphologies. We first consider sample-efficient learning of controllers for new morphologies on the same task, by taking a MetaMorph model pre-trained on an environment and then fine-tuning it to learn to control morphologies in the test set. In Fig. 7 we compare the number of training iterations required to achieve a particular performance level when we fine-tune MetaMorph vs training from scratch on the test set.
Across all 3 environments we not only find strong zero-shot performance, but also 2 to 3 times higher sample efficiency compared to training from scratch.

Different morphologies and novel tasks. Finally, we consider the most realistic and general setting mimicking the promise of modular robots, where we are faced with a novel task and want to use a new robot morphology which may be suited for this task. We consider the same set of 100 test robots on two new tasks (Fig. 8): (1) Escape: The agent starts at the center of a bowl-shaped terrain surrounded by small hills (bumps), and has to maximize the geodesic distance from the start location (escape the hilly region). (2) Obstacles (cylinders): cylinder-shaped obstacles of varying sizes (the size distribution is also different from the train task, which used cuboid shapes). We transfer the learned policy from VT and Obstacles to Escape and Obstacles (cylinders) respectively. Again, we find that there is strong zero-shot performance across all environments, and fine-tuning is 2 to 3 times more sample-efficient than training from scratch.

5.5 EMERGENT MOTOR SYNERGIES

We next search for a potential explanation for how MetaMorph can coordinate the large number of DoFs (∼ 16 × 100) across several agents. Our hypothesis is inspired by Bernstein (1966), which proposed the existence of muscle synergies as a neural strategy to simplify the control of multiple DoFs by the central nervous system. A muscle synergy corresponds to the constrained movement of sets of joints (or muscles) through co-activation by a single neural signal.
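As a concrete illustration of this co-activation idea, the sketch below drives many joints from a few shared signals and measures the resulting dimensionality with the stable rank, the same quantity used later in this section. All sizes and data here are synthetic, chosen only to make the effect visible.

```python
import numpy as np

# If m joints are driven by only r << m shared "neural signals", the joint
# activation matrix has rank r, and its stable rank ||A||_F^2 / ||A||_2^2
# stays at or below r. Independently driven joints show a much larger value.
rng = np.random.default_rng(0)
m, r, T = 16, 2, 50  # joints, synergies, time steps (all hypothetical)

def stable_rank(A):
    s = np.linalg.svd(A, compute_uv=False)
    return (s ** 2).sum() / s[0] ** 2  # ||A||_F^2 / sigma_max^2

signals = rng.normal(size=(T, r))   # a few neural drive signals
mixing = rng.normal(size=(r, m))    # synergy: fixed joint couplings
A_synergy = signals @ mixing        # co-activated joint commands, at most r=2
A_free = rng.normal(size=(T, m))    # independently controlled joints
```

Here stable_rank(A_synergy) is bounded by r = 2, while stable_rank(A_free) is noticeably larger, matching the intuition that dimensionality reduction is a signature of synergistic control.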
Such synergies obviate the need to control all joints independently, by coupling sets of joints into adaptive functional groups. Although definitions of synergies (Bruton & O'Dwyer, 2018) vary in the literature, dimensionality reduction is generally accepted as a signature of synergistic control (Todorov & Ghahramani, 2004; Todorov, 2004). Consequently, to test this hypothesis, in Fig. 9 we plot the stable rank of the attention matrix. For an attention matrix A_l ∈ R^{m×m} of layer l, the stable rank is defined as: sr(A_l) = ‖A_l‖²_F / ‖A_l‖²₂ = (Σ_i σ²_i) / σ²_max, where σ_i are the singular values of A_l. We note that sr(A_l) is small and oscillates between two values which correspond to attention maps where groups of limbs are activated simultaneously (denoted by dark columns), a characteristic signature of motor synergies. Hence, MetaMorph simplifies the control problem by learning to activate different motor synergies depending on both s^k_m and s^k_p.

6 CONCLUSION

In this work, we explored how we can learn a universal controller for a large modular robot design space. To this end, we proposed MetaMorph, a Transformer approach based on the insight that robot morphology is just another modality on which we can condition the output of a Transformer. We showcased that pre-training on a large collection of diverse morphologies leads to policies which can generalize to unseen variations in kinematics, dynamics, new morphologies, and tasks. We hope that our work serves as a step towards realizing the potential of large-scale pre-training and fine-tuning in the field of robotics, a paradigm that has seen tremendous success in vision and language.

REFERENCES

Maruan Al-Shedivat, Trapit Bansal, Yuri Burda, Ilya Sutskever, Igor Mordatch, and Pieter Abbeel. Continuous adaptation via meta-learning in nonstationary and competitive environments.
arXiv preprint arXiv:1710.03641, 2017.

Peter W Battaglia, Jessica B Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.

Nikolai Bernstein. The co-ordination and regulation of movements. 1966.

Charlie Blake, Vitaly Kurin, Maximilian Igl, and Shimon Whiteson. Snowflake: Scaling GNNs to high-dimensional continuous control via parameter freezing. arXiv preprint arXiv:2103.01009, 2021.

Josh Bongard. Why morphology matters. The horizons of evolutionary robotics, 6:125–152, 2014.

Rodney A Brooks. New approaches to robotics. Science, 253(5025):1227–1232, 1991.

Michaela Bruton and Nicholas O'Dwyer. Synergies in coordination: A comprehensive overview of neural, computational, and behavioral approaches. Journal of Neurophysiology, 120(6):2761–2774, 2018.

Tao Chen, Adithyavairavan Murali, and Abhinav Gupta. Hardware conditioned policies for multi-robot transfer learning. In NIPS, 2018.

Nick Cheney, Robert MacCurdy, Jeff Clune, and Hod Lipson. Unshackling evolution: Evolving soft robots with multiple materials and a powerful generative encoding. SIGEVOlution, 7(1):11–23, August 2014. doi: 10.1145/2661735.2661737. URL https://doi.org/10.1145/2661735.2661737.

Nick Cheney, Josh Bongard, Vytas SunSpiral, and Hod Lipson. Scalable co-optimization of morphology and control in embodied machines. Journal of The Royal Society Interface, 15(143):20170937, 2018.

Andrew M Dai and Quoc V Le. Semi-supervised sequence learning.
Advances in Neural Information Processing Systems, 28:3079–3087, 2015.

Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.

Ioannis Exarchos, Yifeng Jiang, Wenhao Yu, and C Karen Liu. Policy transfer via kinematic domain randomization and adaptation. arXiv preprint arXiv:2011.01891, 2020.

Toshio Fukuda and Seiya Nakagawa. Dynamically reconfigurable robotic system. In ICRA, pp. 1581–1586. IEEE, 1988.

Ali Ghadirzadeh, Xi Chen, Petra Poklukar, Chelsea Finn, Mårten Björkman, and Danica Kragic. Bayesian meta-learning for few-shot policy adaptation across robotic platforms. arXiv preprint arXiv:2103.03697, 2021.

Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 580–587, 2014.

Agrim Gupta, Silvio Savarese, Surya Ganguli, and Li Fei-Fei. Embodied intelligence via learning and evolution. Nature Communications, 12(1):5721, 2021.

David Ha. Reinforcement learning for improving agent design. Artificial Life, 25(4):352–365, 2019.

Leland H Hartwell, John J Hopfield, Stanislas Leibler, and Andrew W Murray. From molecular to modular cell biology. Nature, 402(6761):C47–C52, 1999.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In CVPR, pp. 9729–9738, 2020.

Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters.
In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

Wenlong Huang, Igor Mordatch, and Deepak Pathak. One policy to control them all: Shared modular policies for agent-agnostic control. In ICML, pp. 4455–4464. PMLR, 2020.

Donald Joseph Hejna III, Pieter Abbeel, and Lerrel Pinto. Task-agnostic morphology evolution. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=CGQ6ENUMX6.

Nadav Kashtan and Uri Alon. Spontaneous evolution of modularity and network motifs. Proceedings of the National Academy of Sciences, 102(39):13773–13778, 2005.

Thomas Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv, abs/1609.02907, 2017.

Ashish Kumar, Zipeng Fu, Deepak Pathak, and Jitendra Malik. RMA: Rapid motor adaptation for legged robots. arXiv preprint arXiv:2107.04034, 2021.

Vitaly Kurin, Maximilian Igl, Tim Rocktäschel, Wendelin Boehmer, and Shimon Whiteson. My body is a cage: the role of morphology in graph-based incompatible control. In ICLR, 2021.

Joonho Lee, Jemin Hwangbo, Lorenz Wellhausen, Vladlen Koltun, and Marco Hutter. Learning quadrupedal locomotion over challenging terrain. Science Robotics, 5(47), 2020.

Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv e-prints, art. arXiv:1607.06450, July 2016.

T. Liao, G. Wang, B. Yang, R. Lee, K. Pister, S. Levine, and R. Calandra. Data-efficient learning of morphology and controller for a microrobot. In 2019 International Conference on Robotics and Automation (ICRA), pp. 2488–2494, 2019. doi: 10.1109/ICRA.2019.8793802.

Ajay Mandlekar, Danfei Xu, Josiah Wong, Soroush Nasiriany, Chen Wang, Rohun Kulkarni, Li Fei-Fei, Silvio Savarese, Yuke Zhu, and Roberto Martín-Martín.
What matters in learning from offline human demonstrations for robot manipulation. In 5th Annual Conference on Robot Learning, 2021. URL https://openreview.net/forum?id=JrsfBJtDFdI.

Nithin Mathews, Anders Lyhne Christensen, Rehan O'Grady, Francesco Mondada, and Marco Dorigo. Mergeable nervous systems for robots. Nature Communications, 8(1):1–7, 2017.

Deepak Pathak, Christopher Lu, Trevor Darrell, Phillip Isola, and Alexei A Efros. Learning to control self-assembling morphologies: A study of generalization via modularity. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.

Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 3803–3810. IEEE, 2018.

Rolf Pfeifer and Christian Scheier. Understanding intelligence. MIT Press, 2001.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.

Michael Rubenstein, Alejandro Cornejo, and Radhika Nagpal. Programmable self-assembly in a thousand-robot swarm. Science, 345(6198):795–799, 2014.

Alvaro Sanchez-Gonzalez, Nicolas Heess, Jost Tobias Springenberg, Josh Merel, Martin Riedmiller, Raia Hadsell, and Peter Battaglia. Graph networks as learnable physics engines for inference and control. In ICML, pp. 4470–4479. PMLR, 2018.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2008.

Charles Schaff, David Yunis, Ayan Chakrabarti, and Matthew R Walter.
Jointly learning to construct and control agents using deep reinforcement learning . In ICRA , pp . 9798–9805 . IEEE , 2019. John Schulman , Filip Wolski , Prafulla Dhariwal , Alec Radford , and Oleg Klimov . Proximal Policy Optimization Algorithms . arXiv e-prints , art . arXiv:1707.06347 , July 2017. Karl Sims . Evolving 3d morphology and behavior by competition . Artificial Life , 1 ( 4 ) :353–372 , 1994. Emanuel Todorov . Optimality principles in sensorimotor control . Nature Neuroscience , 7 ( 9 ) :907–915 , 2004. Emanuel Todorov and Zoubin Ghahramani . Analysis of the synergies underlying complex hand manipulation . In The 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society , volume 2 , pp . 4637–4640 . IEEE , 2004. Emanuel Todorov , Tom Erez , and Yuval Tassa . Mujoco : A physics engine for model-based control . In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems , pp . 5026–5033 . IEEE , 2012. Ashish Vaswani , Noam Shazeer , Niki Parmar , Jakob Uszkoreit , Llion Jones , Aidan N Gomez , Łukasz Kaiser , and Illia Polosukhin . Attention is all you need . In NIPS , 2017. Tingwu Wang , Renjie Liao , Jimmy Ba , and Sanja Fidler . Nervenet : Learning structured policy with graph neural networks . In ICLR , 2018. Tingwu Wang , Yuhao Zhou , Sanja Fidler , and Jimmy Ba . Neural graph evolution : Automatic robot design . In ICLR , 2019. Ronald J Williams . Simple statistical gradient-following algorithms for connectionist reinforcement learning . Machine Learning , 8 ( 3-4 ) :229–256 , 1992. Mark Yim , Wei-Min Shen , Behnam Salemi , Daniela Rus , Mark Moll , Hod Lipson , Eric Klavins , and Gregory S Chirikjian .
Modular self-reconfigurable robot systems [ grand challenges of robotics ] . IEEE Robotics & Automation Magazine , 14 ( 1 ) :43–52 , 2007. Allan Zhao , Jie Xu , Mina Konaković-Luković , Josephine Hughes , Andrew Spielberg , Daniela Rus , and Wojciech Matusik . Robogrammar : graph grammar for terrain-optimized robot design . ACM Transactions on Graphics ( TOG ) , 39 ( 6 ) :1–16 , 2020. A IMPLEMENTATION DETAILS In this section , we provide additional implementation details for our method. A.1 INPUT OBSERVATIONS In our setup , at each time step t ( we drop the time subscript for brevity ) the robot receives an observation s^k = ( s^k_m , s^k_p , s^k_g ) which is composed of the morphology representation ( s^k_m ) , the proprioceptive states ( s^k_p ) and additional global sensory information ( s^k_g ) . We note that prior work ( Kurin et al. , 2021 ; Huang et al. , 2020 ) has defined morphology as the connectivity of the limbs. However , connectivity is only one aspect of morphology , and in general a substantial amount of additional information may be required to adequately describe the morphology . Consider a modular robot composed of modules n ∈ { 1 , ... , N^k } . For each module , s^k_{m,i} consists of : ( 1 ) Module Parameters : Module shape parameters ( e.g . radius , height ) , material information ( e.g . density ) , and local geometric orientation and position of the child module with respect to the parent module . ( 2 ) Joint Parameters : This consists of information about joint type ( e.g . hinge ) and its properties ( e.g . joint range and axis ) , and actuator type ( e.g . motor ) and its properties ( e.g . gear ) . All this information can be found , for example , in the Universal Robot Description Format ( URDF ) or in simulator specific kinematic trees ( e.g . MuJoCo XML ( Todorov et al.
, 2012 ) ) . Similarly , for each module , s^k_{p,i} consists of the instantaneous state of the system : ( 1 ) Module Proprioception : 3D Cartesian position , 4D quaternion orientation , 3D linear velocity , and 3D angular velocity . ( 2 ) Joint Proprioception : Position and velocity in the generalized coordinates . Except for the root node , each module is connected to its parent module via a set of joints . We adopt the convention of providing joint parameters and proprioception information to the child node . We note that the importance of proprioceptive observations has also been observed in learning good locomotion ( Lee et al. , 2020 ; Kumar et al. , 2021 ) and manipulation policies ( Mandlekar et al. , 2021 ) . In general , additional global sensory information ( s^k_g ) can be camera or depth sensor images . To save computation , we provide the information about the terrain as a 2D heightmap sampled on a non-uniform grid . The grid is created by decreasing the sampling density as the distance from the root of the body increases . All heights are expressed relative to the height of the ground immediately under the root of the agent . The sampling points range from 1m behind the agent to 4m ahead of it along the direction of motion , as well as 4m to the left and right. A.2 ENVIRONMENTS All our training and testing environments are implemented in MuJoCo ( Todorov et al. , 2012 ) . For a detailed description of the environments and reward functions please refer to Gupta et al . ( 2021 ) . Gupta et al . ( 2021 ) evolved UNIMALS in 3 different environments : flat terrain , variable terrain , and manipulation in variable terrain . For each environment , at the end of evolution they had 100 task-optimized robots . Out of these 300 , we choose a subset of 100 UNIMALS as our train set . We ensure that no robot has the exact same kinematic tree or kinematic parameters .
Similarly , we created a test set of 100 UNIMALS . Finally , for zero-shot evaluation experiments we created the variants as described in § 5.3 . For creating kinematic variations for joint angles , we randomly selected a joint range for each joint from Table 2 which had at least 50 % overlap with the original joint angle range. This helped in preventing robot variants which could not be trained. A.3 TRAINING HYPERPARAMETERS We use Transformers ( Vaswani et al. , 2017 ) to parametrize both policy and critic networks . Global sensory information is encoded using a 2-layer MLP with hidden dimensions [ 64 , 64 ] . We use Proximal Policy Optimization ( Schulman et al. , 2017 ) for joint training of agents in all environments. All hyperparameters for the Transformer and PPO are listed in Table 1 . In addition , we use dynamic replay buffer sampling with α = 0.1 and β = 1.0 . For all our ablations and baselines , we use the same hyperparameters . For the MLP baseline we use a 2-layer MLP with hidden dimensions [ 64 , 64 ] and for the GNN baseline we use a 5-layer MLP with hidden dimension 512 . The details of the joint training of modular robots are shown in Algorithm 1 . We ensure that all baselines and ablations have approximately the same number of parameters ( ∼ 3.3 million ) . A.4 EVALUATION METHODOLOGY In general , the performance of RL algorithms is known to be strongly dependent on the choice of seed for random number generators ( Henderson et al. , 2018 ) . Hence , we run all our baseline and ablation experiments for 3 random seeds . We found that the training curves were robust to the choice of seeds , as indicated by the small standard deviation ( Fig . 4 , 10 ) . Consequently , we evaluate the zero-shot performance using the pre-trained model corresponding to a single seed ( Fig . 6 ) . For our transfer learning experiments ( Fig .
7 , 8 ) , we fine-tune the model with the random seed corresponding to the one used for pre-training ( for all 3 seeds ) . B ADDITIONAL EXPERIMENTS B.1 BASELINES AND ABLATIONS Baselines : We compare against the following baselines: ( 1 ) MetaMorph-AO : Direct comparison to Amorpheus ( Kurin et al. , 2021 ) is not feasible for the following reasons : ( 1 ) it does not work with 3D morphologies with a variable number of joints between limbs ; ( 2 ) it does not incorporate exteroceptive observations ; ( 3 ) Amorpheus ensures a balanced collection of experience by sequentially collecting data on each morphology and maintaining a separate replay buffer per morphology . Further , updates to policy parameters are also performed sequentially. This sequential nature of the algorithm is not amenable to scaling , and would require ∼ 30 GPU days to train for 100 million iterations on an Nvidia RTX 2080 , while our method only needs 1.5 GPU days ( ∼ 20x training speedup ) . Hence , we compare with MetaMorph-AO ( Amorpheus Observation ) as the closest variant to Amorpheus , where we provide the same input observations as described in Kurin et al . ( 2021 ) . Specifically , we provide the following input observations : one-hot encoding of unique limb ID , 3D Cartesian position , 4D quaternion orientation , 3D linear velocity , 3D angular velocity , position in generalized coordinates , and normalized joint angle ranges . Note that although both MetaMorph and Amorpheus don ' t explicitly incorporate the graph structure as input to the policy , our input observations include additional morphology information ( see § B.1 ) . In contrast , except for joint angle ranges , Amorpheus does not incorporate morphology information. ( 2 ) Multi-Task MLP : We train a 6-layer MLP with a hidden dimension of 512 .
The MLP receives the same observations as MetaMorph , and has the same number of parameters. Ablations : We perform additional ablation studies to understand the importance of learnt position embeddings and morphology information in the input observation . We refer to our full method as MetaMorph and consider the following additional ablations : ( 1 ) MetaMorph-NMT ( No Morphology information + Task encoding ) : we only provide s^k = ( s^k_p , s^k_g ) as inputs , i.e . the model does not have access to information about the robot morphology . In addition , we provide a context token with binary encoding to identify the morphology ; ( 2 ) MetaMorph-HPE ( Hand-designed Position Encoding ) : we provide s^k = ( s^k_m , s^k_p , s^k_g ) as inputs , with s^k_m containing a per-limb unique ID. Fig . 10 shows the learning curves for joint training in the flat terrain environment for 100 training morphologies . The Multi-Task MLP baseline struggles to learn a good policy , which we attribute to the difficulty of the task due to the rich diversity of morphologies . We next investigate the role of learnt position embeddings and morphological information . MetaMorph-HPE performs slightly better than MetaMorph-NPE , indicating that adding unique limb ids improves performance . However , there is still a large gap between MetaMorph and MetaMorph-HPE , suggesting that learnt position embeddings capture additional information . Although MetaMorph-AO performs better than MetaMorph-NM due to the addition of unique limb ids and joint angle ranges , it is still significantly worse than MetaMorph as it does not incorporate morphological information . Finally , MetaMorph-NMT also performs poorly due to the lack of morphological information. B.2 POSITION EMBEDDING To better understand the information captured by position embeddings we visualize their cosine similarity .
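The cosine-similarity analysis of learnt position embeddings described here can be reproduced with a short NumPy sketch . The embedding matrix below is random stand-in data ( not the trained MetaMorph embeddings ) , and the function name is illustrative :

```python
import numpy as np

def cosine_similarity_matrix(emb):
    """Pairwise cosine similarity between rows of an (n_limbs, dim) matrix."""
    norms = np.linalg.norm(emb, axis=1, keepdims=True)
    unit = emb / np.clip(norms, 1e-12, None)  # row-normalize, guard zeros
    return unit @ unit.T

# Stand-in for learnt position embeddings (e.g. 12 limb slots, 128 dims).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(12, 128))
sim = cosine_similarity_matrix(embeddings)
# A heatmap of `sim` (as in Fig. 11) would show the diagonal pattern:
# each slot is maximally similar to itself.
```

The diagonal of `sim` is exactly 1 by construction , which is the trivial part of the diagonal pattern ; the interesting question , as the text notes , is what off-diagonal structure the trained embeddings exhibit .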
We found that across all seeds and in all environments ( we only show flat terrain for brevity ) a diagonal pattern emerged ( Fig . 11 ) . Although this might suggest that learnt position embeddings are capturing a unique position identifier , we found that manually specifying a unique limb identifier performs much worse ( see MetaMorph-HPE in Fig . 10 ) , indicating that these embeddings are capturing additional useful information . We believe that further investigation is needed to better understand the information captured by position embeddings and defer it to future work. B.3 KINEMATIC TREE TRAVERSAL ORDER We first process an arbitrary robot by creating a 1D sequence of tokens corresponding to the depth-first traversal of its kinematic tree . We note that the DFS ordering is not unique , and for nodes at the same depth , we adopted the convention of visiting the nodes which come first in the MuJoCo XML . This convention is also adopted by MuJoCo when parsing the XML . We found that zero-shot transfer of the learnt policy was not robust to an opposite ordering of nodes , and the performance dropped by ∼ 75 % . However , we found that the policy could be made robust to variations in the DFS ordering by training the model with a simple data augmentation strategy of randomizing the visitation order of nodes at the same depth. C LIMITATIONS AND FUTURE WORK In this work , we explored how we can learn a universal controller for a large modular robot design space . We make a key assumption that we already have access to a large collection of robots which were optimized for the task of locomotion . Hence , we currently require a two-stage pipeline where we first create robots and then pre-train them jointly . An important direction of future work would be to combine these two phases in an efficient manner .
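The depth-first tokenization and the same-depth shuffling augmentation described in B.3 above can be sketched in a few lines . The tree representation and function name here are illustrative , not the paper's actual code :

```python
import random

def dfs_order(tree, root, shuffle_siblings=False, rng=None):
    """Return node names in depth-first order.

    `tree` maps each node to its children in MuJoCo-XML order. With
    shuffle_siblings=True, children at the same depth are visited in a
    random order -- the augmentation that makes the policy robust to
    variations in the DFS ordering.
    """
    rng = rng or random.Random()
    order = []

    def visit(node):
        order.append(node)
        children = list(tree.get(node, []))
        if shuffle_siblings:
            rng.shuffle(children)
        for child in children:
            visit(child)

    visit(root)
    return order

# Toy kinematic tree: a torso with two legs, each with a shin.
tree = {"torso": ["leg1", "leg2"], "leg1": ["shin1"], "leg2": ["shin2"]}
canonical = dfs_order(tree, "torso")                       # fixed XML order
augmented = dfs_order(tree, "torso", shuffle_siblings=True,
                      rng=random.Random(0))                # randomized siblings
```

Training on sequences like `augmented` rather than only `canonical` exposes the model to many valid DFS orderings of the same robot .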
Moreover , although we found that MetaMorph has strong zero-shot generalization to unseen variations in kinematics and dynamics , there is indeed a performance drop on zero-shot generalization to new morphologies . Hence , an important avenue of exploration is creating algorithms for sample-efficient transfer learning . Finally , our current suite of tasks focuses primarily on locomotion skills . An important line of future work will involve designing general-purpose controllers which could perform multiple skills.
Algorithm 1 MetaMorph : Joint Training of Modular Robots
1 : Input : πθ : policy function ; Vφ : value function ; R : replay buffer ; K : training pool of robots ; Pk : probability of sampling robot k
2 : Initialize : learnable parameters θ for πθ , φ for Vφ ; P uniform distribution
3 : for i = 1 , 2 , ... , Niter do
4 :   # Collect robot experience in parallel
5 :   for j = 1 , 2 , ... , Nworkers do
6 :     Sample a robot k ∈ K according to Equation 5
7 :     # Collect one episode of experience
8 :     τj ≡ { st , at , st+1 , rt+1 }j ∼ πθ
9 :     # Add data to the buffer
10 :    R ← R ∪ τj
11 :  end for
12 :  # Update policy and value functions
13 :  for j = 1 , 2 , ... , Nepochs do
14 :    Sample a minibatch of data r from R
15 :    πθ ← PPOUpdate ( πθ , r )
16 :    Vφ ← PPOUpdate ( Vφ , r )
17 :  end for
18 :  # Sample all robots uniformly at random initially
19 :  if i >= Nwarmup then
20 :    P ← samplingProbUpdate ( R ) according to Equation 5
21 :  end if
22 : end for
| This paper proposes a Transformer-based universal policy to control modular robots . Because the space of modular robots is combinatorial , it is ( almost ) impossible to train a specific policy for each sample of the " robot distribution " . Therefore , the paper proposes a Transformer-based architecture with explicit information about the robot morphology , trained on a large number of possible morphologies .
This universal controller generalizes zero-shot to unseen robot designs belonging to the training distribution and can be efficiently fine-tuned on new robots and tasks. The approach is thoroughly evaluated on different tasks. | SP:516b27b0867e288ba30da79602356fe581d59d8a |
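The control flow of Algorithm 1 above can be sketched in plain Python . The rollout , PPO-update , and sampling-probability functions are stubs standing in for the components named in the algorithm ( Equation 5 itself is not reproduced here ) , so this is a skeleton of the loop structure , not the paper's implementation :

```python
import random

def joint_training(robots, n_iter, n_workers, n_epochs, n_warmup,
                   collect_episode, ppo_update, sampling_prob_update):
    """Skeleton of MetaMorph-style joint training over a pool of robots.

    All callables are placeholders: collect_episode(robot) -> trajectory,
    ppo_update(minibatch), sampling_prob_update(buffer) -> per-robot probs.
    """
    probs = [1.0 / len(robots)] * len(robots)  # sample uniformly at first
    buffer = []
    for i in range(n_iter):
        # Collect one episode per worker from robots drawn by `probs`.
        for _ in range(n_workers):
            k = random.choices(range(len(robots)), weights=probs)[0]
            buffer.append(collect_episode(robots[k]))
        # PPO updates on minibatches drawn from the replay buffer.
        for _ in range(n_epochs):
            ppo_update(random.sample(buffer, min(4, len(buffer))))
        # After warmup, reweight robot sampling (Equation 5 in the paper).
        if i >= n_warmup:
            probs = sampling_prob_update(buffer)
    return buffer

# Tiny demo with stub components: 2 robots, 3 iterations, 2 workers.
demo_buffer = joint_training(
    robots=["robot_a", "robot_b"], n_iter=3, n_workers=2, n_epochs=1,
    n_warmup=1,
    collect_episode=lambda r: (r, 0.0),           # stub trajectory
    ppo_update=lambda minibatch: None,            # stub PPO step
    sampling_prob_update=lambda buf: [0.5, 0.5],  # stub for Equation 5
)
```

In a real setup , the per-worker episode collection would run in parallel processes and the stubs would be the actual policy rollout and PPO update .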
Predictive Maintenance for Optical Networks in Robust Collaborative Learning | 1 INTRODUCTION . Optical fiber networks form the core of today's telecommunication infrastructure due to their high data transmission capacity . Optical networks rely on fully functional hardware components that run under optimal conditions . In order to reduce the risk of unplanned network interruption and service outage , it is important to predict the degradation of hardware network components correctly using analysis tools and techniques , so that the maintenance budget and resources are allocated efficiently and in a timely manner . Owing to its great benefits to industry , the global predictive maintenance market is expected to reach more than $ 13 billion by 2026 ( ReportLinker , 2021 ; Simon , 2021 ) . Machine learning ( ML ) based prediction is an emerging method to improve the accuracy of predictive maintenance in the manufacturing industry and communication networks . An ML model is trained on the historical data of hardware failures , and upcoming maintenance is then predicted from real-time data gathered through measurement at the edge . ML techniques can be useful if a sufficiently large , diverse , and realistic set of training data exists . Since an ML model relies so heavily on good training data , the availability of such datasets is a crucial requirement for this approach . However , it is challenging to develop a high-precision ML model for predictive maintenance , mainly due to the lack of training data . Hardware failures or maintenance events do not occur frequently , so it takes time until good and meaningful training data are collected through the network . Hence , an ML model is often trained using accelerated aging test results ( e.g . a life cycle under extreme temperature or over-powered conditions ) that are produced by hardware manufacturers .
Since the components of network equipment are usually produced by small and medium-sized companies , such an ML model is trained on the limited amount of data owned by each manufacturer . This situation can be relieved if the training dataset can be aggregated from multiple vendors and consolidated in a central location . Since collaborative learning allows a model to be trained on larger datasets than the dataset available to a single vendor , a higher-quality and more accurate ML model can be built . However , such collaboration is not straightforward in reality , since vendors are not willing to share their training data with external companies . Aging test data are often company-confidential trade secrets . Moreover , sharing data with foreign companies may be prohibited by privacy protection regulations in their home countries . Federated Learning Federated learning ( FL ) is a framework enabling distributed parties to work together to train machine learning models without sharing the underlying data or trusting any of the individual participants ( Bonawitz et al. , 2017b ) . FL can be used to build an ML model from various companies for the purpose of predicting the failures , repairs , or maintenance of network systems . With the FL technique , the training data is not required to be centralized , but can instead remain with the data owners . Each vendor trains an ML model on their private data using their own hardware . These models are then aggregated by a central server ( e.g . a network operator ) to build a unified global model that has learned from the private data of every vendor without ever directly accessing it . Hence , confidential training data ( e.g . aging test results of products ) are not visible to the server , nor to other competing vendors .
Secure aggregation Secure aggregation in FL is a cryptographic protocol that enables each vendor to submit a local model securely and a server learns nothing but the sum of the local models . A secure aggregation method for mobile networks was presented in Bonawitz et al . ( 2017b ) and Bell et al . ( 2020 ) . This method relies on a pairwise secret exchange and Shamir ’ s t-out-of-n secret sharing scheme , focusing on the setting of mobile devices where communication is extremely expensive , and dropouts are common . There is a rich literature exploring secure aggregation in both the single-server setting ( via additive masking Bonawitz et al . ( 2016 ) , via threshold homomorphic encryption ( HE ) Halevi et al . ( 2011 ) , and via generic secure multi-party computation ( MPC ) Burkhart et al . ( 2010 ) ) as well as in the multiple non-colluding servers setting ( Corrigan-Gibbs & Boneh , 2017 ) . For instance , one can perform all computations using a fully homomorphic encryption scheme resulting in low communication but very high computation , or using classical MPC techniques with more communication but less computation . Other works use a hybrid of both and thus enjoy further improvement in performance ( Juvekar et al. , 2018 ; Mishra et al. , 2020 ) . Nevertheless , it is still an open question how to construct a secure and robust aggregation protocol that addresses all the challenges . Our contribution In this paper , we propose a secure and robust collaborative learning method using cross-vendor datasets for predictive maintenance in optical networks . Each vendor builds a local model using its own training dataset and uploads it to the server . The private dataset remains in the vendor ’ s domain and is never exposed to other companies . A server builds a global ML model by aggregating local ML models iteratively and averaging them to form an updated global model proportional to the size of dataset . 
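The dataset-size-weighted averaging step described above is the core of FedAvg ( McMahan et al. , 2017 ) and can be sketched as follows . This is a simplified single-round sketch over flat weight vectors ; a real implementation iterates over rounds and operates on full model state :

```python
import numpy as np

def fedavg(local_weights, dataset_sizes):
    """Aggregate local model weight vectors into a global model,
    weighting each vendor in proportion to its dataset size (FedAvg)."""
    total = sum(dataset_sizes)
    coeffs = [n / total for n in dataset_sizes]
    return sum(c * w for c, w in zip(coeffs, np.asarray(local_weights)))

# Three vendors with unequal amounts of aging-test data.
locals_ = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 700]
global_w = fedavg(locals_, sizes)
# The vendor with the most data (size 700) dominates the average.
```

Note that the server only needs the ( possibly masked ) weight vectors and the dataset sizes , not the underlying training data .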
Using the global model , the potential risk of hardware failure and corresponding maintenance events are predicted , and the necessary resources are proactively prepared to run optical networks without disruption . In our framework , the secure aggregation protocol is tolerant to the malicious behavior of participants in an honest-majority model ; that is , the server and the majority of vendors are assumed to be honest , yet some may be malicious or unreliable . Compared to the original FL setting , the local models are not many , and dropouts are very rare in our framework . Furthermore , the updated global model is not shared with vendors . The reason is that , while a global model is a valuable asset for network management , it is not really beneficial to the vendors . Instead , each vendor receives a personalized maintenance report which contains the discrepancy between its local model and the global model , which is useful for improving the quality of products in the future . Fig . 1 shows an example of the ML-based predictive maintenance process in FL under the assumption that a single vendor behaves maliciously . Related work In Bonawitz et al . ( 2017a ) , a practical secure aggregation technique in an FL setting was proposed over large mobile networks . Such a framework does not fit our use case for multiple reasons . Firstly , in our use case , the global model is not shared with data owners ( vendors ) . Each vendor benefits by receiving an individual maintenance result ( e.g . the difference between the prediction and the real failure ) after the global model is deployed and hardware degradation is predicted . Secondly , scalability is not important since the number of vendors is not very large and dropouts are expected to be rare . On the other hand , secure aggregation is critical since the disclosure of the private training dataset may have a negative impact on the data owner ' s business .
Another interesting work on collaborative predictive maintenance was presented in Mohr et al . ( 2021 ) , where a combination of blockchain and federated learning techniques was applied . We apply a multi-party computation technique for data privacy since it is more suitable for our use case . More recently , in Zheng et al . ( 2021 ) , an end-to-end platform for collaborative learning using MPC was proposed . Though it is an interesting approach , it is unlikely that this platform can be applied to our use case , since collaborative learning through the use of release policies and auditing is not preferable for predictive maintenance . The rest of this paper is structured as follows : first , the use case and methods of ML-based predictive maintenance are presented . Then , the threat scenarios and defending methods are presented . Finally , our experimental results are presented and the conclusion is given . 2 ML-BASED PREDICTIVE MAINTENANCE . 2.1 USE CASE : OPTICAL TRANSMITTER DEGRADATION PREDICTION . Semiconductor lasers are considered one of the most commonly used optical transmitters for optical communication systems thanks to their high efficiency , low cost , and compactness . They have rapidly evolved to meet the requirements of the next generation optical network in terms of high speed , low power consumption , etc . However , during operation the performance of the laser can be adversely impacted by several factors such as contamination , facet oxidation , etc . Such factors are hard to predict and cause laser degradation and failure , thereby resulting in optical network disruption and high maintenance costs . Therefore , it is highly desirable to predict the degradation of the semiconductor laser device after its deployment in an optical communication system in order to enhance system reliability and minimize downtime costs . ML techniques could provide great potential to tackle the laser degradation prediction problem ( Abdelli et al. , 2020 ) .
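The pairwise additive-masking idea behind the secure aggregation approach mentioned earlier ( Bonawitz et al. ) can be illustrated with a toy sketch : each pair of vendors agrees on a random mask that one adds and the other subtracts , so each individual submission looks random while the sum is unchanged . This sketch omits the key agreement , secret sharing , and dropout handling of the full protocol :

```python
import numpy as np

def masked_submissions(updates, seed=0):
    """Toy additive masking: vendor i adds mask (i, j) for every j > i
    and subtracts mask (j, i) for every j < i, so all masks cancel in
    the sum while each masked update alone reveals nothing useful."""
    rng = np.random.default_rng(seed)  # stands in for pairwise-agreed masks
    n = len(updates)
    masks = {(i, j): rng.normal(size=updates[0].shape)
             for i in range(n) for j in range(i + 1, n)}
    out = []
    for i, u in enumerate(updates):
        masked = u.copy()
        for j in range(n):
            if i < j:
                masked += masks[(i, j)]
            elif j < i:
                masked -= masks[(j, i)]
        out.append(masked)
    return out

# Three vendors' local weight updates (toy 2-dim vectors).
updates = [np.array([1.0, 1.0]), np.array([2.0, 2.0]), np.array([3.0, 3.0])]
masked = masked_submissions(updates)
# The server sums the masked submissions and recovers the true sum
# without ever seeing an individual update in the clear.
```

In the real protocol the masks are derived from pairwise key agreement rather than a shared seed , and Shamir secret sharing lets the server unmask the sum even if some parties drop out .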
The development of such prognostic methods requires the availability of run-to-failure data sets modelling both the normal operation behavior and the degradation process under different operating conditions . However , such data is often unavailable due the scarcity of the failures during the system operation and the long time required to monitor the device up failing and then generating the reliability data . That ’ s why accelerated aging tests are often adopted to collect run-to-failure data in shorter time by speeding up the device degradation by applying accelerated stress conditions resulting in the same degradation process leading to failure ( Celaya et al. , 2011 ) . However , the burn-in aging tests are carried out just for few devices due to the high costs of performing such tests . Hence , the amount of the run-to-failure data that can be derived from such tests , might be small , which can adversely affect the performance of ML model ( Abdelli et al. , 2021 ) . Therefore , a FL approach is considered as a promising candidate to address the aforementioned problem , whereby different semiconductor laser manufacturers ( i.e vendors ) collaborate with their small local dataset , stored at their premise , in order to build an accurate and reliable global laser degradation prediction model with good generalization and robustness capabilities . Note that the semiconductor laser manufacturers might have different types of laser devices with various characteristics leading to different degradation trends , and that the data owned by each vendor is derived from aging tests conducted under different operating conditions . State that the global model is run in a server hosted by an optical network operator owning the infrastructure in which the semiconductor lasers manufactured by the different vendors are deployed . 
We consider an FL system composed of a server and N clients ( i.e . vendors ) that collaboratively train a global model using the FedAvg algorithm ( McMahan et al. , 2017 ) . The clients securely send the local model weight updates to the server using MPC . An autoencoder based on gated recurrent units ( GRU ) is used as the global model to solve the task of semiconductor laser degradation prediction . A convolutional autoencoder implemented at the server is adopted as an anomaly detection method to detect the anomalous weights sent by malicious clients . | This paper uses federated machine learning for predictive maintenance on optical networks . Federated learning provides a number of advantages , including security , privacy , and accuracy . The accuracy claims are motivated by having a broader set of failure examples to draw from , addressing a known issue in predictive maintenance . Privacy and security claims are important in predictive maintenance , but were less novel or well supported - they felt more like off-the-shelf methods applied in a new domain . Their primary comparison was against a global model . | SP:03927e9924258d065475e45ebeedc315e8f7e325
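The server-side anomaly filter described above works by reconstruction error : an autoencoder fitted on benign weight updates reconstructs honest updates well and anomalous ones poorly . The following NumPy sketch uses a PCA-style linear "autoencoder" as a stand-in for the paper's convolutional autoencoder , with synthetic data ; all names and the 3-sigma threshold are illustrative assumptions :

```python
import numpy as np

def fit_linear_autoencoder(X, k=2):
    """PCA-style encoder/decoder: project onto the top-k principal directions."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def reconstruction_errors(X, mean, comps):
    Z = (X - mean) @ comps.T   # encode
    X_hat = Z @ comps + mean   # decode
    return np.linalg.norm(X - X_hat, axis=1)

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(9, 16))  # 9 similar honest weight updates
malicious = np.full((1, 16), 5.0)            # one scaled/poisoned update
updates = np.vstack([honest, malicious])

mean, comps = fit_linear_autoencoder(honest)  # fit on trusted updates
errors = reconstruction_errors(updates, mean, comps)
threshold = errors[:9].mean() + 3 * errors[:9].std()
flagged = np.where(errors > threshold)[0]     # indices of suspected clients
```

Updates in `flagged` would be excluded from the FedAvg aggregation ; the paper's convolutional autoencoder plays the role of the linear encoder/decoder here .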
Predictive Maintenance for Optical Networks in Robust Collaborative Learning | 1 INTRODUCTION . Optical fiber networks compose the core of telecommunication infrastructure today due to their high capacity of data transmission . Optical networks rely on fully functional hardware components that run under optimal conditions . In order to reduce the risk of unplanned network interruption and service outage , it is important to predict the degradation of hardware network components correctly using analyzing tools and techniques , by which the maintenance budget and resources are allocated efficiently and timely . Due to the great benefits in industry , global predictive maintenance market is expected to reach more than $ 13 billion by 2026 ( ReportLinker , 2021 ; Simon , 2021 ) . Machine learning ( ML ) based prediction is an emerging method to improve the accuracy of predictive maintenance in manufacturing industry and communication networks . An ML model is trained by the historical data of hardware failure and then the upcoming maintenance is predicted by realtime data gathered through measurement at the edge . ML techniques can be useful if a sufficiently large , diverse , and realistic set of training data exists . Since an ML model relies so heavily on good training data , the availability of such datasets is a crucial requirement for this approach . However , it is challenging to develop a high-precision ML model for predictive maintenance mainly due to the lack of training data . The hardware failures or maintenance events do not occur frequently so that it takes time until good and meaningful training data are collected through the network . Hence , an ML model is often trained using the accelerated aging test results ( e.g . a life cycle under the extreme temperature or the over-powered condition ) that are conducted by hardware manufacturers . 
Since the components of network equipment are usually produced by small and medium-sized companies , such an ML model is trained based on the limited amount of data that are owned by each manufacturer . This situation can be relieved , if the training dataset can be aggregated from multiple vendors and consolidated in a central location . Since collaborative learning allows to train a model on larger datasets rather than the dataset available in a single vendor , a higher quality and more accurate ML model can be built . However , such collaboration is not straightforward in reality since vendors are not willing to share their training data with external companies . Aging test data are often companyconfidential and trade secret . Moreover , sharing data with foreign companies may be prohibited by privacy protection regulations in their home countries . Federated Learning Federated learning ( FL ) is a framework of enabling distributed parties to work together to train machine learning models without sharing the underlying data or trusting any of the individual participants ( Bonawitz et al. , 2017b ) . FL can be used to build an ML model from various companies for the purpose of predicting the failures , repairs , or maintenance of network systems . With the FL technique , the training data is not required to be centralized , but can instead remains with the data owners . Each vendor trains an ML model on their private data and using their own hardware . These models are then aggregated by a central server ( e.g . a network operator ) to build a unified global model that has learned from the private data of every vendor without ever directly accessing it . Hence , confidential training data ( e.g . aging test results of products ) are not visible to a server , nor other competitive vendors . 
Secure aggregation Secure aggregation in FL is a cryptographic protocol that enables each vendor to submit a local model securely and a server learns nothing but the sum of the local models . A secure aggregation method for mobile networks was presented in Bonawitz et al . ( 2017b ) and Bell et al . ( 2020 ) . This method relies on a pairwise secret exchange and Shamir ’ s t-out-of-n secret sharing scheme , focusing on the setting of mobile devices where communication is extremely expensive , and dropouts are common . There is a rich literature exploring secure aggregation in both the single-server setting ( via additive masking Bonawitz et al . ( 2016 ) , via threshold homomorphic encryption ( HE ) Halevi et al . ( 2011 ) , and via generic secure multi-party computation ( MPC ) Burkhart et al . ( 2010 ) ) as well as in the multiple non-colluding servers setting ( Corrigan-Gibbs & Boneh , 2017 ) . For instance , one can perform all computations using a fully homomorphic encryption scheme resulting in low communication but very high computation , or using classical MPC techniques with more communication but less computation . Other works use a hybrid of both and thus enjoy further improvement in performance ( Juvekar et al. , 2018 ; Mishra et al. , 2020 ) . Nevertheless , it is still an open question how to construct a secure and robust aggregation protocol that addresses all the challenges . Our contribution In this paper , we propose a secure and robust collaborative learning method using cross-vendor datasets for predictive maintenance in optical networks . Each vendor builds a local model using its own training dataset and uploads it to the server . The private dataset remains in the vendor ’ s domain and is never exposed to other companies . A server builds a global ML model by aggregating local ML models iteratively and averaging them to form an updated global model proportional to the size of dataset . 
Using the global model, the potential risk of hardware failure and the corresponding maintenance events are predicted, and the necessary resources are proactively prepared to run optical networks without disruption. In our framework, the secure aggregation protocol tolerates malicious behavior of participants in an honest-majority model; that is, the server and a majority of vendors are assumed to be honest, while some may be malicious or unreliable. Compared to the original FL setting, the local models are few and dropouts are very rare in our framework. Furthermore, the updated global model is not shared with vendors. The reason is that, while the global model is a valuable asset for network management, it is not particularly beneficial to the vendors. Instead, each vendor receives a personalized maintenance report containing the discrepancy between its local model and the global model, which is useful for improving product quality in the future. Fig. 1 shows an example of the ML-based predictive maintenance process in FL under the assumption that a single vendor behaves maliciously. Related work. In Bonawitz et al. (2017a), a practical secure aggregation technique in an FL setting over large mobile networks was proposed. Such a framework does not fit our use case for multiple reasons. Firstly, in our use case, the global model is not shared with the data owners (vendors). Each vendor benefits by receiving an individual maintenance result (e.g., the difference between the prediction and the actual failure) after the global model is deployed and hardware degradation is predicted. Secondly, scalability is not important, since the number of vendors is not very large and dropouts are expected to be rare. On the other hand, secure aggregation is critical, since disclosure of the private training dataset may negatively impact the data owner's business.
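A vendor's personalized report, described above as the discrepancy between its local model and the global model, could be summarized as follows (a toy sketch; the paper does not prescribe this exact metric, and the report structure here is hypothetical):

```python
# Toy "personalized maintenance report": the per-parameter gap between a
# vendor's local model and the aggregated global model, plus its mean
# absolute deviation as a single summary number. (Illustrative only.)

def discrepancy_report(local, global_model):
    diffs = [l - g for l, g in zip(local, global_model)]
    mad = sum(abs(d) for d in diffs) / len(diffs)
    return {"per_param_diff": diffs, "mean_abs_dev": mad}

report = discrepancy_report([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])
print(report["per_param_diff"])  # [-0.5, 0.0, 0.5]
```

A vendor whose discrepancy is consistently large for certain parameters could use that as a signal that its devices degrade differently from the fleet-wide trend.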
Another interesting work on collaborative predictive maintenance was presented in Mohr et al. (2021), where a combination of blockchain and federated learning techniques was applied. We instead apply a multi-party computation technique for data privacy, since it is more suitable for our use case. More recently, Zheng et al. (2021) proposed an end-to-end platform for collaborative learning using MPC. Though it is an interesting approach, this platform is unlikely to apply to our use case, since collaborative learning governed by release policies and auditing is not well suited to predictive maintenance. The rest of this paper is structured as follows: the use case and methods of ML-based predictive maintenance are presented first, followed by the threat scenarios and defense methods; finally, our experimental results are presented and the conclusion is given. 2 ML-BASED PREDICTIVE MAINTENANCE. 2.1 USE CASE: OPTICAL TRANSMITTER DEGRADATION PREDICTION. Semiconductor lasers are among the most commonly used optical transmitters in optical communication systems thanks to their high efficiency, low cost, and compactness. They have evolved rapidly to meet the requirements of next-generation optical networks in terms of high speed, low power consumption, etc. However, during operation the performance of a laser can be adversely impacted by several factors such as contamination and facet oxidation. Such factors are hard to predict; they cause laser degradation and failure, thereby resulting in optical network disruption and high maintenance costs. Therefore, it is highly desirable to predict the degradation of a semiconductor laser after its deployment in an optical communication system in order to enhance system reliability and minimize downtime costs. ML techniques hold great potential for tackling the laser degradation prediction problem (Abdelli et al., 2020).
The development of such prognostic methods requires the availability of run-to-failure datasets modelling both normal operating behavior and the degradation process under different operating conditions. However, such data is often unavailable due to the scarcity of failures during system operation and the long time required to monitor a device until it fails and then generate the reliability data. That is why accelerated aging tests are often adopted to collect run-to-failure data in a shorter time: the device degradation is sped up by applying accelerated stress conditions that result in the same degradation process leading to failure (Celaya et al., 2011). However, such burn-in aging tests are carried out for just a few devices due to the high cost of performing them. Hence, the amount of run-to-failure data that can be derived from such tests may be small, which can adversely affect the performance of the ML model (Abdelli et al., 2021). Therefore, an FL approach is considered a promising candidate to address the aforementioned problem, whereby different semiconductor laser manufacturers (i.e., vendors) collaborate with their small local datasets, stored at their premises, in order to build an accurate and reliable global laser degradation prediction model with good generalization and robustness capabilities. Note that the semiconductor laser manufacturers might have different types of laser devices with various characteristics, leading to different degradation trends, and that the data owned by each vendor is derived from aging tests conducted under different operating conditions. Note also that the global model runs on a server hosted by an optical network operator owning the infrastructure in which the semiconductor lasers manufactured by the different vendors are deployed.
We consider an FL system composed of a server and N clients (i.e., vendors) that collaboratively train a global model using the FedAvg algorithm (McMahan et al., 2017). The clients securely send their local model weight updates to the server using MPC. An autoencoder based on gated recurrent units (GRUs) is used as the global model to solve the task of semiconductor laser degradation prediction. A convolutional autoencoder implemented at the server is adopted as an anomaly detection method to detect anomalous weights sent by malicious clients. | This paper designs a maintenance prediction framework for key optical network components based on federated learning. The designed framework can resist malicious environments and several kinds of attacks. The paper uses simulation data to verify that the proposed method has good predictive ability and can withstand the designed simulation attacks. | SP:03927e9924258d065475e45ebeedc315e8f7e325 |
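The server-side anomaly check on client weight updates can be illustrated with a much simpler heuristic than the paper's convolutional autoencoder: the sketch below flags any client whose update lies far from the coordinate-wise median of all updates. This median-distance rule is a stand-in, not the authors' method, and all numbers are made up:

```python
# Flag clients whose weight update deviates strongly from the
# coordinate-wise median of all updates — a simple robust-aggregation
# heuristic standing in for the paper's autoencoder-based detector.

def flag_outliers(updates, threshold):
    n_params = len(updates[0])
    medians = []
    for i in range(n_params):
        column = sorted(u[i] for u in updates)
        medians.append(column[len(column) // 2])
    flagged = []
    for idx, u in enumerate(updates):
        dist = max(abs(w - m) for w, m in zip(u, medians))
        if dist > threshold:
            flagged.append(idx)
    return flagged

updates = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [9.0, -8.0]]  # last one malicious
print(flag_outliers(updates, threshold=2.0))  # [3]
```

An autoencoder-based detector generalizes this idea: instead of distance to a median, it scores each update by its reconstruction error under a model trained on benign updates.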
Finetuned Language Models are Zero-Shot Learners | We take a 137B parameter pretrained language model and instruction tune it on over 60 NLP datasets verbalized via natural language instruction templates . We evaluate this instruction-tuned model , which we call FLAN , on unseen task types . FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 datasets that we evaluate . FLAN even outperforms few-shot GPT-3 by a large margin on ANLI , RTE , BoolQ , AI2-ARC , OpenbookQA , and StoryCloze . Ablation studies reveal that number of finetuning datasets , model scale , and natural language instructions are key to the success of instruction tuning . ∗Lead contributors . Author contributions listed at end of paper . 1 INTRODUCTION . Language models ( LMs ) at scale , such as GPT-3 ( Brown et al. , 2020 ) , have been shown to perform few-shot learning remarkably well . They are less successful at zero-shot learning , however . For example , GPT-3 ’ s zero-shot performance is much worse than few-shot performance on tasks such as reading comprehension , question answering , and natural language inference . One potential reason is that , without few-shot exemplars , it is harder for models to perform well on prompts that are not similar to the format of the pretraining data . In this paper , we explore a simple method to improve the zero-shot performance of large language models , which would expand their reach to a broader audience . We leverage the intuition that NLP tasks can be described via natural language instructions , such as “ Is the sentiment of this movie review positive or negative ? ” or “ Translate ‘ how are you ’ into Chinese. ” We take a pretrained language model of 137B parameters and perform instruction tuning—finetuning the model on a mixture of more than 60 NLP datasets expressed via natural language instructions . We refer to this resulting model as FLAN , for Finetuned Language Net . 
To evaluate the zero-shot performance of FLAN on unseen tasks , we group NLP datasets into clusters based on their task types and hold out each cluster for evaluation while instruction tuning FLAN on all other clusters . For example , as shown in Figure 1 , to evaluate FLAN ’ s ability to perform natural language inference , we instruction tune the model on a range of other NLP tasks such as commonsense reasoning , translation , and sentiment analysis . As this setup ensures that FLAN has not seen any natural language inference tasks in instruction tuning , we then evaluate its ability to perform zero-shot natural language inference . Our evaluations show that FLAN substantially improves the zero-shot performance of the base 137B-parameter model . FLAN ’ s zero-shot also outperforms 175B-parameter GPT-3 ’ s zero-shot on 20 of 25 datasets that we evaluate , and even outperforms GPT-3 ’ s few-shot by a large margin on ANLI , RTE , BoolQ , AI2-ARC , OpenbookQA , and StoryCloze . In ablation studies , we find that increasing the number of task clusters in instruction tuning improves performance on unseen tasks and that the benefits of instruction tuning emerge only with sufficient model scale . Instruction tuning is a simple method that , as depicted in Figure 2 , combines appealing aspects of both the pretrain–finetune and prompting paradigms by using supervision via finetuning to improve language model ’ s responses to inference-time text interactions . Our empirical results demonstrate promising abilities of language models to perform tasks described purely via instructions . Source code for loading the instruction tuning dataset used for FLAN is publicly available at https : //github.com/google-research/flan . 2 FLAN : INSTRUCTION TUNING IMPROVES ZERO-SHOT LEARNING . The motivation of instruction tuning is to improve the ability of language models to respond to NLP instructions . 
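The cluster-holdout protocol described above (hold out one task cluster, instruction-tune on all others) can be sketched as follows; the cluster and dataset names are an illustrative subset, not the full twelve-cluster mixture:

```python
# Leave-one-cluster-out evaluation splits: for each task cluster, the
# training pool is every dataset from all OTHER clusters, and the held-out
# cluster is used only for zero-shot evaluation.

def holdout_splits(clusters):
    splits = []
    for held_out in clusters:
        train = [d for c, ds in clusters.items() if c != held_out for d in ds]
        splits.append((held_out, train))
    return splits

clusters = {
    "nli": ["anli", "rte"],
    "translation": ["wmt14_fr_en"],
    "sentiment": ["sst2", "imdb"],
}
for held_out, train in holdout_splits(clusters):
    print(held_out, "->", train)
```

Each split corresponds to one instruction-tuned checkpoint, which is why evaluating on c clusters requires training c models.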
The idea is that by using supervision to teach an LM to perform tasks described via instructions, the LM will learn to follow instructions and do so even for unseen tasks. To evaluate performance on unseen tasks, we group datasets into clusters by task type and hold out each task cluster for evaluation while instruction tuning on all remaining clusters. 2.1 TASKS & TEMPLATES. As creating an instruction tuning dataset with many tasks from scratch would be resource-intensive, we transform existing datasets from the research community into an instructional format. We aggregate 62 text datasets that are publicly available on TensorFlow Datasets, including both language understanding and language generation tasks, into a single mixture. Figure 3 shows these datasets—each dataset is categorized into one of twelve task clusters, where the datasets in a given cluster are of the same task type. Descriptions, sizes, and examples of each dataset are shown in Appendix G. For each dataset, we manually compose ten unique templates that use natural language instructions to describe the task for that dataset. While most of the ten templates describe the original task, to increase diversity, for each dataset we also include up to three templates that "turned the task around" (e.g., for sentiment classification we include templates asking to generate a movie review). We then instruction tune a pretrained language model on the mixture of all datasets, with examples in each dataset formatted via a randomly selected instruction template for that dataset. Figure 4 shows multiple instruction templates for a natural language inference dataset, instantiated with the premise "Russian cosmonaut Valery Polyakov set the record for the longest continuous amount of time spent in space, a staggering 438 days, between 1994 and 1995.", the hypothesis "Russians hold the record for the longest stay in space.", the options "- yes - no", and the targets "Entailment" / "Not entailment".

Figure 4: Multiple instruction templates describing a natural language inference task. Template 1: "<premise> Based on the paragraph above, can we conclude that <hypothesis>? <options>". Template 2: "<premise> Can we infer the following? <hypothesis> <options>". Template 3: "Read the following and determine if the hypothesis can be inferred from the premise: Premise: <premise> Hypothesis: <hypothesis> <options>". Template 4, …

2.2 EVALUATION SPLITS. We are interested in how FLAN performs on tasks not seen in instruction tuning, and so it is crucial to define what counts as an unseen task. Whereas some prior work defines unseen tasks by disallowing the same dataset to appear in training, we use a more conservative definition that leverages the task clusters from Figure 3. In this work, we only consider dataset D unseen at evaluation time if no datasets from any task clusters that D belongs to were seen during instruction tuning. For instance, if D is an entailment task, then no entailment datasets appeared in instruction tuning, and we instruction-tuned on all other clusters.1 Hence, to evaluate zero-shot FLAN on c task clusters, we instruction tune c models, where each model holds out a different task cluster for evaluation. 1When evaluating on the read. comp. with commonsense cluster, both read. comp. and commonsense reasoning were dropped from instruction tuning. Conversely, the read. comp. with commonsense cluster was not used for instruction tuning when evaluating on read. comp. or commonsense reasoning. We also drop the paraphrase cluster from instruction tuning when evaluating on NLI tasks and vice-versa. 2.3 CLASSIFICATION WITH OPTIONS. The output space for a given task is either one of several classes (classification) or free text (generation).
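Instantiating one of the §2.1 instruction templates amounts to substituting the placeholders with a concrete example. A minimal sketch (the template text follows Figure 4; the helper name and dict layout are illustrative, not the paper's code):

```python
# Fill an instruction template's <placeholders> with a concrete example.

def fill_template(template, example):
    out = template
    for key, value in example.items():
        out = out.replace(f"<{key}>", value)
    return out

template = ("<premise> Based on the paragraph above, "
            "can we conclude that <hypothesis>? <options>")
example = {
    "premise": ("Russian cosmonaut Valery Polyakov set the record for the "
                "longest continuous amount of time spent in space, a "
                "staggering 438 days, between 1994 and 1995."),
    "hypothesis": "Russians hold the record for the longest stay in space.",
    "options": "OPTIONS: - yes - no",
}
print(fill_template(template, example))
```

During instruction tuning, one of the ten templates for a dataset is chosen at random for each example before this kind of substitution is applied.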
As FLAN is an instruction-tuned version of a decoder-only language model , it naturally responds in free text , and so no further modifications are needed for generation tasks . For classification tasks , prior work ( Brown et al. , 2020 ) used a rank classification approach where , for example , only two outputs ( “ yes ” and “ no ” ) are considered and the higher probability one is taken as the model ’ s prediction . Though this procedure is logically sound , it is imperfect in that the probability mass for answers may have an undesired distribution among ways of saying each answer ( e.g. , a large number of alternative ways of saying “ yes ” may lower the probability mass assigned to “ yes ” ) . Therefore , we include an options suffix , in which we append the token OPTIONS to the end of a classification task along with a list of the output classes for that task . This makes the model aware of which choices are desired when responding to classification tasks . Example use of options is shown in the NLI and commonsense examples in Figure 1 . 2.4 TRAINING DETAILS . Model architecture and pretraining . In our experiments , we use LaMDA-PT , a dense left-to-right , decoder-only transformer language model of 137B parameters ( Thoppilan et al. , 2022 ) . This model is pretrained on a collection of web documents ( including those with computer code ) , dialog data , and Wikipedia , tokenized into 2.49T BPE tokens with a 32k vocabulary using the SentencePiece library ( Kudo & Richardson , 2018 ) . Around 10 % of the pretraining data was non-English . Note that LaMDA-PT only has language model pretraining ( c.f . LaMDA , which was finetuned for dialog ) . Instruction tuning procedure . FLAN is the instruction-tuned version of LaMDA-PT . Our instruction tuning pipeline mixes all datasets and randomly samples from each dataset . 
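The rank-classification procedure from prior work, and the pitfall it suffers, can be shown with toy numbers (no real model is queried here; the probabilities are made up purely for illustration):

```python
import math

# Rank classification: score each candidate answer string by its
# log-probability under the model and pick the argmax. The toy log-probs
# below illustrate the pitfall discussed above: probability mass spread
# over paraphrases of "yes" can leave each single phrasing below "no".

def rank_classify(logprobs, candidates):
    """Pick the candidate answer with the highest log-probability."""
    return max(candidates, key=lambda c: logprobs[c])

logprobs = {"yes": math.log(0.28), "yeah": math.log(0.25),
            "yep": math.log(0.15), "no": math.log(0.32)}

# Affirmative phrasings total 0.68 vs. 0.32 for "no", yet ranking over
# the two canonical strings picks "no" — the distributional issue the
# OPTIONS suffix is meant to mitigate.
print(rank_classify(logprobs, ["yes", "no"]))  # no
```

Appending the OPTIONS suffix instead steers the model's free-text generation toward one of the listed class strings, sidestepping the paraphrase problem.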
To balance the different sizes of datasets , we limit the number of training examples per dataset to 30k and follow the examples-proportional mixing scheme ( Raffel et al. , 2020 ) with a mixing rate maximum of 3k.2 We finetune all models for 30k gradient steps with a batch size of 8,192 tokens using the Adafactor Optimizer ( Shazeer & Stern , 2018 ) with a learning rate of 3e-5 . The input and target sequence lengths used in finetuning are 1024 and 256 , respectively . We use packing ( Raffel et al. , 2020 ) to combine multiple training examples into a single sequence , separating inputs from targets using a special EOS token . This instruction tuning takes around 60 hours on a TPUv3 with 128 cores . For all evaluations , we report results on the final checkpoint trained for 30k steps . 3 RESULTS . We evaluate FLAN on natural language inference , reading comprehension , closed-book QA , translation , commonsense reasoning , coreference resolution , and struct-to-text . As described in §2.2 , we evaluate on unseen tasks by grouping datasets into task clusters and holding out each cluster for evaluation while instruction tuning on all remaining clusters ( i.e. , each evaluation task cluster uses a different checkpoint ) . For each dataset , we evaluate the mean of performance on all templates , which proxies the expected performance given a typical natural language instruction . As a dev set is sometimes available for manual prompt engineering ( Brown et al. , 2020 ) , for each dataset we also obtain the test set performance using the template with the best dev set performance . For comparison , we report zero and few-shot results for LaMDA-PT using the same prompts as GPT-3 ( as LaMDA-PT is not suitable for natural instructions without instruction tuning ) . This baseline provides the most direct ablation of how much instruction tuning helps . Instruction tuning significantly improves LaMDA-PT on most datasets . 
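The examples-proportional mixing scheme with a mixing-rate maximum can be sketched as follows: each dataset's sampling weight is proportional to its size, capped at 3,000 examples, so very large datasets do not dominate the mixture (dataset names and sizes here are hypothetical):

```python
# Examples-proportional mixing with a mixing-rate maximum (Raffel et al.,
# 2020): a dataset receives no additional sampling weight for examples
# beyond the cap.

CAP = 3000

def mixing_probs(sizes):
    capped = {name: min(n, CAP) for name, n in sizes.items()}
    total = sum(capped.values())
    return {name: c / total for name, c in capped.items()}

sizes = {"big_qa": 300_000, "small_nli": 1_500, "mid_sent": 3_000}
print(mixing_probs(sizes))  # big_qa: 0.4, small_nli: 0.2, mid_sent: 0.4
```

Note that a 300k-example dataset ends up with the same sampling weight as a 3k-example one, while a 1.5k-example dataset keeps a weight proportional to its true size.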
We also show the zero-shot performances of GPT-3 175B ( Brown et al. , 2020 ) and GLaM 64B/64E ( Du et al. , 2021 ) , as reported in their respective papers . With the best dev template , zero-shot FLAN outperforms zero-shot GPT-3 on 20 of 25 datasets and even surpasses GPT-3 ’ s few-shot performance on 10 datasets . With the best dev-template , zero-shot FLAN outperforms zero-shot GLaM on 13 of 19 available datasets and one-shot GLaM on 11 of 19 datasets . 2In this mixing scheme , a mixing rate maximum of 3,000 means that a dataset does not receive additional sampling weight for examples in excess of 3,000 . Overall , we observe that instruction tuning is very effective on tasks naturally verbalized as instructions ( e.g. , NLI , QA , translation , struct-to-text ) and is less effective on tasks directly formulated as language modeling , where instructions would be largely redundant ( e.g. , commonsense reasoning and coreference resolution tasks that are formatted as finishing an incomplete sentence or paragraph ) . Results on natural language inference , reading comprehension , closed-book QA , and translation are summarized in Figure 5 and described below . Natural language inference ( NLI ) . On five NLI datasets , where a model must determine whether a hypothesis is true given some premise , FLAN outperforms all baselines by a large margin . As noted by Brown et al . ( 2020 ) , perhaps one reason why GPT-3 struggles with NLI is that NLI examples are unlikely to have appeared naturally in an unsupervised training set and are thus awkwardly phrased as a continuation of a sentence . For FLAN , we phrase NLI as the more natural question “ Does < premise > mean that < hypothesis > ? ” , achieving much higher performance . Reading comprehension . On reading comprehension , where models are asked to answer a question about a provided passage , FLAN outperforms baselines for MultiRC ( Khashabi et al. , 2018 ) and OBQA ( Mihaylov et al. , 2018 ) . 
On BoolQ ( Clark et al. , 2019a ) , FLAN outperforms GPT-3 by a large margin , though LaMDA-PT already achieves high performance on BoolQ . Closed-book QA . For closed-book QA , which asks models to answer questions about the world without access to specific information containing the answer , FLAN outperforms GPT-3 on all four datasets . Compared to GLaM , FLAN has better performance on ARC-e and ARC-c ( Clark et al. , 2018 ) , and slightly lower performance on NQ ( Lee et al. , 2019 ; Kwiatkowski et al. , 2019 ) and TQA ( Joshi et al. , 2017 ) . Translation . Similar to GPT-3 , the training data for LaMDA-PT is around 90 % English and includes some text in other languages that was not specifically used to train the model to perform machine translation . We also evaluate FLAN ’ s performance on machine translation for the three datasets evaluated in the GPT-3 paper : French–English from WMT ’ 14 ( Bojar et al. , 2014 ) , and German– English and Romanian–English from WMT ’ 16 ( Bojar et al. , 2016 ) . Compared with GPT-3 , FLAN outperforms zero-shot GPT-3 for all six evaluations , though it underperforms few-shot GPT-3 in most cases . Similar to GPT-3 , FLAN shows strong results for translating into English and compares favorably against supervised translation baselines . Translating from English into other languages , however , was relatively weaker , as might be expected given that FLAN uses an English sentencepiece tokenizer and that the majority of pretraining data is English . Additional tasks . Although we see strong results for the above task clusters , one limitation with instruction tuning is that it does not improve performance for many language modeling tasks ( e.g. , commonsense reasoning or coreference resolution tasks formulated as sentence completions ) . For seven commonsense reasoning and coreference resolution tasks ( see Table 2 in the Appendix ) , FLAN only outperforms LaMDA-PT on three of the seven tasks . 
This negative result indicates that when the downstream task is the same as the original language modeling pre-training objective (i.e., in cases where instructions are largely redundant), instruction tuning is not useful. Finally, we report results for sentiment analysis, paraphrase detection, and struct-to-text, as well as additional datasets for which GPT-3 results are not available, in Table 2 and Table 1 in the Appendix. Generally, zero-shot FLAN outperforms zero-shot LaMDA-PT and is comparable with or better than few-shot LaMDA-PT. | The paper explores a simple and effective method to improve the zero-shot performance of pretrained language models. The authors take a 137B parameter pretrained model and finetune it on multiple tasks verbalized via natural language instruction templates. As a result, the instruction-tuned model performs well on unseen tasks in the zero-shot setting. | SP:60232b35685b12a1aa583e9b2ef650eafb7bfcc0 |
Finetuned Language Models are Zero-Shot Learners | We take a 137B parameter pretrained language model and instruction tune it on over 60 NLP datasets verbalized via natural language instruction templates . We evaluate this instruction-tuned model , which we call FLAN , on unseen task types . FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 datasets that we evaluate . FLAN even outperforms few-shot GPT-3 by a large margin on ANLI , RTE , BoolQ , AI2-ARC , OpenbookQA , and StoryCloze . Ablation studies reveal that number of finetuning datasets , model scale , and natural language instructions are key to the success of instruction tuning . ∗Lead contributors . Author contributions listed at end of paper . 1 INTRODUCTION . Language models ( LMs ) at scale , such as GPT-3 ( Brown et al. , 2020 ) , have been shown to perform few-shot learning remarkably well . They are less successful at zero-shot learning , however . For example , GPT-3 ’ s zero-shot performance is much worse than few-shot performance on tasks such as reading comprehension , question answering , and natural language inference . One potential reason is that , without few-shot exemplars , it is harder for models to perform well on prompts that are not similar to the format of the pretraining data . In this paper , we explore a simple method to improve the zero-shot performance of large language models , which would expand their reach to a broader audience . We leverage the intuition that NLP tasks can be described via natural language instructions , such as “ Is the sentiment of this movie review positive or negative ? ” or “ Translate ‘ how are you ’ into Chinese. ” We take a pretrained language model of 137B parameters and perform instruction tuning—finetuning the model on a mixture of more than 60 NLP datasets expressed via natural language instructions . We refer to this resulting model as FLAN , for Finetuned Language Net . 
To evaluate the zero-shot performance of FLAN on unseen tasks , we group NLP datasets into clusters based on their task types and hold out each cluster for evaluation while instruction tuning FLAN on all other clusters . For example , as shown in Figure 1 , to evaluate FLAN ’ s ability to perform natural language inference , we instruction tune the model on a range of other NLP tasks such as commonsense reasoning , translation , and sentiment analysis . As this setup ensures that FLAN has not seen any natural language inference tasks in instruction tuning , we then evaluate its ability to perform zero-shot natural language inference . Our evaluations show that FLAN substantially improves the zero-shot performance of the base 137B-parameter model . FLAN ’ s zero-shot also outperforms 175B-parameter GPT-3 ’ s zero-shot on 20 of 25 datasets that we evaluate , and even outperforms GPT-3 ’ s few-shot by a large margin on ANLI , RTE , BoolQ , AI2-ARC , OpenbookQA , and StoryCloze . In ablation studies , we find that increasing the number of task clusters in instruction tuning improves performance on unseen tasks and that the benefits of instruction tuning emerge only with sufficient model scale . Instruction tuning is a simple method that , as depicted in Figure 2 , combines appealing aspects of both the pretrain–finetune and prompting paradigms by using supervision via finetuning to improve language model ’ s responses to inference-time text interactions . Our empirical results demonstrate promising abilities of language models to perform tasks described purely via instructions . Source code for loading the instruction tuning dataset used for FLAN is publicly available at https : //github.com/google-research/flan . 2 FLAN : INSTRUCTION TUNING IMPROVES ZERO-SHOT LEARNING . The motivation of instruction tuning is to improve the ability of language models to respond to NLP instructions . 
The idea is that by using supervision to teach an LM to perform tasks described via instructions , the LM will learn to follow instructions and do so even for unseen tasks . To evaluate performance on unseen tasks , we group datasets into clusters by task type and hold out each task cluster for evaluation while instruction tuning on all remaining clusters . 2.1 TASKS & TEMPLATES . As creating an instruction tuning dataset with many tasks from scratch would be resource-intensive , we transform existing datasets from the research community into an instructional format . We aggregate 62 text datasets that are publicly available on Tensorflow Datasets , including both language understanding and language generation tasks , into a single mixture . Figure 3 shows these datasets— each dataset is categorized into one of twelve task clusters , for which datasets in a given cluster are of the same task type . Descriptions , sizes , and examples of each dataset are shown in Appendix G . TREC CoLA Math . For each dataset , we manually compose ten unique templates that use natural language instructions to describe the task for that dataset . While most of the ten templates describe the original task , to increase diversity , for each dataset we also include up to three templates that “ turned the task around , ” ( e.g. , for sentiment classification we include templates asking to generate a movie review ) . We then instruction tune a pretrained language model on the mixture of all datasets , with examples in each dataset formatted via a randomly selected instruction template for that dataset . Figure 4 shows multiple instruction templates for a natural language inference dataset . Entailment Not entailment Russian cosmonaut Valery Polyakov set the record for the longest continuous amount of time spent in space , a staggering 438 days , between 1994 and 1995 . Premise Russians hold the record for the longest stay in space . 
Hypothesis Target Options : - yes - no < premise > Can we infer the following ? < hypothesis > < options > Template 2 Template 1 < premise > Based on the paragraph above , can we conclude that < hypothesis > ? < options > Read the following and determine if the hypothesis can be inferred from the premise : Premise : < premise > Hypothesis : < hypothesis > < options > Template 3 Template 4 , … Figure 4 : Multiple instruction templates describing a natural language inference task . 2.2 EVALUATION SPLITS . We are interested in how FLAN performs on tasks not seen in instruction tuning , and so it is crucial to define what counts as an unseen task . Whereas some prior work defines unseen tasks by disallowing the same dataset to appear in training , we use a more conservative definition that leverages the task clusters from Figure 3 . In this work , we only consider dataset D unseen at evaluation time if no datasets from any task clusters that D belongs to were seen during instruction tuning . For instance , if D is an entailment task , then no entailment datasets appeared in instruction tuning , and we instruction-tuned on all other clusters.1 Hence , to evaluate zero-shot FLAN on c task clusters , we instruction tune c models , where each model holds out a different task cluster for evaluation . 1When evaluating on the read . comp . with commonsense cluster , both read . comp . and commonsense reasoning were dropped from instruction tuning . Conversely , the read . comp . with commonsense cluster was not used for instruction tuning when evaluating on read . comp . or commonsense reasoning . We also drop the paraphrase cluster from instruction tuning when evaluating on NLI tasks and vice-versa . 2.3 CLASSIFICATION WITH OPTIONS . The output space for a given task is either one of several classes ( classification ) or free text ( generation ) . 
As FLAN is an instruction-tuned version of a decoder-only language model , it naturally responds in free text , and so no further modifications are needed for generation tasks . For classification tasks , prior work ( Brown et al. , 2020 ) used a rank classification approach where , for example , only two outputs ( “ yes ” and “ no ” ) are considered and the higher probability one is taken as the model ’ s prediction . Though this procedure is logically sound , it is imperfect in that the probability mass for answers may have an undesired distribution among ways of saying each answer ( e.g. , a large number of alternative ways of saying “ yes ” may lower the probability mass assigned to “ yes ” ) . Therefore , we include an options suffix , in which we append the token OPTIONS to the end of a classification task along with a list of the output classes for that task . This makes the model aware of which choices are desired when responding to classification tasks . Example use of options is shown in the NLI and commonsense examples in Figure 1 . 2.4 TRAINING DETAILS . Model architecture and pretraining . In our experiments , we use LaMDA-PT , a dense left-to-right , decoder-only transformer language model of 137B parameters ( Thoppilan et al. , 2022 ) . This model is pretrained on a collection of web documents ( including those with computer code ) , dialog data , and Wikipedia , tokenized into 2.49T BPE tokens with a 32k vocabulary using the SentencePiece library ( Kudo & Richardson , 2018 ) . Around 10 % of the pretraining data was non-English . Note that LaMDA-PT only has language model pretraining ( c.f . LaMDA , which was finetuned for dialog ) . Instruction tuning procedure . FLAN is the instruction-tuned version of LaMDA-PT . Our instruction tuning pipeline mixes all datasets and randomly samples from each dataset . 
To balance the different sizes of datasets , we limit the number of training examples per dataset to 30k and follow the examples-proportional mixing scheme ( Raffel et al. , 2020 ) with a mixing rate maximum of 3k.2 We finetune all models for 30k gradient steps with a batch size of 8,192 tokens using the Adafactor Optimizer ( Shazeer & Stern , 2018 ) with a learning rate of 3e-5 . The input and target sequence lengths used in finetuning are 1024 and 256 , respectively . We use packing ( Raffel et al. , 2020 ) to combine multiple training examples into a single sequence , separating inputs from targets using a special EOS token . This instruction tuning takes around 60 hours on a TPUv3 with 128 cores . For all evaluations , we report results on the final checkpoint trained for 30k steps . 3 RESULTS . We evaluate FLAN on natural language inference , reading comprehension , closed-book QA , translation , commonsense reasoning , coreference resolution , and struct-to-text . As described in §2.2 , we evaluate on unseen tasks by grouping datasets into task clusters and holding out each cluster for evaluation while instruction tuning on all remaining clusters ( i.e. , each evaluation task cluster uses a different checkpoint ) . For each dataset , we evaluate the mean of performance on all templates , which proxies the expected performance given a typical natural language instruction . As a dev set is sometimes available for manual prompt engineering ( Brown et al. , 2020 ) , for each dataset we also obtain the test set performance using the template with the best dev set performance . For comparison , we report zero and few-shot results for LaMDA-PT using the same prompts as GPT-3 ( as LaMDA-PT is not suitable for natural instructions without instruction tuning ) . This baseline provides the most direct ablation of how much instruction tuning helps . Instruction tuning significantly improves LaMDA-PT on most datasets . 
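The examples-proportional mixing scheme with a mixing-rate maximum described above can be sketched as follows. The dataset names are hypothetical; the cap of 3,000 mirrors the value used in the text (each dataset's sampling weight is proportional to its example count, capped at the maximum, then normalized).

```python
# Sketch of examples-proportional mixing with a mixing-rate maximum:
# weight each dataset by min(size, rate_max), then normalize.
def mixing_weights(dataset_sizes, rate_max=3000):
    capped = {name: min(n, rate_max) for name, n in dataset_sizes.items()}
    total = sum(capped.values())
    return {name: c / total for name, c in capped.items()}

sizes = {"nli_large": 30000, "nli_small": 1500, "qa": 9000}  # hypothetical
weights = mixing_weights(sizes)
# nli_large and qa both hit the 3,000 cap, so they receive equal weight;
# nli_small contributes its full 1,500 examples and gets half as much.
```

This is why a dataset does not receive additional sampling weight for examples in excess of the cap: past 3,000 examples, its contribution to the mixture stops growing.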
We also show the zero-shot performances of GPT-3 175B ( Brown et al. , 2020 ) and GLaM 64B/64E ( Du et al. , 2021 ) , as reported in their respective papers . With the best dev template , zero-shot FLAN outperforms zero-shot GPT-3 on 20 of 25 datasets and even surpasses GPT-3 ’ s few-shot performance on 10 datasets . With the best dev-template , zero-shot FLAN outperforms zero-shot GLaM on 13 of 19 available datasets and one-shot GLaM on 11 of 19 datasets . 2In this mixing scheme , a mixing rate maximum of 3,000 means that a dataset does not receive additional sampling weight for examples in excess of 3,000 . Overall , we observe that instruction tuning is very effective on tasks naturally verbalized as instructions ( e.g. , NLI , QA , translation , struct-to-text ) and is less effective on tasks directly formulated as language modeling , where instructions would be largely redundant ( e.g. , commonsense reasoning and coreference resolution tasks that are formatted as finishing an incomplete sentence or paragraph ) . Results on natural language inference , reading comprehension , closed-book QA , and translation are summarized in Figure 5 and described below . Natural language inference ( NLI ) . On five NLI datasets , where a model must determine whether a hypothesis is true given some premise , FLAN outperforms all baselines by a large margin . As noted by Brown et al . ( 2020 ) , perhaps one reason why GPT-3 struggles with NLI is that NLI examples are unlikely to have appeared naturally in an unsupervised training set and are thus awkwardly phrased as a continuation of a sentence . For FLAN , we phrase NLI as the more natural question “ Does < premise > mean that < hypothesis > ? ” , achieving much higher performance . Reading comprehension . On reading comprehension , where models are asked to answer a question about a provided passage , FLAN outperforms baselines for MultiRC ( Khashabi et al. , 2018 ) and OBQA ( Mihaylov et al. , 2018 ) . 
On BoolQ ( Clark et al. , 2019a ) , FLAN outperforms GPT-3 by a large margin , though LaMDA-PT already achieves high performance on BoolQ . Closed-book QA . For closed-book QA , which asks models to answer questions about the world without access to specific information containing the answer , FLAN outperforms GPT-3 on all four datasets . Compared to GLaM , FLAN has better performance on ARC-e and ARC-c ( Clark et al. , 2018 ) , and slightly lower performance on NQ ( Lee et al. , 2019 ; Kwiatkowski et al. , 2019 ) and TQA ( Joshi et al. , 2017 ) . Translation . Similar to GPT-3 , the training data for LaMDA-PT is around 90 % English and includes some text in other languages that was not specifically used to train the model to perform machine translation . We also evaluate FLAN ’ s performance on machine translation for the three datasets evaluated in the GPT-3 paper : French–English from WMT ’ 14 ( Bojar et al. , 2014 ) , and German– English and Romanian–English from WMT ’ 16 ( Bojar et al. , 2016 ) . Compared with GPT-3 , FLAN outperforms zero-shot GPT-3 for all six evaluations , though it underperforms few-shot GPT-3 in most cases . Similar to GPT-3 , FLAN shows strong results for translating into English and compares favorably against supervised translation baselines . Translating from English into other languages , however , was relatively weaker , as might be expected given that FLAN uses an English sentencepiece tokenizer and that the majority of pretraining data is English . Additional tasks . Although we see strong results for the above task clusters , one limitation with instruction tuning is that it does not improve performance for many language modeling tasks ( e.g. , commonsense reasoning or coreference resolution tasks formulated as sentence completions ) . For seven commonsense reasoning and coreference resolution tasks ( see Table 2 in the Appendix ) , FLAN only outperforms LaMDA-PT on three of the seven tasks . 
This negative result indicates that when the downstream task is the same as the original language modeling pre-training objective ( i.e. , in cases where instructions are largely redundant ) , instruction tuning is not useful . Finally , we report results for sentiment analysis , paraphrase detection , and struct-to-text , as well as additional datasets for which GPT-3 results are not available , in Table 2 and Table 1 in the Appendix . Generally , zero-shot FLAN outperforms zero-shot LaMDA-PT and is comparable with or better than few-shot LaMDA-PT . | The paper proposes a simple method, "instruction-tuning", to improve the zero-shot learning capability of large language models, which 1) annotates prompts for a wide range of tasks and then 2) fine-tunes the model to "answer/respond to" those prompts. The empirical results are impressive: after instruction-tuning, the 0-shot performance is better than GPT-3 (0-shot, sometimes few-shot) on a wide range of datasets; nevertheless, on datasets with formats already similar to language modeling, the performance gain is negligible or even negative. The paper also made a few other observations: 1) performance benefits from the number of task clusters, 2) instruction-tuning is only beneficial when the model size is large enough, and 3) few-shot learning still helps. | SP:60232b35685b12a1aa583e9b2ef650eafb7bfcc0 |
Inverse Contextual Bandits: Learning How Behavior Evolves over Time | 1 INTRODUCTION . Modeling decision-making policies is a central concern in computational and behavioral science , with key applications in healthcare [ 1 ] , economics [ 2 ] , and cognition [ 3 ] . The business of policy learning is to determine an agent ’ s decision-making policy given observations of their behavior . Typically , the objective is either to replicate the behavior of some demonstrator ( cf . “ imitation learning ” ) [ 4 , 5 ] , or to match their performance on the basis of some reward function ( cf . “ apprenticeship learning ” ) [ 6 , 7 ] . However , equally important is the pursuit of “ descriptive modeling ” [ 8 , 9 ] —that is , of learning interpretable parameterizations of behavior for auditing , quantifying , and understanding decision-making policies . For instance , recent work has studied representing behaviors in terms of rules [ 10 ] , goals [ 11 ] , intentions [ 12 ] , preferences [ 13 ] , subjective dynamics [ 14 ] , as well as counterfactual outcomes [ 15 ] . Evolving Behaviors In this work , we ask a novel descriptive question : How has the observed behavior changed over time ? While conventional approaches to policy learning almost invariably assume that decision-making agents are stationary , this is rarely the case in practice : In many settings , behaviors evolve constantly as decision-makers learn more about their environment and adjust their policies accordingly . In fact , disseminating new knowledge from medical research into actual clinical practice is itself a major endeavor in healthcare [ 16–18 ] . 
This research question is new : While capturing “ variation in practice ” in observed data has been studied in the context of demonstrations containing mixed policies [ 19 , 20 ] , multiple tasks [ 21 , 22 ] , and subgroup-/patient-specific preferences [ 8 ] , little work has attempted to capture variation in practice over time as an agent ’ s knowledge of the world evolves . Example ( Organ Allocations ) As our core application , consider organ allocation practices for liver transplantations : In the medical community , our understanding of organ transplantations has changed numerous times over past decades [ 23–25 ] . Thus an immediate question lies in how actual organ allocation practices have changed over the years . Having a data-driven , quantitative , and— importantly—interpretable description of how practices have evolved is crucial : It would enable policy-makers to evaluate if the policies they introduced have had the intended impact on practice ; this would in turn play a substantial role in designing better policies going forward [ 8 ] ( see Figure 1 ) . To tackle questions of this form , we desire a policy learning method that satisfies three key desiderata : • It should provide an ( i . ) interpretable description of observed behavior ( see Appendix C ) . Suppose that policy-makers introduce a new policy to promote Features A and B as main factors of consideration in decision-making ( whereas clinicians rely on Feature C at the time ) , and the policy succeeds at establishing Feature A as such but fails to do so for Feature B . Using ICB , we can infer the time-varying importances of these features and observe quantitatively that importance of Feature B stayed the same despite the intentions of the new policy . Having made this observation , the policymakers can now update their policy accordingly . • It should be able to capture an agent ’ s ( ii . ) nonstationary knowledge of the world . • It should operate in an ( iii . 
) offline manner— since online experimentation is impossible in high-stakes environments such as healthcare . Inverse Contextual Bandits To accomplish this , we first identify the organ allocation problem as a contextual bandits problem [ 26–28 ] : Given each arriving instance of patient and organ features ( i.e . the context ) , an agent makes an allocation decision ( i.e . the action ) , whence some measure of feedback is perceived ( i.e . internal reward ) . Crucially , the environment ( i.e . precisely , its reward dynamics ) is unknown to the agent and must be actively learned , so the agent maintains beliefs about their environment ( i.e . internal knowledge ) . Not only must an agent select actions that “ exploit ” the knowledge they have , but they must also select actions that “ explore ” the environment to update their knowledge . Thus the behavior of a learning agent is naturally modeled as a generalized bandit strategy—that is , how to take actions based on their knowledge , and how to update their knowledge based on the outcomes of their actions . Now , the forward contextual bandits problem asks the ( normative ) question : Given an unknown environment , what is an effective bandit strategy that minimizes some notion of regret ? By contrast , our focus here is instead on the opposite direction—that is , in formalizing and solving the problem of Inverse Contextual Bandits ( “ ICB ” ) : We ask the ( descriptive ) question : Given demonstrated behavior from a decision-making agent , how has the agent ’ s knowledge been evolving over time ? Precisely , we wish to learn interpretable representations of generalized bandit strategies from the observable context-action pairs generated by those strategies—regardless of whether those strategies are effective . Contributions Our contributions are three-fold .
In the sequel , we first formalize the ICB problem , identifying it with the data-driven objective of inferring an agent ’ s internal reward function along with their internal trajectory of beliefs about the environment ( Section 2 ) . Second , we propose two concrete learning algorithms , imposing different specifications regarding the agent ’ s behavioral strategy : The first parameterizes the agent ’ s knowledge in terms of Bayesian updates , whereas the second makes the milder specification that the agent ’ s behavior evolves smoothly over time ( Section 3 ) . Third , through both simulated and real-world data for liver transplantations , we illustrate how ICB can be applied as an investigative device for recovering and explaining the evolution of organ allocation practices over the years , as well as benchmarking and validating the accuracy of our algorithms ( Section 5 ) . 2 INVERSE CONTEXTUAL BANDITS . Preliminaries Consider a Markov decision process of the form D : = ( X , A , R , T ) , where X indicates the state space , A the action space , R ∈ ∆ ( R ) X×A the reward dynamics , and T ∈ ∆ ( X ) X×A the transition dynamics . At each time t ∈ Z+ , the decision-making agent is presented with some state xt ∈ X and decides to take an action at ∈ A , whence an immediate reward rt ∼ R ( xt , at ) is received , and a subsequent state xt+1 ∼ T ( xt , at ) is presented . Let the decision-making policy employed by the agent be denoted π ∈ ∆ ( A ) X , such that actions at any time are sampled according to at ∼ π ( xt ) . In the forward direction , given dynamics R , T , the reinforcement learning problem ( “ RL ” ) deals with determining the optimal policy that maximizes some notion of expected cumulative rewards [ 29 ] : $\pi^*_{R,T} := \operatorname{argmax}_{\pi \in \Delta(A)^X} \mathbb{E}_{\pi,R,T} \big[ \textstyle\sum_t r_t \big]$ .
In the opposite direction , given some observed behavior policy πb and transition dynamics T , the inverse reinforcement learning problem ( “ IRL ” ) deals with determining the reward dynamics R with respect to which πb appears optimal . For instance , the classic max-margin approach seeks [ 30 ] : $R^* := \operatorname{argmin}_R \big( \max_\pi \mathbb{E}_{\pi,R,T} \textstyle\sum_t r_t - \mathbb{E}_{\pi_b,R,T} \textstyle\sum_t r_t \big)$ . Having Dynamics Access vs. Having No Dynamics Access In conventional RL ( and IRL ) , the decision-maker has unrestricted access to environment dynamics—either explicitly ( i.e . R , T are simply known and used in computing the optimal policy ) , or implicitly ( i.e . the agent may interact freely with the environment during training ) . In contrast , in our setting the agent has no such luxuries—not only are the dynamics not known to the agent , but neither do they enjoy a distinct sandboxed training phase prior to live deployment . Without such access , the agent must consider both the information they may gain when taking actions ( cf . “ exploration ” ) and also the expected rewards due to those actions ( cf . “ exploitation ” ) . Note that this property results in much more difficult learning problems—in both the forward and inverse directions ( see Appendix B for a more detailed discussion ) . Consider environments parameterized by environment parameters ρ ∈ P , such that the reward dynamics of an environment are given by Rρ , and the transition dynamics by Tρ . Let ρenv denote the true environment parameter , such that the actual rewards received by an agent are distributed as rt ∼ Rρenv ( xt , at ) , and the actual states encountered by the agent are distributed as xt+1 ∼ Tρenv ( xt , at ) . Since ρenv is unknown , an agent takes actions on the basis of their beliefs about it ; these beliefs are described by probability distributions Pβ ∈ ∆ ( P ) parameterized by belief parameters β ∈ B . For each time t , let βt capture the agent ’ s belief at the beginning of that step .
Each time step , the agent first samples an environment parameter ρt ∼ Pβt according to their belief βt , then takes an action according to $\pi^*_{R_{\rho_t}, T_{\rho_t}}$ . Note that this ensures that each action is taken with probability proportional to the probability with which the agent believes it to be optimal at time step t. Essentially , the agent ’ s policy at time step t is induced by their belief βt : $\pi_{\beta_t}(x)[a] := \mathbb{E}_{\rho \sim P_{\beta_t}} [ \pi^*_{R_\rho, T_\rho}(x)[a] ]$ . After receiving a reward rt , the agent updates their current belief parameter according to a ( possibly-stochastic ) belief-update function $f \in \Delta(B)^{B \times X \times A \times \mathbb{R}}$ , such that βt+1 ∼ f ( βt , xt , at , rt ) . Together with the initial belief parameter β1 , the agent ’ s t-th step belief is a ( possibly-stochastic ) function of the history ht = { x1 : t−1 , a1 : t−1 , r1 : t−1 } defined recursively via f . 2.1 CONTEXTUAL BANDITS SETTING . In this work , we consider state transitions that occur independently of past states and actions . Due to this property , decisions can be made greedily without consideration of what future states may be , which yields a contextual bandits problem [ 26–28 ] . Note that this captures the organ allocation problem well : Distributions of newly arriving organs are largely independent of prior allocation decisions ; in fact , transplantation and allocation policies are often modeled in bandit-like settings [ 31 , 32 ] . Formally : Definition 1 ( Contextual Bandits ) Consider a decision-making problem D : = ( X , A , R , T ) , where R , T are unknown to the agent . Let T ( x , a ) = T ′ for some T ′ ∈ ∆ ( X ) , for all x ∈ X , a ∈ A , such that policies are greedy : $\mathrm{supp}(\pi^*_t(x_t)) = \operatorname{argmax}_{a_t \in A} \bar{R}_{\rho_t}(x_t, a_t)$ , ( 1 ) where $\bar{R}_\rho(x, a) := \mathbb{E}_{r \sim R_\rho(x, a)}[r]$ indicates the mean reward function , and ties are broken arbitrarily .
Given a space of environment parameterizations P , the contextual bandits problem is to design a space of belief parameterizations B , and to determine the optimal belief-update function $f^* := \operatorname{argmax}_{f \in \Delta(B)^{B \times X \times A \times \mathbb{R}}} \sum_t \mathbb{E}_{f, \pi_{\beta_t}, R, T} [ r_t ]$ . ( 2 ) Now , suppose that an agent follows a bandit-type policy for T time steps ; this would generate an observational dataset of contexts and actions D : = { x1 : T , a1 : T } ( adhering to the contextual bandits literature , we refer to states as “ contexts ” hereafter ) .1 But since rewards rt and beliefs βt are quantities internal to the decision-maker , we assume that they are not part of the observational dataset : The former represents the agent ’ s preferences over outcomes after observing contexts and taking actions , which is not explicitly observable ; likewise , the latter represents the agent ’ s beliefs about what kinds of outcomes their actions result in , which is also not explicitly observable . Thus we ask the novel question : From D , can we infer the true environment parameter ρenv , as well as the belief trajectory β1 : T ? Figure 2 : Graphical Model for ICB , over nodes β1 , ρ1 , x1 , a1 , r1 , β2 , ρ2 , x2 , a2 , r2 , β3 , ··· , ρ∗ , with belief-update function f . We aim to infer the ( dotted ) red quantities given the ( solid ) blue ones . Definition 2 ( Inverse Contextual Bandits ) Consider again a contextual bandit problem D : = ( X , A , R , T ) , and recall that dynamics R , T are unknown to the agent . Given an observational dataset D , and a family of environment parameterizations P and belief parameterizations B , the inverse contextual bandits problem ( “ ICB ” ) is to determine the true environment parameter ρenv and the belief parameters β1 : T ( see Figure 2 ) . 1We also use “ x∈X ” to denote contexts ( standard in contextual bandits ) instead of “ s∈S ” ( common in RL ) . It is important to observe that the novelty here is general—that is , in posing the “ inverse ” question of how knowledge evolves over time .
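The forward generative process that ICB later inverts (sample a parameter from the current belief, act greedily with respect to it as in Eq. 1, then update the belief) can be sketched as a Thompson-sampling-style agent. This is a minimal illustration under assumed linear-Gaussian rewards and per-arm Gaussian beliefs, not the paper's own implementation:

```python
import numpy as np

# Sketch of a generalized bandit strategy: each step the agent samples
# rho_t ~ P_{beta_t}, acts greedily w.r.t. the sampled mean reward, and
# updates its belief via f (here, a Bayesian linear-regression update).
rng = np.random.default_rng(0)
d, n_arms, T = 3, 2, 200
rho_env = rng.normal(size=(n_arms, d))      # true (unknown) reward parameter

# Belief beta_t per arm: Gaussian posterior N(A^{-1} b, A^{-1}), unit noise.
A = np.stack([np.eye(d) for _ in range(n_arms)])
b = np.zeros((n_arms, d))

for t in range(T):
    x = rng.normal(size=d)                  # context x_t
    sampled = np.array([                    # rho_t ~ P_{beta_t}, per arm
        rng.multivariate_normal(np.linalg.solve(A[a], b[a]),
                                np.linalg.inv(A[a]))
        for a in range(n_arms)
    ])
    a_t = int(np.argmax(sampled @ x))       # greedy in the sampled rewards
    r_t = rho_env[a_t] @ x + rng.normal()   # internal reward (latent in D)
    A[a_t] += np.outer(x, x)                # belief update: beta_{t+1} =
    b[a_t] += r_t * x                       #   f(beta_t, x_t, a_t, r_t)

# The observational dataset D contains only the (x_t, a_t) pairs; ICB's
# task is to recover rho_env and the belief trajectory from those alone.
```

Note that the rewards `r_t` and belief states `(A, b)` never enter the dataset, which is exactly what makes the inverse problem harder than state-belief inference.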
Focusing our attention on contextual bandits simply represents a specific choice of problem setting : In this first work on tackling the inverse problem in modeling nonstationary agents , it presents a more tractable problem for analysis , and is moreover especially suited for our motivating application in organ allocations . Note that in the bandit setting , the fact that transition dynamics are unknown to the agent is ultimately inconsequential due to greedy policies ; we refer to ρ simply as “ reward parameter ” hereafter . We leave generalization to arbitrary transition dynamics T ∈ ∆ ( X ) X×A to future work . Also note that ICB itself is not a bandit problem ( see Appendix C ) . Remark 1 ( Environment vs. State Beliefs ) When speaking of inferring “ beliefs ” here in our setting , we are referring to environment beliefs—that is , an agent ’ s knowledge of the environment ’ s rewards . It is important to distinguish this from state beliefs in partially-observed environments [ 33 , 34 ] —that is , an agent ’ s estimate of which latent state they are in . State beliefs are computed by an agent using their ( known ) environment parameters , whereas environment beliefs concern the environment ’ s ( unknown ) parameters themselves . State beliefs can be easily inferred [ 35 ] : all factors that contribute to state-belief updates ( i.e . observations and actions ) are observable ; on the other hand , environment beliefs are more technically challenging due to latent factors ( i.e . internal rewards r1 : T ) that are never observable . For the same reason , tackling ICB with conventional IO-HMM inference methods is not possible either , even though the agent can be viewed as an IO-HMM ( see Appendix C ) . Remark 2 ( Subjective vs . Objective Reward ) When speaking of learning what “ rewards ” an agent is optimizing , the novelty here is that our objective refers to the full belief trajectory β1 : T of the agent .
It is important to distinguish this from simply learning the ground-truth parameter ρenv as in typical IRL . The ground-truth parameter is an objective quantity capturing the “ prescriptive ” notion of what the agent ought to be optimizing , whereas the belief trajectory is a subjective quantity capturing the “ descriptive ” notion of what the agent appears to be optimizing . Existing work in learning from nonstationary agents has focused solely on ρenv , which is all that is required for apprenticeship [ 36 , 37 ] . In contrast , for the goal of understanding behavior ( and how it evolves ) , Definition 2 brings β1 : T into focus . | This paper studies an inverse (linear) contextual bandits (ICB) problem, where, given a $T$-round realization of a bandit policy’s contexts and actions, the goal is to design an algorithm to estimate the underlying environment parameter, along with the “belief trajectory” of the bandit policy. A particular emphasis is placed on the belief trajectory being “interpretable” and capturing changes in the policy’s “knowledge of the world” over time. The paper’s main contributions are (i) formalizing the inverse contextual bandits problem, (ii) designing two algorithms for this problem based on two different ways of modelling beliefs of the bandit policy, and (iii) providing empirical illustrations of how their algorithm can be used to investigate and explain changes in medical decision-making over time | SP:4304fa522634ec30bcf3862d4ae463baeed51711 |
Inverse Contextual Bandits: Learning How Behavior Evolves over Time | 1 INTRODUCTION . Modeling decision-making policies is a central concern in computational and behavioral science , with key applications in healthcare [ 1 ] , economics [ 2 ] , and cognition [ 3 ] . The business of policy learning is to determine an agent ’ s decision-making policy given observations of their behavior . Typically , the objective is either to replicate the behavior of some demonstrator ( cf . “ imitation learning ” ) [ 4 , 5 ] , or to match their performance on the basis of some reward function ( cf . “ apprenticeship learning ” ) [ 6 , 7 ] . However , equally important is the pursuit of “ descriptive modeling ” [ 8 , 9 ] —that is , of learning interpretable parameterizations of behavior for auditing , quantifying , and understanding decision-making policies . For instance , recent work has studied representing behaviors in terms of rules [ 10 ] , goals [ 11 ] , intentions [ 12 ] , preferences [ 13 ] , subjective dynamics [ 14 ] , as well as counterfactual outcomes [ 15 ] . Evolving Behaviors In this work , we ask a novel descriptive question : How has the observed behavior changed over time ? While conventional approaches to policy learning almost invariably assume that decision-making agents are stationary , this is rarely the case in practice : In many settings , behaviors evolve constantly as decision-makers learn more about their environment and adjust their policies accordingly . In fact , disseminating new knowledge from medical research into actual clinical practice is itself a major endeavor in healthcare [ 16–18 ] . 
This research question is new : While capturing “ variation in practice ” in observed data has been studied in the context of demonstrations containing mixed policies [ 19 , 20 ] , multiple tasks [ 21 , 22 ] , and subgroup-/patient-specific preferences [ 8 ] , little work has attempted to capture variation in practice over time as an agent ’ s knowledge of the world evolves . Example ( Organ Allocations ) As our core application , consider organ allocation practices for liver transplantations : In the medical community , our understanding of organ transplantations has changed numerous times over past decades [ 23–25 ] . Thus an immediate question lies in how actual organ allocation practices have changed over the years . Having a data-driven , quantitative , and— importantly—interpretable description of how practices have evolved is crucial : It would enable policy-makers to evaluate if the policies they introduced have had the intended impact on practice ; this would in turn play a substantial role in designing better policies going forward [ 8 ] ( see Figure 1 ) . To tackle questions of this form , we desire a policy learning method that satisfies three key desiderata : • It should provide an ( i . ) interpretable description of observed behavior ( see Appendix C ) . Suppose that policy-makers introduce a new policy to promote Features A and B as main factors of consideration in decision-making ( whereas clinicians rely on Feature C at the time ) , and the policy succeeds at establishing Feature A as such but fails to do so for Feature B . Using ICB , we can infer the time-varying importances of these features and observe quantitatively that importance of Feature B stayed the same despite the intentions of the new policy . Having made this observation , the policymakers can now update their policy accordingly . • It should be able to capture an agent ’ s ( ii . ) nonstationary knowledge of the world . • It should operate in an ( iii . 
) offline manner— since online experimentation is impossible in highstakes environments such as healthcare . Inverse Contextual Bandits To accomplish this , we first identify the organ allocation problem as a contextual bandits problem [ 26–28 ] : Given each arriving instance of patient and organ features ( i.e . the context ) , an agent makes an allocation decision ( i.e . the action ) , whence some measure of feedback is perceived ( i.e . internal reward ) . Crucially , the environment ( i.e . precisely , its reward dynamics ) is unknown to the agent and must be actively learned , so the agent maintains beliefs about their environment ( i.e . internal knowledge ) . Not only must an agent select actions that “ exploit ” the knowledge they have , but they must also select actions that “ explore ” the environment to update their knowledge . Thus the behavior of a learning agent is naturally modeled as a generalized bandit strategy—that is , how to take actions based on their knowledge , and how to update their knowledge based on the outcomes of their actions . Now , the forward contextual bandits problem asks the ( normative ) question : Given an unknown environment , what is an effective bandit strategy that minimizes some notion of regret ? By contrast , our focus here is instead on the opposite direction—that is , in formalizing and solving the problem of Inverse Contextual Bandits ( “ ICB ” ) : We ask the ( descriptive ) question : Given demonstrated behavior from a decision-making agent , how has the agent ’ s knowledge been evolving over time ? Precisely , we wish to learn interpretable representations of generalized bandit strategies from the observable context-action pairs generated by those strategies—regardless of whether those strategies are effective . Contributions Our contributions are three-fold . 
In the sequel , we first formalize the ICB problem , identifying it with the data-driven objective of inferring an agent ’ s internal reward function along with their internal trajectory of beliefs about the environment ( Section 2 ) . Second , we propose two concrete learning algorithms , imposing different specifications regarding the agent ’ s behavioral strategy : The first parameterizes the agent ’ s knowledge in terms of Bayesian updates , whereas the second makes the milder specification that the agent ’ s behavior evolves smoothly over time ( Section 3 ) . Third , through both simulated and real-world data for liver transplantations , we illustrate how ICB can be applied as an investigative device for recovering and explaining the evolution of organ allocation practices over the years , as well as benchmarking and validating the accuracy of our algorithms ( Section 5 ) . 2 INVERSE CONTEXTUAL BANDITS . Preliminaries Consider a Markov decision process of the form D : = ( X , A , R , T ) , where S indicates the state space , A the action space , R ∈ ∆ ( R ) X×A the reward dynamics , and T ∈ ∆ ( X ) X×A the transition dynamics . At each time t ∈ Z+ , the decision-making agent is presented with some state xt ∈ X and decides to take an action at ∈ A , whence an immediate reward rt ∼ R ( xt , at ) is received , and a subsequent state st+1 ∼ T ( xt , at ) is presented . Let the decision-making policy employed by the agent be denoted π ∈ ∆ ( A ) X , such that actions at any time are sampled according to at ∼ π ( xt ) . In the forward direction , given dynamicsR , T , the reinforcement learning problem ( “ RL ” ) deals with determining the optimal policy that maximizes some notion of expected cumulative rewards [ 29 ] : π∗R , T : = argmaxπ∈∆ ( A ) X Eπ , R , T ∑ t rt . 
In the opposite direction , given some observed behavior policy πb and transition dynamics T , the inverse reinforcement learning problem ( “ IRL ” ) deals with determining the reward dynamics R with respect to which πb appears optimal . For instance , the classic max-margin approach seeks [ 30 ] : R∗ : = argminR ( maxπ Eπ , R , T ∑ t rt − Eπb , R , T ∑ t rt ) . Having Dynamics Access to Having No Dynamics Access In conventional RL ( and IRL ) , the decision-maker has unrestricted access to environment dynamics—either explicitly ( i.e . R , T are simply known and used in computing the optimal policy ) , or implicitly ( i.e . the agent may interact freely with the environment during training ) . In contrast , in our setting the agent has no such luxuries—not only are the dynamics not known to the agent , but neither do they enjoy a distinct sandboxed training phase prior to live deployment . Without such access , the agent must consider both the information they may gain when taking actions ( cf . “ exploration ” ) and also the expected rewards due to those actions ( cf . “ exploitation ” ) . Note that this property results in much more difficult learning problems—in both the forward and inverse directions ( see Appendix B for a more detailed discussion ) . Consider environments parameterized by environment parameters ρ ∈ P , such that the reward dynamics of an environment are given byRρ , and the transition dynamics by Tρ . Let ρenv denote the true environment parameter , such that the actual rewards received by an agent are distributed as rt ∼ Rρenv ( xt , at ) , and the actual states encountered by the agent are distributed as xt+1 ∼ Tρenv ( xt , at ) . Since ρenv is unknown , an agent takes actions on the basis of their beliefs about it ; these beliefs are described by probability distributions Pβ ∈ ∆ ( P ) parameterized by belief parameters β ∈ B . For each time t , let βt capture the agent ’ s belief at the beginning of that step . 
Each time step, the agent first samples an environment parameter ρt ∼ Pβt according to their belief βt, then takes an action according to π∗Rρt,Tρt. Note that this ensures that each action is taken with probability proportional to the probability with which the agent believes it to be optimal at time step t. Essentially, the agent's policy at time step t is induced by their belief βt: $\pi_{\beta_t}(x)[a] := \mathbb{E}_{\rho \sim P_{\beta_t}}\big[\pi^*_{R_\rho,T_\rho}(x)[a]\big]$. After receiving a reward rt, the agent updates their current belief parameter according to a (possibly-stochastic) belief-update function f ∈ ∆(B)^{B×X×A×ℝ}, such that βt+1 ∼ f(βt, xt, at, rt). Together with the initial belief parameter β1, the agent's t-th step belief is a (possibly-stochastic) function of the history ht = {x1:t−1, a1:t−1, r1:t−1} defined recursively via f. 2.1 CONTEXTUAL BANDITS SETTING. In this work, we consider state transitions that occur independently of past states and actions. Due to this property, decisions can be made greedily without consideration of what future states may be, which yields a contextual bandits problem [26–28]. Note that this captures the organ allocation problem well: distributions of newly arriving organs are largely independent of prior allocation decisions; in fact, transplantation and allocation policies are often modeled in bandit-like settings [31, 32]. Formally: Definition 1 (Contextual Bandits). Consider a decision-making problem D := (X, A, R, T), where R, T are unknown to the agent. Let T(x, a) = T′ for some T′ ∈ ∆(X), for all x ∈ X, a ∈ A, such that policies are greedy: $\mathrm{supp}(\pi^*_t(x_t)) = \arg\max_{a_t \in A} \bar{R}_{\rho_t}(x_t, a_t)$, (1) where $\bar{R}_\rho(x,a) := \mathbb{E}_{r \sim R_\rho(x,a)}[r]$ indicates the mean reward function, and ties are broken arbitrarily.
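The belief-induced policy and belief-update rule described above amount to Thompson-style sampling: draw ρt from the current belief, act greedily under the sampled parameter, then update the belief with the observed reward. The following is a minimal sketch, not the paper's algorithm, assuming a context-free Bernoulli bandit with Beta belief parameters; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 3                                    # number of actions (contexts omitted for simplicity)
rho_env = np.array([0.2, 0.5, 0.8])      # true (unknown) environment parameter: mean rewards
beta = np.ones((K, 2))                   # belief parameters: Beta(alpha, b) counts per action

history = []                             # observable part of the dataset D: actions only
for t in range(2000):
    rho_t = rng.beta(beta[:, 0], beta[:, 1])   # sample rho_t ~ P_{beta_t}
    a = int(np.argmax(rho_t))                  # act greedily under the sampled parameter
    r = float(rng.random() < rho_env[a])       # internal reward r_t ~ R_{rho_env}(a)
    beta[a] += (r, 1.0 - r)                    # belief update beta_{t+1} = f(beta_t, a_t, r_t)
    history.append(a)

# After enough steps the agent mostly picks the truly best action (index 2).
print(np.mean(np.array(history[-500:]) == 2))
```

Note that, as in the ICB setup, the rewards `r` and beliefs `beta` here are internal to the agent; an outside observer sees only the action history.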
Given a space of environment parameterizations P, the contextual bandits problem is to design a space of belief parameterizations B, and to determine the optimal belief-update function $f^* := \arg\max_{f \in \Delta(B)^{B \times X \times A \times \mathbb{R}}} \sum_t \mathbb{E}_{f, \pi_{\beta_t}, R, T}[r_t]$. (2) Now, suppose that an agent follows a bandit-type policy for T time steps; this would generate an observational dataset of contexts and actions D := {x1:T, a1:T} (adhering to the contextual bandits literature, we refer to states as "contexts" hereafter).¹ But since rewards rt and beliefs βt are quantities internal to the decision-maker, we assume that they are not part of the observational dataset: the former represents the agent's preferences over outcomes after observing contexts and taking actions, which is not explicitly observable; likewise, the latter represents the agent's beliefs about what kinds of outcomes their actions result in, which is also not explicitly observable. Thus we ask the novel question: from D, can we infer the true environment parameter ρenv, as well as the belief trajectory β1:T? Figure 2. Graphical Model for ICB. We aim to infer the (dotted) red quantities given the (solid) blue ones. Definition 2 (Inverse Contextual Bandits). Consider again a contextual bandit problem D := (X, A, R, T), and recall that dynamics R, T are unknown to the agent. Given an observational dataset D, and a family of environment parameterizations P and belief parameterizations B, the inverse contextual bandits problem ("ICB") is to determine the true environment parameter ρenv and the belief parameters β1:T (see Figure 2). ¹We also use "x ∈ X" to denote contexts (standard in contextual bandits) instead of "s ∈ S" (common in RL). It is important to observe that the novelty here is general—that is, in posing the "inverse" question of how knowledge evolves over time.
Focusing our attention on contextual bandits simply represents a specific choice of problem setting: in this first work on tackling the inverse problem in modeling nonstationary agents, it presents a more tractable problem for analysis, and is moreover especially suited for our motivating application in organ allocations. Note that in the bandit setting, the fact that transition dynamics are unknown to the agent is ultimately inconsequential due to greedy policies; we refer to ρ simply as the "reward parameter" hereafter. We leave generalization to arbitrary transition dynamics T ∈ ∆(X)^{X×A} to future work. Also note that ICB itself is not a bandit problem (see Appendix C). Remark 1 (Environment vs. State Beliefs). When speaking of inferring "beliefs" here in our setting, we are referring to environment beliefs—that is, an agent's knowledge of the environment's rewards. It is important to distinguish this from state beliefs in partially-observed environments [33, 34]—that is, an agent's estimate of which latent state they are in. State beliefs are computed by an agent using their (known) environment parameters, whereas environment beliefs concern the environment's (unknown) parameters themselves. State beliefs can be easily inferred [35]: all factors that contribute to state-belief updates (i.e., observations and actions) are observable; on the other hand, environment beliefs are more technically challenging due to latent factors (i.e., internal rewards r1:T) that are never observable. For the same reason, tackling ICB with conventional IO-HMM inference methods is not possible either, even though the agent can be viewed as an IO-HMM (see Appendix C). Remark 2 (Subjective vs. Objective Reward). When speaking of learning what "rewards" an agent is optimizing, the novelty here is that our objective refers to the full belief trajectory β1:T of the agent.
It is important to distinguish this from simply learning the ground-truth parameter ρenv as in typical IRL. The ground-truth parameter is an objective quantity capturing the "prescriptive" notion of what the agent ought to be optimizing, whereas the belief trajectory is a subjective quantity capturing the "descriptive" notion of what the agent appears to be optimizing. Existing work in learning from nonstationary agents has focused solely on ρenv, which is all that is required for apprenticeship [36, 37]. In contrast, for the goal of understanding behavior (and how it evolves), Definition 2 brings β1:T into focus. | This paper addresses the problem of Inverse Contextual Bandits. It raises an important question: given demonstrated behavior from an agent, how has the agent's knowledge been evolving over time? Formally: given a contextual bandit problem $(X, A, R, T)$, where $R, T$ are unknown to the agent, together with an observational dataset $D$ and a family of reward parameterizations $P$ and belief parameterizations $B$, the inverse contextual bandits problem is to determine the true environment parameter $\rho^*$ and the belief parameters $\beta_{1:T}$. The authors propose two algorithms to learn these parameters: the first parameterizes the agent's knowledge in terms of Bayesian updates; the second uses a Gaussian process. They demonstrate their algorithms through simulated and real-world data for liver transplantations. | SP:4304fa522634ec30bcf3862d4ae463baeed51711 |
Auto-Encoding Inverse Reinforcement Learning | 1 INTRODUCTION. Reinforcement learning (RL) provides a powerful framework for automating decision making. However, RL still requires carefully engineered reward functions for good practical performance. To make RL more applicable in the real world, it is important to learn a reward function from expert demonstrations. Imitation learning offers the instruments to learn policies directly from the data, without an explicit reward function. Imitation learning enables agents to learn to solve tasks from expert demonstrations, such as helicopter control (Abbeel et al., 2006; 2007; Ng et al., 2004; Coates et al., 2008; Abbeel et al., 2008a; 2010), robot navigation (Ratliff et al., 2006; Abbeel et al., 2008b; Ziebart et al., 2008; 2010), and building controls (Barrett & Linder, 2015). The goal of imitation learning is to extract the expert policies from the expert demonstrations without access to the reward signal from the environment. The algorithms in this field can be divided into two broad categories: behavioral cloning (BC) and inverse reinforcement learning (IRL). BC formulates the learning task as a supervised learning problem, which learns a mapping from states to actions using the expert trajectories. However, BC methods suffer from the problem of compounding errors, i.e., covariate shift: they only learn the actions of the expert but do not reason about what the expert attempts to achieve. In contrast, IRL directly infers the underlying reward function from the data, and then teaches the RL agent to learn a policy that achieves the highest accumulated reward. Empirical results show that IRL methods are more efficient than BC methods in multi-step decision making tasks. However, how to make IRL stable and efficient to use is still subject to research.
Current IRL methods, such as adversarial imitation learning approaches, model the reward function as a discriminator that learns a mapping from a state-action pair to a scalar value, i.e., the reward (Ho & Ermon, 2016; Fu et al., 2017; Ghasemipour et al., 2020). However, this formulation may easily become over-confident, focusing on minor differences between the state-action features of the expert and generated samples. To the best of our knowledge, this is the first instance of formulating the reward function as an auto-encoder, which has the ability to learn full-scale differences between expert and alternative policies. The derived formulation ensures the reward signal is informative, which is efficient for optimization. Additionally, the encoding-decoding process provides a denoising capability and makes the agent robust to a noisy expert, in more realistic settings. In the experiments, we show that the proposed method AEIRL achieves the best overall performance on both clean and noisy expert demonstrations. Our contributions are three-fold: • We propose the Auto-Encoding Inverse Reinforcement Learning (AEIRL) architecture, which models the reward function as a surrogate function using the reconstruction error of an auto-encoder. The reconstruction error provides more informative learning signals, compared to the binary logistic loss. • To show the contributing factors of our method, we conduct ablation studies based on different distribution divergences and alternative auto-encoders. Experiments show that they achieve comparable results, which indicates the encoding-decoding process is the major contributing factor for our method. • The experimental results on the MuJoCo tasks show that our method outperforms state-of-the-art imitation learning methods on both clean and noisy expert demonstrations. Empirical analysis shows that our learned reward function can be more informative and robust.
Furthermore, the learning processes of our methods are also more stable in general. 2 RELATED WORK. 2.1 INVERSE REINFORCEMENT LEARNING. Adversarial imitation learning methods such as GAIL, AIRL, DAC, f-GAIL, EAIRL, and FAIRL (Ho & Ermon, 2016; Fu et al., 2017; Kostrikov et al., 2019; Zhang et al., 2020; Qureshi et al., 2019; Ghasemipour et al., 2020) formulate the learned reward function as a discriminator that learns to differentiate expert transitions from non-expert ones. Among these methods, GAIL (Ho & Ermon, 2016) considers the Jensen-Shannon divergence, while AIRL (Fu et al., 2017) considers the Kullback-Leibler (KL) divergence. DAC (Kostrikov et al., 2019) extends GAIL to the off-policy setting and significantly improves the sample-efficiency of adversarial imitation learning. Furthermore, the f-divergence is utilized in f-GAIL (Zhang et al., 2020), which is considered more sample-efficient. Recently, FAIRL utilizes the forward KL divergence (Ghasemipour et al., 2020) and achieves better performance than AIRL (Fu et al., 2017) comprehensively, but it is still not robust enough. However, these methods rely heavily on a carefully tuned discriminator, which might easily overfit to minor differences between the expert and the generated samples. In comparison, our auto-encoder based reward function helps to learn the full-scale differences between the expert and generated samples, which provides more informative reward signals. The robustness of adversarial imitation learning is also questionable with imperfections in observations (Stadie et al., 2017; Berseth & Pal, 2020), actions, transition models (Gangwani & Peng, 2020; Christiano et al., 2016), expert demonstrations (Brown et al., 2019; Shiarlis et al., 2016; Jing et al., 2020), and their combinations (Kim et al., 2020). Previous robust IRL methods require the demonstrations to be annotated with confidence scores (Wu et al.
, 2019; Brown et al., 2019; Grollman & Billard, 2012), when the expert data is noisy. However, these annotations are rather expensive. Compared to this, our auto-encoder based reward function helps to denoise the expert data through the encoding-decoding process. Our method AEIRL is relatively succinct and robust to noisy expert demonstrations, and does not require any annotated data. Another category of IRL uses an offline similarity function to estimate the rewards (Boularias et al., 2011; Klein et al., 2013; Piot et al., 2016). The idea of these methods is still to induce the expert policy by minimizing the distance between the state-action distributions of the expert and sampled trajectories. To the best of our knowledge, the most powerful method in this category is Primal Wasserstein Imitation Learning (PWIL) (Dadashi et al., 2021), which utilizes the upper bound of its primal form as the optimization objective. The advantage of these methods is that they are relatively more robust compared to adversarial imitation learning methods when the expert data is noisy. However, the performance of these methods heavily depends on the similarity measurement, and therefore it varies greatly across different tasks. Compared to PWIL, our method achieves superior performance. 2.2 AUTO-ENCODING BASED GANS. Auto-encoders have been successfully applied to improve training stability and mode capturing in GANs. Auto-encoding based GANs can be classified into three categories: (1) utilizing an auto-encoder as the discriminator, such as energy-based GANs (Zhao et al., 2016) and boundary-equilibrium GANs (Berthelot et al.
, 2017); (2) using a denoising auto-encoder to derive an auxiliary loss for the generator (Warde-Farley & Bengio, 2017); (3) combining variational auto-encoders and GANs to generate both vivid and diverse samples by balancing the objective of reconstructing the training data and confusing the discriminator (Larsen et al., 2016). Our method AEIRL takes inspiration from EBGAN (Zhao et al., 2016): it utilizes an auto-encoder as the reward function and derives an efficient reconstruction-error based surrogate reward signal and its corresponding objective functions. 3 BACKGROUND. A Markov decision process (MDP) is a tuple (S, A, T, γ, P, r). In this tuple, S is a state space; A is an action space; T is a probability matrix for state transitions; γ ∈ (0, 1] is a discount factor; P is an initial-state distribution; and r : S × A → ℝ is a reward function. Additionally, we also define a stochastic policy π, which is a mapping from states to probability distributions over actions. IRL infers the reward function using the expert demonstrations, which are assumed to be observations of optimal behaviors (Ng & Russell, 2000). In general, IRL is formulated as a bi-level optimization process, iteratively training the reward function and optimizing the policy. Assuming that we are given an expert policy πE, IRL (Ziebart et al., 2008; 2010) fits a reward function from a family of functions R with the optimization problem: $\min_{r \in \mathcal{R}} \big( \max_{\pi \in \Pi} \mathbb{E}_\pi[r(s,a)] \big) - \mathbb{E}_{\pi_E}[r(s,a)]$ (1) Moreover, the expert policy πE will only be provided as a set of expert demonstrations, so the expected reward of πE is estimated from these trajectories. IRL looks for a reward function r(s, a) that assigns high values to the expert policy and low values to other policies.
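The bi-level objective in (1) can be illustrated with a toy linear-reward instance: the inner step solves the RL problem greedily, and the outer step moves the reward parameters to widen the gap in favor of the expert. This is only a hedged sketch of the general recipe (a bandit-style, feature-matching variant with a hypothetical three-action task), not the max-entropy procedure of Ziebart et al.:

```python
import numpy as np

# Toy instance of objective (1): linear reward r_w(a) = w[a] over 3 actions,
# expert demonstrations always choose action 2, inner "RL" step is greedy.
mu_E = np.array([0.0, 0.0, 1.0])      # expert feature expectations (one-hot actions)
w = np.zeros(3)                        # reward parameters
eta = 0.1                              # outer-loop step size
for _ in range(50):
    pi = int(np.argmax(w))             # inner step: max_pi E_pi[r_w]
    mu_pi = np.eye(3)[pi]              # learner feature expectations under pi
    w += eta * (mu_E - mu_pi)          # gradient step on E_piE[r_w] - E_pi[r_w]

print(int(np.argmax(w)))               # → 2: the expert action is optimal under r_w
```

The fixed point is a reward under which the expert's behavior is greedy-optimal, which is exactly what objective (1) asks for in this degenerate one-step setting.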
Therefore, it allows the expert policy to be found with a certain reinforcement learning procedure: $\mathrm{RL}(r) = \arg\max_{\pi \in \Pi} \mathbb{E}_\pi[r(s,a)]$ (2) The RL process will induce the expert policy via maximizing the expected cumulative rewards. Meanwhile, an entropy term can optionally be added to the reinforcement learning objective to encourage exploration in policy search. Typically, IRL models the reward function as a discriminator, which can easily overfit to the expert data. Also, its training stability is questionable once confronted with even a little noise in the expert data. Therefore, we propose auto-encoding IRL, which utilizes an auto-encoder based reward function and achieves strong performance on both clean and noisy expert demonstrations. 4 AUTO-ENCODING INVERSE REINFORCEMENT LEARNING. 4.1 OVERVIEW. The reward function in GAIL is a discriminator, which attempts to assign high values to the regions near the expert demonstrations and low values to the other regions. However, this form of reward function can easily overfit to the expert data. Consider the CartPole balancing task: the state of the environment is a feature vector consisting of position, angle, angle rate, and cart velocity, while the action is moving left or right. Here, assume for example that all the expert states' velocity is 2. When the generated states' velocity is 1.9 and the other dimensions of the state-action pairs are the same as the expert's, the discriminator of GAIL would still give a low reward to these generated state-action pairs, even though the policy may actually perform very well on the goal of mimicking the expert's behaviors. In other words, the reward function in GAIL on this example can easily overfit to the minor differences between the expert and the sampled data, while missing the underlying goal of the expert. In our paper, we propose an auto-encoder based reward function for inverse reinforcement learning.
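The auto-encoder based reward idea above can be sketched numerically. In this hedged sketch, a closed-form linear auto-encoder (PCA) stands in for AEIRL's trained network, and the expert data layout (points on a 1-D manifold with a fixed "velocity" feature, echoing the CartPole discussion) is entirely hypothetical; the reward is the negative reconstruction error of a state-action vector:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical expert state-action pairs: 4 features lying on a 1-D manifold,
# with the last feature ("velocity") fixed at 2 for every expert sample.
z = rng.normal(size=(500, 1))
direction = np.array([[1.0, 0.5, -0.3, 0.0]])
X = 2.0 + z @ direction                   # 500 expert samples, velocity == 2

# A linear auto-encoder fit in closed form (PCA): encoding projects onto the
# top principal component, decoding maps back; a stand-in for a trained network.
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:1].T                              # 4x1 encoder/decoder weights

def reward(x):
    h = (x - mu) @ P                      # encode
    x_hat = mu + h @ P.T                  # decode
    return -float(np.sum((x - x_hat) ** 2))   # negative reconstruction error

x_expert = X[0]
x_close = x_expert.copy()
x_close[3] = 1.9                          # velocity 1.9 instead of 2.0
x_far = x_expert + np.array([3.0, -2.0, 2.0, 1.0])
print(reward(x_close), reward(x_far))     # near-expert sample scores far higher
```

The near-expert sample keeps a tiny reconstruction error (on the order of the 0.1 velocity gap squared) and hence a near-expert reward, while the off-manifold sample is penalized heavily, which is the full-scale behavior the text contrasts with a discriminator's.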
It utilizes the reconstruction error of the state-action pairs to yield an informative reward signal. The reconstruction-error based reward signal retains the full information of the state-action pairs, rather than focusing on minor differences. Such a reward signal does not lead to overconfidence in distinguishing the expert and the generated samples. Recall the CartPole balancing example: the mean squared error between states' velocities 1.9 and 2 is very small, so the agent can still receive a good reward in this situation. Thus, the reconstruction-error based reward signal focuses on the full-scale differences between the expert and generated state-action pairs rather than the minor parts. This yields a much more informative reward signal. Figure 5 (see Section 5.4) shows a more informative reward signal recovered with our method. Furthermore, the training process of the policy becomes smoother in general, which is shown in Figure 11 (see Appendix A.6). Expert demonstrations usually contain noise because these human trajectories are sampled with sensors and other devices in the real world. Therefore, adversarial imitation learning methods such as GAIL may easily be misled into learning the noisy expert behaviors, which are not the real intentions of the expert. The reward function in this situation can overfit to the noisy features. Recall the CartPole balancing example: we assume that the expert states' velocity is 2, but it tends to be 2 + δ due to the noisy sampling process. Even when the learned policy is good (i.e., its states' velocity is the intended 2), the discriminator in GAIL still takes it as a bad policy, and therefore it loses its efficacy when learning from noisy demonstrations. Figure 2 depicts the denoising process of an auto-encoder.
When the auto-encoder is trained to minimize the averaged squared errors, the reconstruction difference vector points approximately towards the nearest point on the manifold, since the auto-encoder estimates the center of mass of the clean points (Goodfellow et al., 2016). Therefore, the reconstruction error of the auto-encoder can eliminate some of the effects of learning from noisy expert demonstrations and make the reward signal more robust. Recall the CartPole example: the reconstruction error of the auto-encoder is δ², which does not play an important role in the rewards for the agent. | This paper proposes to use an auto-encoder for inverse reinforcement learning. The main idea is to use the auto-encoder as the reward function, taking the reconstruction error to provide reward signals for the agent. The authors claim that such an approach provides a more informative signal than existing works, especially adversarial imitation learning approaches. Their experiments demonstrate that this method has better performance than other baselines and is more robust to demonstration noise. | SP:189665e20164d1d5e70e4185afca18b2d0f8bdb9 |
Auto-Encoding Inverse Reinforcement Learning | 1 INTRODUCTION. Reinforcement learning (RL) provides a powerful framework for automating decision making. However, RL still requires carefully engineered reward functions for good practical performance. To make RL more applicable in the real world, it is important to learn a reward function from expert demonstrations. Imitation learning offers the instruments to learn policies directly from the data, without an explicit reward function. Imitation learning enables agents to learn to solve tasks from expert demonstrations, such as helicopter control (Abbeel et al., 2006; 2007; Ng et al., 2004; Coates et al., 2008; Abbeel et al., 2008a; 2010), robot navigation (Ratliff et al., 2006; Abbeel et al., 2008b; Ziebart et al., 2008; 2010), and building controls (Barrett & Linder, 2015). The goal of imitation learning is to extract the expert policies from the expert demonstrations without access to the reward signal from the environment. The algorithms in this field can be divided into two broad categories: behavioral cloning (BC) and inverse reinforcement learning (IRL). BC formulates the learning task as a supervised learning problem, which learns a mapping from states to actions using the expert trajectories. However, BC methods suffer from the problem of compounding errors, i.e., covariate shift: they only learn the actions of the expert but do not reason about what the expert attempts to achieve. In contrast, IRL directly infers the underlying reward function from the data, and then teaches the RL agent to learn a policy that achieves the highest accumulated reward. Empirical results show that IRL methods are more efficient than BC methods in multi-step decision making tasks. However, how to make IRL stable and efficient to use is still subject to research.
Current IRL methods, such as adversarial imitation learning approaches, model the reward function as a discriminator that learns a mapping from a state-action pair to a scalar value, i.e., the reward (Ho & Ermon, 2016; Fu et al., 2017; Ghasemipour et al., 2020). However, this formulation may easily become over-confident, focusing on minor differences between the state-action features of the expert and generated samples. To the best of our knowledge, this is the first instance of formulating the reward function as an auto-encoder, which has the ability to learn full-scale differences between expert and alternative policies. The derived formulation ensures the reward signal is informative, which is efficient for optimization. Additionally, the encoding-decoding process provides a denoising capability and makes the agent robust to a noisy expert, in more realistic settings. In the experiments, we show that the proposed method AEIRL achieves the best overall performance on both clean and noisy expert demonstrations. Our contributions are three-fold: • We propose the Auto-Encoding Inverse Reinforcement Learning (AEIRL) architecture, which models the reward function as a surrogate function using the reconstruction error of an auto-encoder. The reconstruction error provides more informative learning signals, compared to the binary logistic loss. • To show the contributing factors of our method, we conduct ablation studies based on different distribution divergences and alternative auto-encoders. Experiments show that they achieve comparable results, which indicates the encoding-decoding process is the major contributing factor for our method. • The experimental results on the MuJoCo tasks show that our method outperforms state-of-the-art imitation learning methods on both clean and noisy expert demonstrations. Empirical analysis shows that our learned reward function can be more informative and robust.
Furthermore, the learning processes of our methods are also more stable in general. 2 RELATED WORK. 2.1 INVERSE REINFORCEMENT LEARNING. Adversarial imitation learning methods such as GAIL, AIRL, DAC, f-GAIL, EAIRL, and FAIRL (Ho & Ermon, 2016; Fu et al., 2017; Kostrikov et al., 2019; Zhang et al., 2020; Qureshi et al., 2019; Ghasemipour et al., 2020) formulate the learned reward function as a discriminator that learns to differentiate expert transitions from non-expert ones. Among these methods, GAIL (Ho & Ermon, 2016) considers the Jensen-Shannon divergence, while AIRL (Fu et al., 2017) considers the Kullback-Leibler (KL) divergence. DAC (Kostrikov et al., 2019) extends GAIL to the off-policy setting and significantly improves the sample-efficiency of adversarial imitation learning. Furthermore, the f-divergence is utilized in f-GAIL (Zhang et al., 2020), which is considered more sample-efficient. Recently, FAIRL utilizes the forward KL divergence (Ghasemipour et al., 2020) and achieves better performance than AIRL (Fu et al., 2017) comprehensively, but it is still not robust enough. However, these methods rely heavily on a carefully tuned discriminator, which might easily overfit to minor differences between the expert and the generated samples. In comparison, our auto-encoder based reward function helps to learn the full-scale differences between the expert and generated samples, which provides more informative reward signals. The robustness of adversarial imitation learning is also questionable with imperfections in observations (Stadie et al., 2017; Berseth & Pal, 2020), actions, transition models (Gangwani & Peng, 2020; Christiano et al., 2016), expert demonstrations (Brown et al., 2019; Shiarlis et al., 2016; Jing et al., 2020), and their combinations (Kim et al., 2020). Previous robust IRL methods require the demonstrations to be annotated with confidence scores (Wu et al.
, 2019; Brown et al., 2019; Grollman & Billard, 2012), when the expert data is noisy. However, these annotations are rather expensive. Compared to this, our auto-encoder based reward function helps to denoise the expert data through the encoding-decoding process. Our method AEIRL is relatively succinct and robust to noisy expert demonstrations, and does not require any annotated data. Another category of IRL uses an offline similarity function to estimate the rewards (Boularias et al., 2011; Klein et al., 2013; Piot et al., 2016). The idea of these methods is still to induce the expert policy by minimizing the distance between the state-action distributions of the expert and sampled trajectories. To the best of our knowledge, the most powerful method in this category is Primal Wasserstein Imitation Learning (PWIL) (Dadashi et al., 2021), which utilizes the upper bound of its primal form as the optimization objective. The advantage of these methods is that they are relatively more robust compared to adversarial imitation learning methods when the expert data is noisy. However, the performance of these methods heavily depends on the similarity measurement, and therefore it varies greatly across different tasks. Compared to PWIL, our method achieves superior performance. 2.2 AUTO-ENCODING BASED GANS. Auto-encoders have been successfully applied to improve training stability and mode capturing in GANs. Auto-encoding based GANs can be classified into three categories: (1) utilizing an auto-encoder as the discriminator, such as energy-based GANs (Zhao et al., 2016) and boundary-equilibrium GANs (Berthelot et al.
, 2017); (2) using a denoising auto-encoder to derive an auxiliary loss for the generator (Warde-Farley & Bengio, 2017); (3) combining variational auto-encoders and GANs to generate both vivid and diverse samples by balancing the objective of reconstructing the training data and confusing the discriminator (Larsen et al., 2016). Our method AEIRL takes inspiration from EBGAN (Zhao et al., 2016): it utilizes an auto-encoder as the reward function and derives an efficient reconstruction-error based surrogate reward signal and its corresponding objective functions. 3 BACKGROUND. A Markov decision process (MDP) is a tuple (S, A, T, γ, P, r). In this tuple, S is a state space; A is an action space; T is a probability matrix for state transitions; γ ∈ (0, 1] is a discount factor; P is an initial-state distribution; and r : S × A → ℝ is a reward function. Additionally, we also define a stochastic policy π, which is a mapping from states to probability distributions over actions. IRL infers the reward function using the expert demonstrations, which are assumed to be observations of optimal behaviors (Ng & Russell, 2000). In general, IRL is formulated as a bi-level optimization process, iteratively training the reward function and optimizing the policy. Assuming that we are given an expert policy πE, IRL (Ziebart et al., 2008; 2010) fits a reward function from a family of functions R with the optimization problem: $\min_{r \in \mathcal{R}} \big( \max_{\pi \in \Pi} \mathbb{E}_\pi[r(s,a)] \big) - \mathbb{E}_{\pi_E}[r(s,a)]$ (1) Moreover, the expert policy πE will only be provided as a set of expert demonstrations, so the expected reward of πE is estimated from these trajectories. IRL looks for a reward function r(s, a) that assigns high values to the expert policy and low values to other policies.
Therefore, it allows the expert policy to be found with a certain reinforcement learning procedure: $\mathrm{RL}(r) = \arg\max_{\pi \in \Pi} \mathbb{E}_\pi[r(s,a)]$ (2) The RL process will induce the expert policy via maximizing the expected cumulative rewards. Meanwhile, an entropy term can optionally be added to the reinforcement learning objective to encourage exploration in policy search. Typically, IRL models the reward function as a discriminator, which can easily overfit to the expert data. Also, its training stability is questionable once confronted with even a little noise in the expert data. Therefore, we propose auto-encoding IRL, which utilizes an auto-encoder based reward function and achieves strong performance on both clean and noisy expert demonstrations. 4 AUTO-ENCODING INVERSE REINFORCEMENT LEARNING. 4.1 OVERVIEW. The reward function in GAIL is a discriminator, which attempts to assign high values to the regions near the expert demonstrations and low values to the other regions. However, this form of reward function can easily overfit to the expert data. Consider the CartPole balancing task: the state of the environment is a feature vector consisting of position, angle, angle rate, and cart velocity, while the action is moving left or right. Here, assume for example that all the expert states' velocity is 2. When the generated states' velocity is 1.9 and the other dimensions of the state-action pairs are the same as the expert's, the discriminator of GAIL would still give a low reward to these generated state-action pairs, even though the policy may actually perform very well on the goal of mimicking the expert's behaviors. In other words, the reward function in GAIL on this example can easily overfit to the minor differences between the expert and the sampled data, while missing the underlying goal of the expert. In our paper, we propose an auto-encoder based reward function for inverse reinforcement learning.
It uses the reconstruction error of the state-action pairs to yield an informative reward signal. The reconstruction-error-based reward signal retains much more of the information in the state-action pairs rather than focusing on minor differences, so it does not lead to overconfidence in distinguishing the expert from the generated samples. Recall the CartPole balancing example: the mean squared error between a velocity of 1.9 and a velocity of 2 is very small, so the auto-encoder still feeds a good reward to the agent in this situation. Thus, the reconstruction-error-based reward signal focuses on the full-scale differences between expert and generated state-action pairs rather than on minor parts, which yields a much more informative reward signal. Figure 5 (see Section 5.4) shows a more informative reward signal recovered with our method. Furthermore, the training process of the policy becomes smoother in general, as shown in Figure 11 (see Appendix A.6). Expert demonstrations usually contain noise because human trajectories are sampled with sensors and other devices in the real world. Therefore, adversarial imitation learning methods such as GAIL can easily be misled into learning the noisy expert behaviors, which are not the real intentions of the expert. The reward function in this situation can overfit to the noisy features. Recall the CartPole balancing example: suppose the velocity in the expert states should be 2 but is recorded as 2 + δ due to the noisy sampling process. Even when the learned policy is good, the discriminator in GAIL still treats it as a bad policy because the sampled states' velocity does not match the noisy expert's; GAIL therefore loses its efficacy when learning from noisy demonstrations. Figure 2 depicts the denoising process of an auto-encoder.
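A minimal sketch of the reconstruction-error based reward described above. The exact surrogate form AEIRL uses is not given in this excerpt, so we assume r(x) = exp(−MSE(AE(x), x)) for illustration, and `expert_like_ae` is a hypothetical stand-in for a trained auto-encoder.

```python
import math

def mse(x, y):
    """Mean squared error between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def reward(x, autoencoder):
    # Assumed surrogate form: small reconstruction error -> reward near 1.
    return math.exp(-mse(autoencoder(x), x))

# CartPole-style toy: an auto-encoder fit on expert states (velocity 2)
# reconstructs inputs back onto that manifold. This lambda is a stand-in
# for a trained model, not a real auto-encoder.
expert_like_ae = lambda x: x[:3] + [2.0]

generated = [0.1, 0.05, 0.0, 1.9]   # velocity 1.9, other dimensions match
print(reward(generated, expert_like_ae))  # close to 1
```

The small velocity gap (1.9 vs 2) produces a tiny reconstruction error, so the agent still receives a reward near 1, unlike a discriminator that sharply separates the two.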
When the auto-encoder is trained to minimize the averaged squared error, the reconstruction vector points approximately toward the nearest point on the manifold, since the auto-encoder estimates the center of mass of the clean points (Goodfellow et al., 2016). Therefore, the reconstruction error of the auto-encoder can mitigate the effect of noise when learning from noisy expert demonstrations and make the reward signal more robust. Recall the CartPole example: the reconstruction error of the auto-encoder is δ², which does not play an important role in the rewards for the agent. | This paper proposes the Auto-Encoding Inverse Reinforcement Learning (AEIRL) method for better imitation learning, especially in the presence of noisy expert demonstrations. The paper’s key insight is that auto-encoders can eliminate the effects of noise from expert demonstrations and provide a more stable reward signal based on the auto-encoder error. The experiments feature extensive analyses across standard benchmarks compared to a large number of baselines. | SP:189665e20164d1d5e70e4185afca18b2d0f8bdb9 |
Soteria: In search of efficient neural networks for private inference | 1 INTRODUCTION. Machine learning models are susceptible to several security and privacy attacks throughout their training and inference pipelines. Defending against each of these threats requires different types of security mechanisms. One important requirement is that the sensitive input data as well as the trained model parameters remain confidential at inference time. In this paper, we focus on private computation of inference over deep neural networks, which is the setting of machine learning as a service. Consider a server that provides a machine learning service (e.g., classification), and a client who needs to use the service for an inference on her data record. The server is not willing to share the proprietary machine learning model underpinning the service with any client. The clients are likewise unwilling to share their sensitive private data with the server. In addition, we assume the two parties do not trust, nor include, any third entity in the protocol. In this setting, our first objective is to design a secure protocol that protects the confidentiality of client data as well as the prediction results against the server who runs the computation. The second objective is to preserve the confidentiality of the model parameters with respect to the client. These can be achieved using secure multi-party computation. We consider an honest-but-curious threat model. A number of techniques provide data confidentiality during computation and thereby allow private computation. The techniques include computation on trusted processors such as Intel SGX Hunt et al. (2018a); Ohrimenko et al.
(2016), and computation on encrypted data using homomorphic encryption, garbled circuits, secret sharing, and hybrid cryptographic approaches that jointly optimize the efficiency of private inference on neural networks Yao (1986); Gentry (2009b); Yao (1982); Brakerski et al. (2011); Beaver (1992). To provide private inference with minimal performance overhead and accuracy loss, the dominant line of research involves adapting cryptographic functions to (an approximation of) a given fixed model Liu et al. (2017); Mohassel & Zhang (2017); Mishra et al. (2020); Rouhani et al. (2018); Riazi et al. (2019); Chandran et al. (2019); Juvekar et al. (2018); Riazi et al. (2018). However, the alternative approach of searching for or designing a network architecture for a given set of efficient and known cryptographic primitives has not received much attention Ghodsi et al. (2020). Note that secure multi-party computation protects the confidentiality of the inputs to the protocol and its intermediate computations, but cannot prevent inference attacks that exploit the output of the protocol to infer information about its inputs. We thus emphasize that protection against indirect inference attacks that aim at reconstructing model parameters Tramèr et al. (2016) or training data Shokri et al. (2017) by exploiting model predictions is not our goal and is out of scope for this work. Our Contributions. We approach the problem of private inference from a novel perspective: instead of modifying cryptographic schemes to support neural network computations, we advocate modifying the training algorithms to suit efficient cryptographic primitives. In a nutshell, we advocate co-designing and co-optimizing models and cryptographic operations for efficient private inference.
Research has shown that training algorithms for deep learning are inherently flexible with respect to their neural network architecture. This means that different network configurations can achieve similar levels of prediction accuracy. We exploit this fact about deep learning algorithms and investigate the problem of optimizing deep learning algorithms to ensure efficient private computation. To this end, we present SOTERIA, an approach for constructing deep neural networks optimized for performance, accuracy, and confidentiality. Although SOTERIA could leverage any of the available cryptographic primitives or their combination, we select garbled circuits as its main building block to address the confidentiality concern. Garbled circuits (GC) are known to be efficient and allow generation of constant-depth circuits even for non-linear functions. We show that neural network algorithms can be optimized to efficiently execute garbled circuits while achieving high accuracy guarantees. We observe that the efficiency of evaluating an inference circuit depends on two key factors: the model parameters and the network structure. With this observation, we design a regularized architecture search algorithm to construct neural networks. SOTERIA selects optimal parameter sparsity and network structure with the objective of guaranteeing a balance between performance and model accuracy. We summarize our contributions below: • We propose a neural-architecture-search-based approach called SOTERIA for designing models that guarantee a balance between prediction utility and computation cost while maintaining confidentiality of the model parameters and the input during inference. • We build SOTERIA models using garbled circuits for both ternary neural networks and the model architecture identified using the network architecture search approach.
• We evaluate SOTERIA on the benchmark datasets used by all the prior work, and observe that our approach provides the flexibility to tune utility and efficiency parameters based on the requirements of the underlying scenario. Our complete anonymous source code is available on github¹ (¹Source code of SOTERIA: https://github.com/SoteriaAnonymous/Soteria). The two fields of research on neural architecture search Liu et al. (2019); Zoph & Le (2016); Elsken et al. (2018) and secure multi-party computation Yao (1986); Gentry (2009b); Yao (1982); Brakerski et al. (2011); Beaver (1992) are rapidly growing. SOTERIA enables us to take advantage of the advancements in these fields to systematically design more efficient algorithms for complex machine learning tasks. 2 SOTERIA. We design SOTERIA to automatically learn the model architecture and its connections so as to optimize the cost of private inference in addition to optimizing accuracy. This approach is different from simply fine-tuning or compressing a model, as we aim to include the cost of private computation as part of the objective of architecture learning and parameter learning of the model. SOTERIA is built on top of two well-established classes of machine learning algorithms: neural architecture search algorithms and ternary neural network algorithms. Neural architecture search for efficient private inference. Architecture search algorithms for neural networks are designed to replace the manual design of complex deep models. The objective is to learn a model structure that gives high accuracy when trained on the training set Elsken et al. (2018); Zoph & Le (2016); Pham et al. (2018); Liu et al. (2019). They automatically construct the model architecture by stacking a number of cells. Each cell is a directed acyclic graph, where each node is a neural operation (e.g., convolution with different dimensions, maxpool, identity).
During the search, we use stochastic gradient descent to continuously update the scores associated with different candidate operations for each connection in the internal graph of a cell, so as to maximize the accuracy of the model on a validation dataset. In SOTERIA, we regularize the computation of the connection scores over candidate operations with a factor λ. We penalize each operation proportionally to its computation and communication overhead when garbled. Larger values of λ result in models that prefer efficiency of private inference over accuracy. As we balance the trade-off between accuracy and performance, SOTERIA can construct models which by design satisfy the requirements of our system. Ternary (Sparse Binary) Neural Network. To build a system that enables efficient private inference, we aim to reduce the number of parameters contained in the model. One approach is to first train the model and then compress it afterwards. However, this method might not achieve the highest accuracy that could be reached if we constructed the model for this purpose from the beginning. To better align with our approach of constructing model architectures, we aim to learn model structures which are accurate, sparse, and efficient when implemented as garbled circuits. More specifically, in SOTERIA, we train models with ternary parameters (−1, 0, +1) Li et al. (2016) and binary activation functions. This allows us to leverage the benefits of BNNs, as discussed in Section B, with the added benefit of a potentially smaller circuit (due to the model's sparsity). We incorporate Ternary Neural Networks, or TNNs, into our regularized architecture search algorithm to find cells containing only ternary convolution and max-pooling layers that operate on binary inputs and ternary parameters. 3 EMPIRICAL EVALUATION. In this section, we evaluate the efficiency of our method in two main ways.
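The λ-regularized operation scoring can be sketched as follows. The operation names and relative garbled costs below are made-up illustrations, not values from the paper; the key idea is that each candidate operation's score is penalized in proportion to its garbled computation and communication overhead.

```python
import math

def op_weights(scores, garbled_cost, lam):
    """Softmax over cost-regularized scores; larger lam favors cheaper ops."""
    reg = [s - lam * c for s, c in zip(scores, garbled_cost)]
    z = [math.exp(v) for v in reg]
    total = sum(z)
    return [v / total for v in z]

scores = [1.0, 1.2, 0.8]   # learned scores: conv3x3, conv5x5, maxpool
cost   = [4.0, 9.0, 1.0]   # hypothetical relative non-XOR gate counts

acc_only = op_weights(scores, cost, lam=0.0)   # accuracy only
thrifty  = op_weights(scores, cost, lam=0.5)   # trades accuracy for efficiency
print(acc_only.index(max(acc_only)), thrifty.index(max(thrifty)))  # 1 2
```

With λ = 0 the search prefers the most accurate operation (conv5x5); with λ = 0.5 the cheap maxpool wins, illustrating how λ steers the search toward circuits that are inexpensive to garble.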
First, we show how using ternary neural networks on fixed model architectures, as used in the prior work Riazi et al. (2019); Mohassel & Zhang (2017); Rouhani et al. (2018); Juvekar et al. (2018), can reduce the overhead of secure inference on sparse neural networks. Second, we present the performance of SOTERIA architectures, in which model complexity is optimized along with the model accuracy. Implementation. We first build a representation of the model and parameters in SystemVerilog and then convert the model into a digital circuit supported by TinyGarble Songhori et al. (2015), an implementation of the Garbled Circuit (GC) protocol. We synthesize and optimize our circuit (using Synopsys Design Compiler) to use circuit elements supported by TinyGarble. In the first step, we design a collection of parameterized components (notably dot product and maxpool) to use as building blocks in our architecture search algorithm. Each component is flexible and can efficiently accept arbitrary-size input and output, and is composed to form the complete model. In a general setting, hardware-level code is typically straightforward to generate. However, to enable TNNs with SOTERIA, we have to dynamically define the sparsity of the modules depending on the result of model training and architecture search. Along with the parameter data, we define the sparsity information, which is used during generate phases in SystemVerilog to build the sparse network in hardware (taking advantage of the 0-valued parameters). Altogether, this allows us to build and evaluate SOTERIA models. 3.1 EXPERIMENTAL SETUP. We evaluate SOTERIA on the datasets used in prior work to enable a comprehensive comparison between the methods (see Table 3 for all the configurations).
We evaluate our work on the MNIST and CIFAR10 image classification datasets, as they are the only benchmark datasets extensively used in the literature to evaluate the performance of cryptographically secure neural network schemes. We run our experiments on an AWS c5.2xlarge instance, running Ubuntu 18.04 LTS on an Intel Xeon 8124M at 3.0 GHz. We use PyTorch 1.3 (Pyt), a deep learning framework, to implement our architecture search algorithm and train the ternary models. We use Synopsys Design Compiler (Syn), version L-2016.03-SP5-2, to synthesize SystemVerilog code into the gate-level netlist. Our synthesis uses the TinyGarble gate library infrastructure.² We compute the number of non-XOR gates in the generated boolean circuit netlist as a measure of its complexity. We measure the exact performance of SOTERIA as its runtime during the offline and online phases of the protocol, and its communication cost. We present the experimental setup of prior work and SOTERIA, including CPU specifications and links to available software code, in Table 7. We also present the details of all neural network architectures evaluated in this paper in Table 6. | The paper develops a method for private neural-network inference via Yao's garbled circuits (GC) protocol. In order to keep the computation complexity manageable - the main practical hurdle for GC - the paper proposes utilization of a neural network architecture search, coupled with restricting weights to a ternary alphabet and binary activations. The neural architecture search is conducted via a variation of the "DARTS" approach of Liu et al. (ICLR 2019), that accounts for model complexity in the cost function. The paper performs extensive experiments against comparable protocols in their evaluation on MNIST and CIFAR10, and shows improvements along some factors. | SP:8542f8a0aa1e9305f5e0424b325f8c7cba563f24 |
Soteria: In search of efficient neural networks for private inference | 1 INTRODUCTION. Machine learning models are susceptible to several security and privacy attacks throughout their training and inference pipelines. Defending against each of these threats requires different types of security mechanisms. One important requirement is that the sensitive input data as well as the trained model parameters remain confidential at inference time. In this paper, we focus on private computation of inference over deep neural networks, which is the setting of machine learning as a service. Consider a server that provides a machine learning service (e.g., classification), and a client who needs to use the service for an inference on her data record. The server is not willing to share the proprietary machine learning model underpinning the service with any client. The clients are likewise unwilling to share their sensitive private data with the server. In addition, we assume the two parties do not trust, nor include, any third entity in the protocol. In this setting, our first objective is to design a secure protocol that protects the confidentiality of client data as well as the prediction results against the server who runs the computation. The second objective is to preserve the confidentiality of the model parameters with respect to the client. These can be achieved using secure multi-party computation. We consider an honest-but-curious threat model. A number of techniques provide data confidentiality during computation and thereby allow private computation. The techniques include computation on trusted processors such as Intel SGX Hunt et al. (2018a); Ohrimenko et al.
(2016), and computation on encrypted data using homomorphic encryption, garbled circuits, secret sharing, and hybrid cryptographic approaches that jointly optimize the efficiency of private inference on neural networks Yao (1986); Gentry (2009b); Yao (1982); Brakerski et al. (2011); Beaver (1992). To provide private inference with minimal performance overhead and accuracy loss, the dominant line of research involves adapting cryptographic functions to (an approximation of) a given fixed model Liu et al. (2017); Mohassel & Zhang (2017); Mishra et al. (2020); Rouhani et al. (2018); Riazi et al. (2019); Chandran et al. (2019); Juvekar et al. (2018); Riazi et al. (2018). However, the alternative approach of searching for or designing a network architecture for a given set of efficient and known cryptographic primitives has not received much attention Ghodsi et al. (2020). Note that secure multi-party computation protects the confidentiality of the inputs to the protocol and its intermediate computations, but cannot prevent inference attacks that exploit the output of the protocol to infer information about its inputs. We thus emphasize that protection against indirect inference attacks that aim at reconstructing model parameters Tramèr et al. (2016) or training data Shokri et al. (2017) by exploiting model predictions is not our goal and is out of scope for this work. Our Contributions. We approach the problem of private inference from a novel perspective: instead of modifying cryptographic schemes to support neural network computations, we advocate modifying the training algorithms to suit efficient cryptographic primitives. In a nutshell, we advocate co-designing and co-optimizing models and cryptographic operations for efficient private inference.
Research has shown that training algorithms for deep learning are inherently flexible with respect to their neural network architecture. This means that different network configurations can achieve similar levels of prediction accuracy. We exploit this fact about deep learning algorithms and investigate the problem of optimizing deep learning algorithms to ensure efficient private computation. To this end, we present SOTERIA, an approach for constructing deep neural networks optimized for performance, accuracy, and confidentiality. Although SOTERIA could leverage any of the available cryptographic primitives or their combination, we select garbled circuits as its main building block to address the confidentiality concern. Garbled circuits (GC) are known to be efficient and allow generation of constant-depth circuits even for non-linear functions. We show that neural network algorithms can be optimized to efficiently execute garbled circuits while achieving high accuracy guarantees. We observe that the efficiency of evaluating an inference circuit depends on two key factors: the model parameters and the network structure. With this observation, we design a regularized architecture search algorithm to construct neural networks. SOTERIA selects optimal parameter sparsity and network structure with the objective of guaranteeing a balance between performance and model accuracy. We summarize our contributions below: • We propose a neural-architecture-search-based approach called SOTERIA for designing models that guarantee a balance between prediction utility and computation cost while maintaining confidentiality of the model parameters and the input during inference. • We build SOTERIA models using garbled circuits for both ternary neural networks and the model architecture identified using the network architecture search approach.
• We evaluate SOTERIA on the benchmark datasets used by all the prior work, and observe that our approach provides the flexibility to tune utility and efficiency parameters based on the requirements of the underlying scenario. Our complete anonymous source code is available on github¹ (¹Source code of SOTERIA: https://github.com/SoteriaAnonymous/Soteria). The two fields of research on neural architecture search Liu et al. (2019); Zoph & Le (2016); Elsken et al. (2018) and secure multi-party computation Yao (1986); Gentry (2009b); Yao (1982); Brakerski et al. (2011); Beaver (1992) are rapidly growing. SOTERIA enables us to take advantage of the advancements in these fields to systematically design more efficient algorithms for complex machine learning tasks. 2 SOTERIA. We design SOTERIA to automatically learn the model architecture and its connections so as to optimize the cost of private inference in addition to optimizing accuracy. This approach is different from simply fine-tuning or compressing a model, as we aim to include the cost of private computation as part of the objective of architecture learning and parameter learning of the model. SOTERIA is built on top of two well-established classes of machine learning algorithms: neural architecture search algorithms and ternary neural network algorithms. Neural architecture search for efficient private inference. Architecture search algorithms for neural networks are designed to replace the manual design of complex deep models. The objective is to learn a model structure that gives high accuracy when trained on the training set Elsken et al. (2018); Zoph & Le (2016); Pham et al. (2018); Liu et al. (2019). They automatically construct the model architecture by stacking a number of cells. Each cell is a directed acyclic graph, where each node is a neural operation (e.g., convolution with different dimensions, maxpool, identity).
During the search, we use stochastic gradient descent to continuously update the scores associated with different candidate operations for each connection in the internal graph of a cell, so as to maximize the accuracy of the model on a validation dataset. In SOTERIA, we regularize the computation of the connection scores over candidate operations with a factor λ. We penalize each operation proportionally to its computation and communication overhead when garbled. Larger values of λ result in models that prefer efficiency of private inference over accuracy. As we balance the trade-off between accuracy and performance, SOTERIA can construct models which by design satisfy the requirements of our system. Ternary (Sparse Binary) Neural Network. To build a system that enables efficient private inference, we aim to reduce the number of parameters contained in the model. One approach is to first train the model and then compress it afterwards. However, this method might not achieve the highest accuracy that could be reached if we constructed the model for this purpose from the beginning. To better align with our approach of constructing model architectures, we aim to learn model structures which are accurate, sparse, and efficient when implemented as garbled circuits. More specifically, in SOTERIA, we train models with ternary parameters (−1, 0, +1) Li et al. (2016) and binary activation functions. This allows us to leverage the benefits of BNNs, as discussed in Section B, with the added benefit of a potentially smaller circuit (due to the model's sparsity). We incorporate Ternary Neural Networks, or TNNs, into our regularized architecture search algorithm to find cells containing only ternary convolution and max-pooling layers that operate on binary inputs and ternary parameters. 3 EMPIRICAL EVALUATION. In this section, we evaluate the efficiency of our method in two main ways.
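A minimal sketch of quantizing weights to the ternary set {−1, 0, +1}. The threshold rule (0.7 × mean |w|) follows Li et al. (2016); SOTERIA's exact training-time quantization procedure is not detailed in this excerpt.

```python
def ternarize(weights):
    """Map real-valued weights to {-1, 0, +1} using the TWN-style threshold."""
    delta = 0.7 * sum(abs(w) for w in weights) / len(weights)
    return [0 if abs(w) <= delta else (1 if w > 0 else -1) for w in weights]

w = [0.9, -0.05, 0.3, -0.8, 0.02, -0.4]
print(ternarize(w))   # [1, 0, 1, -1, 0, -1]
```

The zeroed weights are exactly the sparsity that the SystemVerilog generate phases exploit: a 0-valued parameter contributes no gates to the garbled circuit.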
First, we show how using ternary neural networks on fixed model architectures, as used in the prior work Riazi et al. (2019); Mohassel & Zhang (2017); Rouhani et al. (2018); Juvekar et al. (2018), can reduce the overhead of secure inference on sparse neural networks. Second, we present the performance of SOTERIA architectures, in which model complexity is optimized along with the model accuracy. Implementation. We first build a representation of the model and parameters in SystemVerilog and then convert the model into a digital circuit supported by TinyGarble Songhori et al. (2015), an implementation of the Garbled Circuit (GC) protocol. We synthesize and optimize our circuit (using Synopsys Design Compiler) to use circuit elements supported by TinyGarble. In the first step, we design a collection of parameterized components (notably dot product and maxpool) to use as building blocks in our architecture search algorithm. Each component is flexible and can efficiently accept arbitrary-size input and output, and is composed to form the complete model. In a general setting, hardware-level code is typically straightforward to generate. However, to enable TNNs with SOTERIA, we have to dynamically define the sparsity of the modules depending on the result of model training and architecture search. Along with the parameter data, we define the sparsity information, which is used during generate phases in SystemVerilog to build the sparse network in hardware (taking advantage of the 0-valued parameters). Altogether, this allows us to build and evaluate SOTERIA models. 3.1 EXPERIMENTAL SETUP. We evaluate SOTERIA on the datasets used in prior work to enable a comprehensive comparison between the methods (see Table 3 for all the configurations).
We evaluate our work on the MNIST and CIFAR10 image classification datasets, as they are the only benchmark datasets extensively used in the literature to evaluate the performance of cryptographically secure neural network schemes. We run our experiments on an AWS c5.2xlarge instance, running Ubuntu 18.04 LTS on an Intel Xeon 8124M at 3.0 GHz. We use PyTorch 1.3 (Pyt), a deep learning framework, to implement our architecture search algorithm and train the ternary models. We use Synopsys Design Compiler (Syn), version L-2016.03-SP5-2, to synthesize SystemVerilog code into the gate-level netlist. Our synthesis uses the TinyGarble gate library infrastructure.² We compute the number of non-XOR gates in the generated boolean circuit netlist as a measure of its complexity. We measure the exact performance of SOTERIA as its runtime during the offline and online phases of the protocol, and its communication cost. We present the experimental setup of prior work and SOTERIA, including CPU specifications and links to available software code, in Table 7. We also present the details of all neural network architectures evaluated in this paper in Table 6. | The paper aims at developing an efficient neural network architecture with reduced computational cost under a cryptographic primitive where data on the client side and the model on the server side are kept confidential. The motivation is that existing works have focused on developing cryptographic techniques for a fixed network architecture and have not considered the neural network optimization perspective to enhance the efficiency of existing cryptographic computations. The paper then leverages the flexibility of the network structure while ensuring comparable accuracy and tries to find an architecture that provides a significant reduction in runtime while incurring negligible impact on prediction accuracy.
Garbled circuits are chosen as the cryptographic primitive due to their compatibility with a wide range of computations, including non-linear functions. The paper then investigates how to reduce the private inference cost by incorporating two protocols: the first is neural architecture search for efficient private inference, where a neural operation is penalized based on its computation and communication overhead, and the second is training models with ternary parameters, which reduces the number of model parameters by introducing sparsity. Sparsity leads to a reduction in computation and communication overhead. However, to maintain a certain level of accuracy, the network needs to be scaled up, which incurs an increase in the runtime. Accordingly, a trade-off between efficiency and accuracy has been demonstrated with experiments on the CIFAR and MNIST datasets for varying regularization parameters and scaling factors. | SP:8542f8a0aa1e9305f5e0424b325f8c7cba563f24 |
GenTAL: Generative Denoising Skip-gram Transformer for Unsupervised Binary Code Similarity Detection | 1 INTRODUCTION. Reverse engineering is the process of analyzing a given binary program without its source code. It routinely requires experienced analysts and demands a huge amount of manual effort. This process is essential in many critical security problems, such as malware analysis, vulnerability discovery, and Advanced Persistent Threat (APT) tracking. Binary similarity detection is an important way to reduce the amount of manual effort, by detecting known parts and pieces in the target under investigation. Binary-level similarity detection is more difficult than source-level similarity detection, because most of the semantic-rich literals and structures, such as constants, function names, and variable names, are altered or no longer available in the form of assembly language. The data structures are also lost due to the compilation process, since the debugging information is typically stripped in commercial off-the-shelf programs. Many existing approaches rely on manual feature engineering to model the semantics of assembly code. For example, a fragment of assembly code can be modeled as a numeric vector based on (1) the ratio of algorithmic operations, (2) the ratio of transferal operations, and (3) the ratio of function calls [1, 2, 3, 4]. Alternatively, assembly code can be modeled as word-based n-grams [5]. These approaches cannot capture the rich semantics carried in the assembly code. Some other approaches rely on symbolic constraint solving to measure the logical relationship between each pair of code fragments [6, 7]. However, these methods are computationally expensive and do not scale well. In comparison, recent deep learning approaches have been shown to be more effective and robust at detecting similar binary code.
Typically , a neural network model is proposed and coupled with a contrastive loss function [ 8 , 9 , 10 , 11 ] . The network is trained with limited pairs of assembly code with 0-1 labels . Asm2Vec [ 12 ] extends the idea of Word2Vec [ 13 ] and PVDM [ 14 ] , and follows an unsupervised paradigm to learn the function representation . Although deep learning has been proven to be effective , there are still many practical barriers that prevent the aforementioned approaches from wide adoption . First of all , most approaches except Asm2Vec use a supervised paradigm and learn the model parameters by directly decreasing the distance between similar assembly code pairs in the training set . While supervised models potentially provide better performance on their trained tasks , they usually suffer when a new task is introduced . For example , [ 10 ] learns the tasks of cross-optimization-level detection and cross-architecture detection on ARM and X86-64 . However , it does not guarantee performance on cross-compiler and cross-OS binaries , and thus it lacks robustness . On the other hand , Asm2Vec , which uses an unsupervised approach , is mainly based on PVDM . This limits its stability during inference , since it needs multiple rounds to accumulate the gradient . Moreover , Word2Vec-based models are less context-dependent than state-of-the-art models such as BERT [ 15 ] . They carry less semantics and discriminative power for the downstream tasks . To illustrate this , Fig . 1 shows 4 pieces of compiled binary code in assembly form , all generated from the same source code . The choice of compilers and optimization levels mainly results in performance differences , while obfuscated code can be generated for various reasons , both benign and malicious . The semantics of the programs are similar or the same , but the code itself can look drastically different from one another .
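The supervised setup described above (siamese encoders, 0-1 labels, cosine similarity) can be sketched minimally as follows; the margin value and function names are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(emb_a, emb_b, label, margin=0.5):
    """label=1 for similar assembly pairs, 0 for dissimilar ones.
    Similar pairs are pushed toward cosine similarity 1;
    dissimilar pairs are pushed below the margin."""
    sim = cosine_sim(emb_a, emb_b)
    if label == 1:
        return 1.0 - sim
    return max(0.0, sim - margin)
```

Training on such 0-1 labeled pairs is exactly the dependence on labeled data that the unsupervised approaches in this paper are trying to avoid.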
A supervised model may be able to learn to detect similar binaries across compilers , but changing the optimization level or adding obfuscation can create unseen patterns , leading to detection failures . To learn a compact representation of assembly code through an unsupervised language model , one can use BERT and other Transformer-based models , since they have shown their effectiveness in modeling natural language semantics . However , assembly code , despite missing many semantic-rich literals present in the original source code , complies with a more rigid syntax . The original Transformer architecture , with its flattened sequence of position-embedded subwords , can not address the position invariance of assembly code instructions ( see Figure 1 ) : the same instructions may be placed in a different position , but contribute to similar program semantics . Additionally , the original masked language model in BERT distributes the memory of reconstructing the masked tokens into hidden layers at different time steps . This works against our goal , which is to have a single compact vector representation for all code . With the above observations , we propose GenTAL , a generative denoising skip-gram Transformer for assembly language representation learning . It follows an unsupervised learning paradigm for binary code similarity detection . We model assembly code instructions through the Transformer encoder in a way that preserves their syntax . Inspired by denoising autoencoders , we propose to combine the originally time-distributed memory of the masked language model into a single dense vector , and leverage a skip-gram-like structure for masked instruction recovery . This allows the model to embed the semantics of the assembly code into a very compact representation for more effective similarity detection .
Our contributions are as follows : • We propose a new Transformer-based unsupervised language model for assembly code instructions , the human-understandable form of binary code . The model follows the syntax of assembly code and is able to address the instruction position invariance issue . • We propose to combine a skip-gram-style reconstruction loss with the masked language model to condense the originally time-distributed memory into a single compact embedding vector . This design simulates a denoising autoencoder and provides a unified representation of semantics . • We conduct experiments on five different scenarios for code similarity detection and compare our method against traditional TF-IDF-based and state-of-the-art machine-learning-based methods . We show that GenTAL is more robust and able to outperform the baselines in all applications . 2 RELATED WORK . Binary Code Similarity . Assembly code can be regarded as a natural language in certain aspects . For this reason , NLP techniques are often used as encoders . Other ML-based binary code similarity detection works use different approaches for representation learning . Supervised learning approaches , such as Gemini [ 8 ] , Diff [ 9 ] , [ 10 ] , and BinDeep [ 16 ] , all use siamese networks to reduce the loss and use cosine distance to compute similarity . Gemini manually extracts block features and feeds them into a graph neural network . Diff feeds raw byte input into a CNN for learning function embeddings , which lacks the modeling of block semantics . Yu et al . [ 10 ] extend the BERT model for code semantics learning by introducing the same graph prediction task ( SGP ) and graph classification task ( GC ) . They also train a graph neural network for assembly code representation learning and a CNN on the adjacency matrix for an additional order-aware embedding . BinDeep uses Instruction2Vec [ 17 ] with an LSTM for instruction and block embeddings to enable sequence-aware modeling .
Asm2Vec [ 12 ] follows an unsupervised paradigm and uses PVDM for block embedding . Other traditional graph matching methods include BinDiff [ 18 ] and Binslayer [ 19 ] . These use static graph patterns . As a result , the performance can be severely hindered by any changes in the graphs , which often happen with different compiler settings and obfuscation . Unsupervised Language Models . BERT [ 15 ] is the state-of-the-art language model for NLP pretraining based on the Transformer [ 20 ] architecture . The Transformer can learn the contextual and sequential information within a sentence , while also maintaining multiple levels of semantics . It trains in parallel and is thus faster compared to RNN-based models . There are several variations of the original BERT model . ELECTRA [ 21 ] builds on top of BERT and adds a discriminator that predicts which tokens were originally masked . Albert [ 22 ] achieves similar performance to BERT , while using fewer parameters through factorized embedding parameterization and cross-layer weight sharing . RoBERTa [ 23 ] further up-trains BERT with more parameters and discards the next sentence prediction ( NSP ) task from BERT . Word2Vec is also an unsupervised learning technique for language models , which uses Skip-gram or Continuous-Bag-Of-Words ( CBOW ) to learn word embeddings based on a fixed-length windowed context . Doc2Vec extends Word2Vec and adds another ID token for document/sentence representation with the Distributed Memory ( PVDM ) and Distributed Bag-Of-Words ( DBOW ) variants . Tokenization is also an important task in language models , since the performance can vary significantly depending on the quality of the tokenizers . There are different levels of tokenization , such as character level and subword level . Byte Pair Encoding ( BPE ) [ 24 , 25 ] generates a subword vocabulary by learning the frequency of characters in a large corpus .
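The core BPE step described above (repeatedly merging the most frequent adjacent symbol pair) can be sketched as follows. This is a simplified illustration without the word-frequency weighting and special markers used by real BPE implementations.

```python
from collections import Counter

def most_frequent_pair(tokenized_words):
    """Count adjacent symbol pairs across the corpus and return the
    pair BPE would merge next."""
    pairs = Counter()
    for symbols in tokenized_words:
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += 1
    return pairs.most_common(1)[0][0]

def merge_pair(symbols, pair):
    """Replace every occurrence of the pair with one merged symbol."""
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
            out.append(symbols[i] + symbols[i + 1])
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out

# Toy corpus of character sequences (e.g. pieces of hex constants).
corpus = [list("ffff"), list("f1"), list("0ff")]
pair = most_frequent_pair(corpus)          # ('f', 'f')
merged = [merge_pair(w, pair) for w in corpus]
```

Iterating this merge step builds the subword vocabulary from character frequencies, which is why rare long constants still decompose into seen subwords.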
Unigram [ 26 ] learns the language model by optimizing the word occurrence given a sequence and then builds the vocabulary by sorting the loss of subwords . These methods are used to combat the out-of-vocabulary ( OOV ) issues , which many large-scale language models can struggle with . 3 GENERATIVE DENOISING SKIP-GRAM TRANSFORMER . In this section , we describe the details of GenTAL with respect to three major components : preprocessing , including instruction masking and encoding ; the code fragment encoding using the Transformer ; and the reconstruction loss through a skip-gram-style approach . The overall framework of GenTAL is shown in Fig . 2 . Given a sequence of assembly instructions , which is generated by disassembling the binary data into assembly code , we first preprocess the assembly code to extract the instructions as subword sequences and apply masking . Then each subword is mapped into a subword embedding and coupled with an instruction-level positional embedding . After being merged into instruction-level embeddings , we feed them into a Transformer encoder and obtain the CLS-step hidden layer output , which is finally position-encoded again to recover the masked instructions . We formally define GenTAL ’ s goal as learning an encoding function G that maps a sequence of instructions into a b-dimensional space , G : C → R^b , where its semantics will be preserved , by optimizing : ∑_j ∑_i P ( t_i | p_j , G ( C ) ) ( 1 ) Here C is the sequence of assembly code , p_j is the position of the j-th masked instruction , and t_i is the i-th subword of that j-th instruction . Simply put , we reconstruct the full instruction rather than individual disconnected subwords or the original tokens . 3.1 INSTRUCTION PREPROCESSING , MASKING , AND RAGGED TENSOR . An assembly instruction contains an operation and operands . We first treat each instruction as a plain text sequence and clean up the assembly code by removing address-specific literals such as loc_DD20 .
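Objective (1) above can be sketched numerically as follows. The probability model here is a stand-in (a toy linear map plus softmax, not the paper's Transformer), and optimizing via the summed negative log-probability is an assumption about how the objective is handled in practice; the sketch only mirrors the shape of the double sum over masked instructions j and their subwords i.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size = 50
W = rng.standard_normal((8, vocab_size))  # toy projection, stands in for the decoder

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def subword_probs(code_embedding, position):
    """Stand-in for P(t_i | p_j, G(C)): a distribution over subwords
    conditioned on the code embedding G(C) and a toy positional signal."""
    pos_enc = np.sin(position * np.arange(vocab_size) / vocab_size)
    return softmax(code_embedding @ W + pos_enc)

def masked_instruction_nll(code_embedding, masked):
    """masked: list of (position, [subword ids]) per masked instruction.
    Sums -log P over j (instructions) and i (subwords), as in Eq. (1)."""
    nll = 0.0
    for pos, subwords in masked:
        probs = subword_probs(code_embedding, pos)
        for t in subwords:
            nll -= np.log(probs[t])
    return nll

g_c = rng.standard_normal(8)                       # toy G(C)
loss = masked_instruction_nll(g_c, [(2, [5, 7]), (4, [1])])
```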
These literals depend on the base address of the binary and can change when the code is generated in a slightly different environment . Next , we pre-tokenize the assembly code following the syntax of the assembly language , as this can help us alleviate out-of-vocabulary ( OOV ) issues . For example , instead of treating [ rbg+var_19 ] as a single token , we break it into [ , rbg , + , var , 19 ] . Afterwards , we train a unigram [ 26 , 27 ] model on the training corpus for subword tokenization . This can further mitigate the OOV issue , especially on long constants . For example , 0x00fffff1 will be broken down as 0x00 , ffff , and f1 . If any compiler or obfuscator manipulates this constant by shifting or padding , the subwords can still be matched . Given a sequence of assembly instructions in linear order , we follow the Masked Language Model used in BERT . However , instead of masking individual subwords or the subwords that constitute an original token , we mask a full instruction , since there exists a strong correlation among the subwords of a single instruction . For example , the operation PUSH will very likely be followed by a stack register , and XOR is often used with the same register as its two operands . Such correlations would make the reconstruction task too easy if only subwords were masked ; instead , we mask the complete instruction . Unlike many approaches that treat assembly instructions as a flat sequence of words , we keep the original structure and dimension of [ blocks , instructions , tokens ] as input to preserve the execution logic and carry better semantics . In the past , it was very difficult to model the data structure this way , due to the padding issue where some instructions have a significantly larger number of subwords . Our implementation uses the recent ragged tensor interface in TensorFlow , which allows variable sequence length over the subword dimension without padding .
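The preprocessing steps above (syntax-aware pre-tokenization and whole-instruction masking) might look roughly like this. The regex, the mask token, and the exact token boundaries are illustrative assumptions; a real pipeline would additionally apply the trained unigram subword model and use TensorFlow ragged tensors for the variable-length [ blocks , instructions , tokens ] structure.

```python
import re

def pretokenize(instruction):
    """Split an assembly instruction on syntax boundaries so that
    tokens like [rbg+var_19] do not blow up the vocabulary."""
    return re.findall(r"[A-Za-z]+|\d+|[^\sA-Za-z\d]", instruction)

def mask_instruction(instructions, j, mask="[MASK]"):
    """Mask every subword of the j-th instruction at once (not
    individual subwords), since subwords within one instruction
    are strongly correlated."""
    out = [list(ins) for ins in instructions]
    out[j] = [mask] * len(out[j])
    return out

tokens = pretokenize("mov rax , [rbg+var_19]")
seq = [pretokenize("push rbp"), pretokenize("xor eax , eax")]
masked = mask_instruction(seq, 1)   # the whole XOR instruction is masked
```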
Besides masking , we add a CLS token at the start of the assembly instructions , which acts as a condensed vector for a collective representation . CLS is left out for masking . | This paper proposes a new model architecture to encode assembly code. By using a new reconstruction loss with the new architecture, the paper achieves better fine-tuning results on downstream binary code similarity detection tasks. | SP:db45d6d64491b9bf610e493af2247d9f4d578d68
| This paper proposes an enhanced transformer for dealing with the generation of assembly code. This is achieved by feeding the model one instruction at a time (as BERT did), where each instruction is divided into sub-words in a position-preserving way. The authors show the efficacy of GenTAL against many other works proposed in the literature, showing that their solution is better able to grasp all the possible different levels of optimization applied by the compiler (O1 to O3). | SP:db45d6d64491b9bf610e493af2247d9f4d578d68
An Application of Pseudo-log-likelihoods to Natural Language Scoring | Language models built using semi-supervised machine learning on large corpora of natural language have very quickly come to dominate the fields of natural language generation and understanding . In this paper we apply a zero-shot approach , independently developed by a number of researchers , that is now gaining recognition as a significant alternative to fine-tuning for evaluation on common sense tasks . A language model with relatively few parameters and training steps ( albert-xxlarge-v2 ) can outperform a more recent , larger language model ( T5 ) on a recent large data set ( TimeDial ) , while displaying robustness in its performance across a similar class of language tasks . Surprisingly , this result is achieved by using a hyperparameter-free zero-shot method with the smaller model , compared to fine-tuning the larger model . We argue that the robustness of the smaller model ought to be understood in terms of compositionality , in a sense that we draw from recent literature on a class of similar models . We identify a practical cost for our method and model : high GPU time for natural language evaluation . The zero-shot measurement technique that produces remarkable stability , both for ALBERT and other BERT variants , is an application of pseudo-log-likelihoods to masked language models for the relative measurement of probability for substitution alternatives in forced choice language tasks such as the Winograd Schema Challenge , Winogrande , CommonsenseQA , and others . One contribution of this paper is to bring together a number of similar but independent strands of research . We produce some absolute state-of-the-art ( SOTA ) results for common sense reasoning in binary choice tasks , performing better than any published result in the literature , including fine-tuned efforts .
In others our results are SOTA relative to published methods similar to our own – in some cases by wide margins , but below SOTA absolute for fine-tuned alternatives . In addition we show a remarkable consistency of the model ’ s performance under adversarial settings , which we argue is best explained by the model ’ s compositionality of representations . 1 INTRODUCTION . Computational linguistics has made major strides in the adoption of machine learning techniques applied to unstructured corpora consisting of human generated natural language text . For example , some methods take advantage of the frequencies of words in natural human text to productive ends ( Collobert et al. , 2011 ; Mikolov et al. , 2013a ; b ; Peters et al. , 2018 ) . N-gram models providing frequencies of pairs , triples , etc . of words in natural text provided further gains on related tasks . However , a very influential paper in 2018 , signalling a major shift in the application of machine learning to natural text , advocated for an architecture that has “ a more structured memory for handling long-term dependencies in text , compared to alternatives like recurrent networks , resulting in robust transfer performance across diverse tasks. ” ( Radford et al. , 2018 ) This culminated in the application of the Transformer ( Vaswani et al. , 2017 ) to the creation of representations through language prediction tasks , motivated by the importance of long-term dependencies in natural language text for not only the choice of model , but also training data ; as Radford et al . note , “ Crucially , [ BooksCorpus ] , a common corpus for a multitude of emerging transformer models , contains long stretches of contiguous text , which allows the generative model to learn to condition on long-range information. 
” Why might ‘ long stretches of contiguous text ’ , via learning conditioned on that text , lead to success at diverse tasks like natural language inference , question answering , sentence similarity , and classification ( Radford et al. , 2018 , Table 1 ) ? After all , these tasks typically involve very short , independent sections of text . Solving the Winograd Schema Challenge ( WSC ; Levesque et al. , 2012 ) seems to require vast amounts of common sense knowledge , and the job of learning long-term dependencies was supposed to help replace actual knowledge of the world with the proxy knowledge that human-generated text provides . Although language models do well at common sense benchmarks through fine-tuning , when we evaluate them using standard fine-tuning methods with a small , admittedly unreliable ‘ quick-probe ’ , they do not generalize well to new samples that we offer . On the other hand , a recent zero-shot technique using an idiosyncratic model with several unique architectural and training features shows remarkable consistency and strong absolute performance on our unreliable quick probe , but also on a family of challenging common sense problems . 1.1 SUMMARY OF CONTRIBUTIONS . In this paper we investigate the properties of a language model with parameter sharing : albert-xxlarge-v2 , small in both parameter count and pre-training corpus relative to the field of language models generally . We find that the pseudo-log-likelihood ( PLL ) and token-normalized PLL ( NormPLL ) methods for scoring natural language with this model yield a mixture of outright state-of-the-art ( SOTA ) results and robust results that are SOTA among zero-shot methods , across a series of recent binary common sense language tasks . The combination of model and method is remarkably consistent , scoring around 75-80 % under conditions designed to be adversarial to language models .
The approach is also robust against accidental processes that reduce zero-shot performance in language models generally , such as semantically and syntactically noisy data . To our knowledge , our results are SOTA for any approach to the TimeDial ( Qin et al. , 2021 ) dataset ; SOTA for any zero-shot approach to solving the train-xl split of Winogrande ( Sakaguchi et al. , 2020 ) ; SOTA for an average score on the perturbed Winograd set ( Abdou et al. , 2020 ) ; and SOTA for any zero-shot approach to WSC , with the exception of a reported result in which training and testing sets were mixed . In other cases , our approach is SOTA for zero-shot and competitive with fine-tuned approaches . We provide an explanation for the results and their significance . 2 RELATED WORK . 2.1 BIDIRECTIONAL VS. UNIDIRECTIONAL MODELS . The titles of the two most recent GPT papers , ‘ Language Models are Unsupervised Multitask Learners ’ ( Radford et al. , 2019 ) and ‘ Language models are few-shot learners ’ ( Brown et al. , 2020 ) , identify the nature or purpose of machine learning models for language with the purposes those papers put their GPT variants to . Emphatic titles aside , the most influential fine-tuning papers also advocate for few- and zero-shot results . A more important differentiator between GPT-style and NormPLL-suitable models is the significant benefit of a bidirectional masked objective for success with PLL scoring methods over single-directional objectives , as Salazar et al . ( 2020 ) , Zhou et al . ( 2020 ) , and Ma et al . ( 2021 ) show . 2.2 THE ‘ QUICK-PROBE ASSUMPTION ’ .
In his discussion of Winograd schemas , Dennett defines what he calls the ‘ quick-probe assumption ’ : success on a few Winograd schemas in a Turing test-style evaluation ought to indicate generalizability of a computer ’ s ability to make common sense judgements , not merely success at the few examples like it , or examples like it in some superficial way only ( Dennett , 1984 ) . One of us , skeptical of fine-tuning for success at tasks like the Winograd Schema Challenge and similar problems , hand-made a set of 20 sentence pairs1 prior to collaboration on the present paper . 1https : //anonymous.4open.science/r/NotSoFineTuning-4620/ winogradversarial/examples.json We have reproduced the dataset in its entirety in Appendix A.2 . The purpose of this set of Winograd-style pairs is to test whether fine-tuning can be attacked directly , as follows . Suppose a training set contains multiple complete pairs , such that reference is shifted every time a sentence has a twin that is different only in some modifier or short phrase . Then perhaps a pair in which reference isn ’ t shifted will be scored poorly , if the model is spuriously using the modifier trick . This trick can be exploited ( at least in principle ) if , for example , one member of a Winograd schema pair is in the train set , and the other is in the test set 2 . Here is an example from this small , hand-made data set : 1 . This is why people are supposed to take salt tablets when < mask > sweat a lot . Answers : people , salt tablets 2 . This is why people are supposed to take salt tablets when < mask > sweat a little . Answers : people , salt tablets By substituting the answers in for the mask above we get two pairs of sentences for a model to score , or assess the relative likelihood of , resulting in two questions in the style of the suitcase/trophy example above . The correct answer above for both examples is ‘ people ’ , since salt tablets don ’ t sweat .
In Table 1 we compare the performance of a variety of models that have been fine-tuned on Winogrande , a scaled WSC variant debiased against RoBERTa ( Sakaguchi et al. , 2020 ) . We find that the BERT family of language models generally does poorly on this data set when evaluating its fine-tuned discriminator on the data set . On the other hand , using a hyperparameter-free method of scoring sentences with language models , we also score the models – in the second column , there is no training beyond the objective functions of the models during semi-supervised pre-training . Notice that a single model outperforms the others : albert-large . The albert-xxlarge-v2 variant scores an impressive 80 % on the Winogradversarial dataset we present . It is a well-defined question to ask whether this high value for that last variant is a statistical fluke , or evidence of a robust ability to score binary common sense sentence pairs at a rate of around 80 % . An anonymous reviewer points out that , on 20 coin flips , there is a greater than 25 % chance of achieving 12 or more heads , or 60 % accuracy . Therefore the results of Table 1 are not particularly meaningful . We agree : these results are not particularly meaningful , by themselves . This paper argues on the basis of new results on very large binary choice and similar data sets that the 80 % score achieved by albert-xxlarge-v2 is due to its compositionality of representation and corresponding systematicity of behaviour . We also cite independent research that supports our interpretation of our results . 2.3 HYPERPARAMETERS AND ZERO-SHOT . A broad survey of machine learning research concludes that demonstrating an ability to innovate in model construction dominates work done in data set collection and cleaning ( Sambasivan et al. , 2021 ) .
Resource-constrained researchers are able to use platforms like huggingface to leverage pretraining with novel models contributed by other researchers . PLLs applied to language models for common sense tasks present both opportunities and challenges distinct from this standard approach to distributed work in NLP . ( Footnote 2 : This in fact turns out to be the case in the WNLI dataset , which is part of the general natural language understanding benchmark SuperGLUE ( Wang et al. , 2019 ) . ) Predictive language model scoring using a pre-trained deep learning model has been used since at least Linzen et al . ( 2016 ) , although as we discuss below PLLs seem to display unique benefits for architectures with idiosyncratic features such as parameter sharing and bidirectionality . Despite its nascent application , scholarly literature has already recognized availability of GPU time for researchers as a limiting factor in applying NormPLLs ( our term ) for large language models . Laban et al . ( 2021 ) explicitly limit their investigation to the smallest , ‘ base ’ models of BERT and RoBERTa in their published results . As we demonstrate below , ALBERT requires an order of magnitude more GPU time for NormPLL scoring than BERT , but we nevertheless provide results for a number of important data sets using the ‘ xxlarge ’ variant of ALBERT . In Appendix C we compare the approach we share here to a wide range of common sense tasks , including COPA ( Roemmele et al. , 2011 ) . The website associated with the COPA dataset contains an ethics injunction for its users with specific imperatives : researchers should not peek at the data set before evaluating their model on it ; and , researchers should only evaluate their method on COPA once.3 Our zero-shot method scores an impressive 80 % on COPA ; as we argue below , sensitivity of the method to even extra spaces around punctuation marks necessitates a certain amount of familiarity with the data . We see critical views of fine-tuning in industry .
A recent white paper eschews fine-tuning and even few-shot evaluation for assessing the representational quality of a natural language model because of their potential for spurious results.4 Research parallelism can produce spurious results simply as a consequence of the number of hypotheses tested.5 It is beyond the scope of this paper , but there are a number of methods , such as Bonferroni correction , that can be used in settings of multiple hypothesis testing . Regardless of one ’ s priors for the compliance of fine-tuning of language models by the research community with statistical best practices , one may find zero-shot measurements of language models more reliable simply because of the fewer points of possible p-hacking , intentional or otherwise . | The paper makes multiple independent contributions, including: 1. The addition of a new adversarial common-sense reasoning dataset dubbed “Winogradversarial” 2. The explicit gathering together of multiple research threads all exploring the use of pretrained language models for zero-shot prediction on language-understanding tasks 3. The singling-out of the albert-xxlarge-v2 architecture as performing especially well on zero-shot common-sense reasoning tasks, achieving SOTA on the TimeDial dataset The overall message of the paper is that the existing framework of fine-tuning models for common-sense reasoning tasks is flawed and prone to overfitting, and that more robust results can be achieved using pretrained models in a zero-shot manner. In particular, doing so with the albert-xxlarge-v2 model achieves state-of-the-art performance on some benchmarks. | SP:64da35ba47ea28883a1828aa80926f57b037332f |
An Application of Pseudo-log-likelihoods to Natural Language Scoring | Language models built using semi-supervised machine learning on large corpora of natural language have very quickly enveloped the fields of natural language generation and understanding . In this paper we apply a zero-shot approach independently developed by a number of researchers now gaining recognition as a significant alternative to fine-tuning for evaluation on common sense tasks . A language model with relatively few parameters and training steps ( albert-xxlarge-v2 ) can outperform a more recent language model ( T5 ) on a recent large data set ( TimeDial ) , while displaying robustness in its performance across a similar class of language tasks . Surprisingly , this result is achieved by using a hyperparameter-free zero-shot method with the smaller model , compared to fine-tuning the larger model . We argue that the robustness of the smaller model ought to be understood in terms of compositionality , in a sense that we draw from recent literature on a class of similar models . We identify a practical cost for our method and model : high GPU-time for natural language evaluation . The zero-shot measurement technique that produces remarkable stability , both for ALBERT and other BERT variants , is an application of pseudo-log-likelihoods to masked language models for the relative measurement of probability for substitution alternatives in forced choice language tasks such as the Winograd Schema Challenge , Winogrande , CommonsenseQA , and others . One contribution of this paper is to bring together a number of similar , but independent strands of research . We produce some absolute state-of-the-art ( SOTA ) results for common sense reasoning in binary choice tasks , performing better than any published result in the literature , including fine-tuned efforts .
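The pseudo-log-likelihood scoring described above can be sketched as follows. This is a minimal illustration rather than the authors' exact pipeline, and `token_logprob` is a hypothetical stand-in for a real masked-LM call (e.g. ALBERT through a masked-language-modeling head).

```python
import math

def pseudo_log_likelihood(tokens, token_logprob, mask_token="[MASK]"):
    """Pseudo-log-likelihood (PLL): mask each position in turn and sum the
    log-probability the masked LM assigns to the held-out token."""
    total = 0.0
    for i, tok in enumerate(tokens):
        masked = tokens[:i] + [mask_token] + tokens[i + 1:]
        total += token_logprob(masked, i, tok)  # log P(tok | rest of sentence)
    return total

def norm_pll(tokens, token_logprob):
    """Token-normalized PLL (NormPLL): the PLL divided by token count, so
    candidate sentences of different lengths can be compared."""
    return pseudo_log_likelihood(tokens, token_logprob) / len(tokens)
```

In a forced-choice task, each candidate substitution yields a full sentence, and the candidate whose sentence receives the higher NormPLL is chosen.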
In other cases our results are SOTA relative to published methods similar to our own – in some cases by wide margins , but below absolute SOTA for fine-tuned alternatives . In addition we show a remarkable consistency of the model ’ s performance under adversarial settings , which we argue is best explained by the model ’ s compositionality of representations . 1 INTRODUCTION . Computational linguistics has made major strides in the adoption of machine learning techniques applied to unstructured corpora consisting of human generated natural language text . For example , some methods take advantage of the frequencies of words in natural human text to productive ends ( Collobert et al. , 2011 ; Mikolov et al. , 2013a ; b ; Peters et al. , 2018 ) . N-gram models , providing frequencies of pairs , triples , etc . of words in natural text , offered further gains on related tasks . However , a very influential paper in 2018 , signalling a major shift in the application of machine learning to natural text , advocated for an architecture that has “ a more structured memory for handling long-term dependencies in text , compared to alternatives like recurrent networks , resulting in robust transfer performance across diverse tasks. ” ( Radford et al. , 2018 ) This culminated in the application of the Transformer ( Vaswani et al. , 2017 ) to the creation of representations through language prediction tasks , motivated by the importance of long-term dependencies in natural language text for not only the choice of model , but also training data ; as Radford et al . note , “ Crucially , [ BooksCorpus ] , a common corpus for a multitude of emerging transformer models , contains long stretches of contiguous text , which allows the generative model to learn to condition on long-range information.
” Why might ‘ long stretches of contiguous text ’ , via learning conditioned on that text , lead to success at diverse tasks like natural language inference , question answering , sentence similarity , and classification ( Radford et al. , 2018 , Table 1 ) ? After all , these tasks typically involve very short , independent sections of text . Solving the Winograd Schema Challenge ( WSC ) ( Levesque et al. , 2012 ) seems to require vast amounts of common sense knowledge , and the job of learning long-term dependencies was supposed to help replace actual knowledge of the world with the proxy knowledge that human-generated text provides . Although language models do well at common sense benchmarks through fine-tuning , when we evaluate them using standard fine-tuning methods with a small , admittedly unreliable ‘ quick-probe ’ , they do not generalize well to new samples that we offer . On the other hand , a recent zero-shot technique using an idiosyncratic model with several unique architectural and training features shows remarkable consistency and absolute performance on our unreliable quick probe , but also on a family of challenging common sense problems . 1.1 SUMMARY OF CONTRIBUTIONS : . In this paper we investigate the properties of a language model with parameter sharing : albert-xxlarge-v2 , small in both parameter count and pre-training corpus relative to the field of language models generally . We find that pseudo-log-likelihood ( PLL ) and token-normalized PLL ( NormPLL ) methods for scoring natural language with this model perform at a mixture of outright state-of-the-art ( SOTA ) performance and robust performance that is SOTA among zero-shot methods only , across a series of recent binary common sense language tasks . The combination of model and method is remarkably consistent , scoring around 75-80 % under conditions designed to be adversarial against language models .
The approach is also robust against accidental processes that reduce zero-shot performance in language models generally , such as semantically and syntactically noisy data . To our knowledge , our results are SOTA for any approach to the TimeDial ( Qin et al. , 2021 ) dataset ; SOTA for any zero-shot approach to solving the train-xl split of Winogrande ( Sakaguchi et al. , 2020 ) ; SOTA for an average score on the perturbed Winograd set ( Abdou et al. , 2020 ) ; and , SOTA for any zero-shot approach to WSC , with the exception of a reported result in which training and testing sets were mixed . In other cases , our approach is SOTA for zero-shot and competitive with fine-tuned approaches . We provide an explanation for the results and their significance . 2 RELATED WORK . 2.1 BIDIRECTIONAL VS. UNIDIRECTIONAL MODELS . The two most recent GPT papers , ‘ Language Models are Unsupervised Multitask Learners ’ ( Radford et al. , 2019 ) and ‘ Language models are few-shot learners ’ ( Brown et al. , 2020 ) , identify in their titles the nature or purpose of machine learning models for language with the uses to which they put their GPT variants . Emphatic titles aside , the most influential fine-tuning papers also advocate for few- and zero-shot results . A more important differentiator between GPT-style and NormPLL-suitable models is the significant benefit of a bidirectional masked objective for success with PLL scoring methods , compared to unidirectional objectives , as Salazar et al . ( 2020 ) , Zhou et al . ( 2020 ) , and Ma et al . ( 2021 ) show . 2.2 THE ‘ QUICK-PROBE ASSUMPTION ’ .
In his discussion of Winograd schemas , Dennett defines what he calls the ‘ quick-probe assumption ’ : success on a few Winograd schemas in a Turing test-style evaluation ought to indicate generalizability of a computer ’ s ability to make common sense judgements , not merely success at the few examples like it , or examples like it in some superficial way only ( Dennett , 1984 ) . One of us , skeptical of fine-tuning for success at tasks like the Winograd Schema Challenge and similar problems , hand-made a set of 20 sentence pairs1 prior to collaboration on the present paper . 1https : //anonymous.4open.science/r/NotSoFineTuning-4620/ winogradversarial/examples.json We have reproduced the dataset in its entirety in Appendix A.2 . The purpose of this set of Winograd-style pairs is to test whether fine-tuning can be attacked directly , as follows . Suppose a training set contains multiple complete pairs , such that reference is shifted every time a sentence has a twin that is different only in some modifier or short phrase . Then perhaps a pair in which reference isn ’ t shifted will be scored poorly , if the model is spuriously using the modifier trick . This trick can be exploited ( at least in principle ) if , for example , one member of a Winograd schema pair is in the train set , and the other is in the test set 2 . Here is an example from this small , hand-made data set : 1 . This is why people are supposed to take salt tablets when < mask > sweat a lot . Answers : people , salt tablets 2 . This is why people are supposed to take salt tablets when < mask > sweat a little . Answers : people , salt tablets By substituting the answers in for the mask above we get two pairs of sentences for a model to score , or assess the relative likelihood of , resulting in two questions in the style of the suitcase/trophy example above . The correct answer above for both examples is ‘ people ’ , since salt tablets don ’ t sweat .
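The substitute-and-compare procedure above can be sketched as a small helper. Here `score` is any sentence scorer (a NormPLL under a masked LM in the paper); the trivial length-based scorer in the illustration is purely a placeholder, not a language model.

```python
def forced_choice(template, candidates, score):
    """Fill the masked slot with each candidate and keep the candidate
    whose completed sentence the scorer rates highest."""
    filled = {c: template.replace("< mask >", c) for c in candidates}
    return max(candidates, key=lambda c: score(filled[c]))

# Illustration only: a toy scorer that prefers shorter sentences.
sentence = ("This is why people are supposed to take salt tablets "
            "when < mask > sweat a lot .")
best = forced_choice(sentence, ["people", "salt tablets"], lambda s: -len(s))
# -> 'people' (the shorter completion scores higher under this toy scorer)
```

With a real NormPLL scorer, the comparison is between the model's relative likelihoods of the two completed sentences, exactly as in the salt-tablets example.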
In Table 1 we compare the performance of a variety of models that have been fine-tuned on Winogrande , a scaled WSC variant debiased against RoBERTa ( Sakaguchi et al. , 2020 ) . We find that the BERT family of language models generally does poorly on this data set when evaluating its fine-tuned discriminator on the data set . On the other hand , using a hyperparameter-free method of scoring sentences with language models , we also score the models – in the second column , there is no training beyond the objective functions of the models during semi-supervised pre-training . Notice that a single model outperforms the others : albert-large . The albert-xxlarge-v2 variant scores an impressive 80 % on the Winogradversarial dataset we present . It is a well-defined question to ask whether this high value for that last variant is a statistical fluke , or evidence of a robust ability to score binary common sense sentence pairs at a rate of around 80 % . An anonymous reviewer points out that , on 20 coin flips , there is a greater than 25 % chance of achieving 12 or more heads , or 60 % accuracy . Therefore the results of Table 1 are not particularly meaningful . We agree : these results are not particularly meaningful , by themselves . This paper argues on the basis of new results on very large binary choice and similar data sets that the 80 % score achieved by albert-xxlarge-v2 is due to its compositionality of representation and corresponding systematicity of behaviour . We also cite independent research that supports our interpretation of our results . 2.3 HYPERPARAMETERS AND ZERO-SHOT . A broad survey of machine learning research concludes that demonstrating an ability to innovate in model construction dominates work done in data set collection and cleaning ( Sambasivan et al. , 2021 ) .
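The reviewer's binomial figure can be checked exactly with the standard library; note that the greater-than-25 % probability corresponds to scoring 60 % or better, i.e. 12 or more heads out of 20 fair coin flips.

```python
from math import comb

def binom_tail(n, k_min):
    """P(X >= k_min) for X ~ Binomial(n, 1/2), computed exactly."""
    return sum(comb(n, k) for k in range(k_min, n + 1)) / 2 ** n

p_60_or_better = binom_tail(20, 12)  # about 0.252, i.e. greater than 25 %
p_80_or_better = binom_tail(20, 16)  # about 0.006: an 80 % score is far less likely by chance
```

The second figure makes the paper's point concrete: a chance explanation of the 80 % Winogradversarial score is far less plausible than a chance explanation of a 60 % score, though the large-dataset results remain the main evidence.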
Resource-constrained researchers are able to use platforms like huggingface to leverage pretraining with novel models contributed by other researchers . PLLs applied to language models for common sense tasks present both opportunities and challenges distinct from this standard approach to distributed work in NLP . ( Footnote 2 : This in fact turns out to be the case in the WNLI dataset , which is part of the general natural language understanding benchmark SuperGLUE ( Wang et al. , 2019 ) . ) Predictive language model scoring using a pre-trained deep learning model has been used since at least Linzen et al . ( 2016 ) , although as we discuss below PLLs seem to display unique benefits for architectures with idiosyncratic features such as parameter sharing and bidirectionality . Despite its nascent application , scholarly literature has already recognized availability of GPU time for researchers as a limiting factor in applying NormPLLs ( our term ) for large language models . Laban et al . ( 2021 ) explicitly limit their investigation to the smallest , ‘ base ’ models of BERT and RoBERTa in their published results . As we demonstrate below , ALBERT requires an order of magnitude more GPU time for NormPLL scoring than BERT , but we nevertheless provide results for a number of important data sets using the ‘ xxlarge ’ variant of ALBERT . In Appendix C we compare the approach we share here to a wide range of common sense tasks , including COPA ( Roemmele et al. , 2011 ) . The website associated with the COPA dataset contains an ethics injunction for its users with specific imperatives : researchers should not peek at the data set before evaluating their model on it ; and , researchers should only evaluate their method on COPA once.3 Our zero-shot method scores an impressive 80 % on COPA ; as we argue below , sensitivity of the method to even extra spaces around punctuation marks necessitates a certain amount of familiarity with the data . We see critical views of fine-tuning in industry .
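Given the noted sensitivity of PLL scoring to stray spaces around punctuation, a pre-scoring normalization step along the following lines can help. This exact regex cleanup is an assumption for illustration, not the authors' published pipeline.

```python
import re

def normalize_spacing(text):
    """Collapse runs of whitespace and drop spaces before punctuation,
    two formatting artifacts that can shift PLL scores of otherwise
    identical sentences."""
    text = re.sub(r"\s+", " ", text).strip()      # collapse whitespace runs
    text = re.sub(r"\s+([,.;:!?])", r"\1", text)  # no space before punctuation
    return text
```

Applying the same normalization to every candidate sentence ensures that score differences reflect the substitution being tested rather than incidental formatting.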
A recent white paper eschews fine-tuning and even few-shot evaluation for assessing the representational quality of a natural language model because of their potential for spurious results.4 Research parallelism can produce spurious results simply as a consequence of the number of hypotheses tested.5 It is beyond the scope of this paper , but there are a number of methods , such as Bonferroni correction , that can be used in settings of multiple hypothesis testing . Regardless of one ’ s priors for the compliance of fine-tuning of language models by the research community with statistical best practices , one may find zero-shot measurements of language models more reliable simply because of the fewer points of possible p-hacking , intentional or otherwise . | I believe that the main contribution is that the paper shows that albert-xxlarge-v2 is the best zero-shot model on the commonsense datasets (out of multiple models available in huggingface that are being evaluated). However, the authors also seem to argue against finetuning as a general approach for commonsense reasoning, though I don't see how this is backed up by the paper (except for maybe the small experiment with 20 sentence pairs?). | SP:64da35ba47ea28883a1828aa80926f57b037332f |
Contrastive Representation Learning for 3D Protein Structures | 1 INTRODUCTION . In recent years , learning on 3D protein structures has gained a lot of attention in the fields of protein modeling and structural bioinformatics . These neural network architectures process the positions of the atoms and/or amino acids in 3D space in order to make predictions of unprecedented performance , in tasks ranging from protein design ( Ingraham et al. , 2019 ; Strokach et al. , 2020 ; Jing et al. , 2021 ) , over protein structure classification ( Hermosilla et al. , 2021 ) , protein quality assessment ( Baldassarre et al. , 2020 ; Derevyanko et al. , 2018 ) , and protein function prediction ( Amidi et al. , 2017 ; Gligorijevic et al. , 2021 ) – just to name a few . Unfortunately , learning on the structure of proteins suffers from a reduced amount of training data , as compared for example to sequence learning , since 3D structures are harder to obtain and thus less prevalent . While the Protein Data Bank ( PDB ) ( Berman et al. , 2000 ) today contains only around 182K macromolecular structures , the Pfam database ( Mistry et al. , 2020 ) contains 47M protein sequences . Naturally , the number of available structures decreases even further when only the structures labeled with a specific property are considered . We refer to these as annotated protein structures . The SIFTS database , for example , contains around 220K annotated enzymes from 96K different PDB entries , and the SCOPe database contains 226K annotated structures . These numbers are orders of magnitude lower than the data set sizes which led to the major breakthroughs in the field of deep learning . ImageNet ( Russakovsky et al. , 2015 ) , for instance , contains more than 10M annotated images . As learning on 3D protein structures cannot benefit from these large amounts of data , model sizes are limited or overfitting might occur .
In order to take advantage of unlabeled data , researchers have , over the years , designed different algorithms that are able to learn meaningful representations from such data without labels ( Hadsell et al. , 2006 ; Ye et al. , 2019 ; Chen et al. , 2020a ) . In natural language processing , next token prediction or random token masking are commonly used unsupervised training objectives that are able to learn meaningful word representations useful for different downstream tasks ( Peters et al. , 2018 ; Devlin et al. , 2019 ) . Recently , such algorithms have been used to learn meaningful protein representations from unlabeled sequences ( Alley et al. , 2019 ) , or as a pre-training method for later fine-tuning models on different downstream tasks ( Rao et al. , 2019 ) . In computer vision recently , contrastive learning has shown great performance on image classification when used to pre-train deep convolutional neural network ( CNN ) architectures ( Chen et al. , 2020a ; b ) . This pre-training objective has also been used in the context of protein sequence representation learning by dividing sequences in amino acid ’ patches ’ ( Lu et al. , 2020b ) , or by using data augmentation techniques based on protein evolutionary information ( Lu et al. , 2020a ) . Most recently , the contrastive learning framework has been applied to graph convolutional neural networks ( You et al. , 2020 ) . These techniques were tested on protein spatial neighboring graphs ( graphs where edges connect neighbor amino acids in 3D space ) for the binary task of classifying a protein as an enzyme or not . However , these algorithms were designed for arbitrary graphs and did not take into account the underlying structure of proteins . In this work , we introduce a contrastive learning framework for representation learning of 3D protein structures . For each unlabeled protein chain , we select random molecular sub-structures during training .
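A positive pair of sub-structures from one chain can be sampled as in the sketch below. Contiguous-window sampling is an assumption made here for illustration (chosen because the method is said to preserve local sequence information); the paper's exact sampling scheme may differ.

```python
import random

def sample_window(n_residues, min_len, max_len, rng):
    """Sample one contiguous residue window [start, end) from a chain
    (a hypothetical stand-in for the paper's sub-structure sampling)."""
    length = rng.randint(min_len, min(max_len, n_residues))
    start = rng.randint(0, n_residues - length)
    return start, start + length

def positive_pair(n_residues, min_len, max_len, rng=None):
    """Two windows from the same chain form a positive pair for the
    contrastive objective; windows from other chains act as negatives."""
    rng = rng or random.Random()
    return (sample_window(n_residues, min_len, max_len, rng),
            sample_window(n_residues, min_len, max_len, rng))
```

During training, each protein in a mini-batch contributes one such pair, so a batch of N proteins yields 2N sub-structures.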
We then minimize the cosine distance between the learned representations of the sub-structures sampled from the same protein , while maximizing the cosine distance between representations from different protein chains . This training objective enables us to pre-train models on all available annotated , but more importantly also unlabeled , protein structures from the PDB . The obtained representation can later be used as a weight initialization strategy to improve performance on different downstream tasks , such as protein structure classification and function prediction . Moreover , we show how the learned protein representation is able to capture protein structural similarity and functionality , by embedding proteins from the same fold or with similar functions close together in this space . The remainder of this paper is structured as follows . First , we provide a summary of the state-of-the-art in Section 2 . Then , we introduce our framework in Section 3 . Later , in Section 4 , we describe the experiments conducted to evaluate our framework and the representations learned , and lastly , we provide a summary of our findings and possible lines of future research in Section 5 . 2 RELATED WORK . 3D protein structure learning . Early work on learning from 3D protein structures used graph kernels and support vector machines to classify enzymes ( Borgwardt et al. , 2005 ) . Later , the advances in the fields of machine learning and computer vision brought a new set of techniques to the field . Several authors represent the protein tertiary structure as a 3D density map , and process it with a 3D convolutional neural network ( 3DCNN ) . Among the problems addressed with this technique are protein quality assessment ( Derevyanko et al. , 2018 ) , protein enzyme classification ( Amidi et al. , 2017 ) , protein-ligand binding affinity ( Ragoza et al. , 2017 ) , protein binding site prediction ( Jiménez et al.
, 2017 ) and protein-protein interaction interface prediction ( Townshend et al. , 2019 ) . Other authors have used graph convolutional neural networks ( GCNN ) to learn directly from the protein spatial neighboring graph . Some of the tasks solved with these techniques are protein interface prediction ( Fout et al. , 2017 ) , function prediction ( Gligorijevic et al. , 2021 ) , protein quality assessment ( Baldassarre et al. , 2020 ) , and protein design ( Strokach et al. , 2020 ) . Recently , several neural network architectures , specifically designed for protein structures , have been proposed to tackle protein design challenges ( Ingraham et al. , 2019 ; Jing et al. , 2021 ) , or protein fold and function prediction ( Hermosilla et al. , 2021 ) . Protein representation learning . Protein representation learning based on protein sequences is an active area of research . Early works used similar techniques as the ones used in natural language processing to compute embeddings of groups of neighboring amino acids in a sequence ( Asgari & Mofrad , 2015 ) . Recently , other works have used unsupervised learning algorithms from natural language processing such as token masking or next token prediction ( Peters et al. , 2018 ; Devlin et al. , 2019 ) to learn representations from protein sequences ( Alley et al. , 2019 ; Rao et al. , 2019 ; Min et al. , 2020 ; Strodthoff et al. , 2020 ) . This year , Lu et al . ( 2020b ; a ) have suggested using contrastive learning on protein sequences to obtain a meaningful protein representation . Despite the advances in representation learning for protein sequences , representation learning for 3D protein structures has mostly relied on hand-crafted features . La et al . ( 2009 ) proposed a method to compute a vector of 3D Zernike descriptors to represent protein surfaces , which later can be used for shape retrieval . Recently , Guzenko et al .
( 2020 ) used a similar approach to compute a vector of 3D Zernike descriptors directly from the 3D density volume , which can be used later for protein shape comparison . The annual shape retrieval contest ( SHREC ) usually contains a protein shape retrieval track , in which methods are required to determine protein similarity from different protein surfaces ( Langenfeld et al. , 2019 ; 2020 ) . Some of the works presented here make use of 3DCNNs or GCNNs to achieve this goal . However , they operate on protein surfaces , and are either trained in a supervised fashion on the binary shape similarity task , or pre-trained on a classification task . Contrastive learning . In 1992 , Becker & Hinton ( 1992 ) suggested training neural networks through the agreement between representations of the same image under different transformations . Later , Hadsell et al . ( 2006 ) proposed to learn image representations by minimizing the distance between positive pairs and maximizing the distance between negative pairs ( see Figure 1 ) . This idea was used in other works by sampling negative pairs from the mini-batches used during training ( Ji et al. , 2019 ; Ye et al. , 2019 ) . Recently , Chen et al . ( 2020a ; b ) have shown how these methods can improve image classification performance . You et al . ( 2020 ) have transferred these ideas to graphs , by proposing four different data transformations to be used during training : node dropping , edge perturbation , attribute masking , and subgraph sampling . These ideas were tested on the commonly used graph benchmark PROTEINS ( Borgwardt et al. , 2005 ) , composed of only 1,113 proteins . However , since this data set is composed of spatial neighboring graphs of secondary structures , the proposed data augmentation techniques can generate graphs of unconnected chain sections . In this paper instead , we suggest using a domain-specific transformation strategy that preserves the local information of protein sequences .
3 3D PROTEIN CONTRASTIVE LEARNING . 3.1 PROTEIN GRAPH . In this work , the protein chain is defined as a graph G = ( N , R , F , A , B ) , where each node represents the alpha carbon of an amino acid with its 3D coordinates , N ∈ R^( n×3 ) , where n is the number of amino acids in the protein . Moreover , for each node , we store a local frame composed of three orthonormal vectors describing the orientation of the amino acid w.r.t . the protein backbone , R ∈ R^( n×3×3 ) . Lastly , each node also has t different features associated with it , F ∈ R^( n×t ) . The connectivity of the graph is stored in two different adjacency matrices , A ∈ R^( n×n ) and B ∈ R^( n×n ) . Matrix A is defined as A_ij = 1 if amino acids i and j are connected by a peptide bond and A_ij = 0 otherwise . Matrix B is defined as B_ij = 1 if amino acids i and j are at a distance smaller than d in 3D space and B_ij = 0 otherwise . 3.2 CONTRASTIVE LEARNING FRAMEWORK . Inspired by recent works in the computer vision domain ( Ye et al. , 2019 ; Ji et al. , 2019 ; Chen et al. , 2020a ) , our framework is trained by maximizing the similarity between representations from sub-structures of the same protein , and minimizing the similarity between sub-structures from different proteins . More formally , given a protein graph G , we sample two sub-structures G_i and G_j from it . We then compute the latent representations of these sub-structures , h_i and h_j , using a protein graph encoder , h_i = E ( G_i ) . Based on the findings of Chen et al . ( 2020a ) , we further project these latent representations into smaller latent representations , z_i and z_j , using a multilayer perceptron ( MLP ) with a single hidden layer , z_i = P ( h_i ) . Lastly , the similarity between these representations is computed using the cosine distance , s ( z_i , z_j ) .
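The two adjacency matrices of Section 3.1 can be built directly from the alpha-carbon coordinates; a minimal NumPy sketch follows. Whether B includes self-loops is not specified in the text, so the diagonal is zeroed here as an assumption.

```python
import numpy as np

def protein_adjacencies(coords, d):
    """Build A (peptide-bond / backbone adjacency) and B (3D proximity,
    pairwise distance < d) for a chain whose alpha-carbon coordinates
    are the rows of `coords` (shape n x 3)."""
    n = coords.shape[0]
    A = np.zeros((n, n), dtype=int)
    idx = np.arange(n - 1)
    A[idx, idx + 1] = A[idx + 1, idx] = 1  # consecutive residues along the backbone
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    B = (dist < d).astype(int)
    np.fill_diagonal(B, 0)                  # assumption: no self-loops in B
    return A, B
```

Note that A is always a band matrix (each residue bonds only to its chain neighbors), while B depends on the fold: residues far apart in sequence can still be close in space.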
To learn these representations, we use the following loss function for the sub-structure G_i: $l_i = -\log \frac{\exp(s(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i,\, k \neq j]} \exp(s(z_i, z_k)/\tau)}$ (1) where τ is a temperature parameter used to improve learning from 'hard' examples, $\mathbb{1}_{[k \neq i,\, k \neq j]} \in \{0, 1\}$ is an indicator function that evaluates to 1 if k ≠ i and k ≠ j, and N is the number of protein structures in the current mini-batch. To compute l_j we again use Equation 1, but exchange the roles of i and j. This loss has been used before in the context of representation learning (Chen et al., 2020a; van den Oord et al., 2018), and as in previous work, our framework does not explicitly sample negative examples but instead uses the sub-structures sampled from different proteins in the mini-batch as negatives. In the following subsections, we describe the different components of our framework designed to process protein structures. | This paper studies unsupervised contrastive learning on protein structures, using sub-structure sampling as the data transformation strategy in contrastive learning. The protein structure representation from contrastive learning is then evaluated for three tasks: fold classification, enzyme classification, and protein similarity. The main contributions are: 1) Demonstrating that contrastive learning with sub-structure sampling is a viable strategy for learning protein structure representations. 2) Improving empirical results for fold classification and enzyme classification when compared to existing methods. | SP:5f8c45f7d133f3ec6e0280fb596a3242e2f48733 |
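Equation (1) can be sketched in plain NumPy. Note that, as written, the denominator excludes both k = i and k = j (some contrastive losses instead exclude only k = i); the interleaved positive-pair layout z[2m], z[2m+1] below is an assumption for the example, not something the paper specifies:

```python
import numpy as np

def nt_xent_loss(z, tau=0.5):
    """Mean of l_i (Eq. 1) over a batch of 2N projections.

    Rows of z are the projected embeddings; z[2m] and z[2m+1] are
    assumed to be the two sub-structures of the same protein.
    """
    two_n = z.shape[0]
    zn = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = zn @ zn.T  # cosine similarities s(z_i, z_k)
    losses = []
    for i in range(two_n):
        j = i + 1 if i % 2 == 0 else i - 1  # index of the positive pair
        num = np.exp(sim[i, j] / tau)
        keep = np.ones(two_n, dtype=bool)
        keep[[i, j]] = False  # the indicator 1[k != i, k != j]
        den = np.exp(sim[i] / tau)[keep].sum()
        losses.append(-np.log(num / den))
    return float(np.mean(losses))
```

Aligned positive pairs with orthogonal negatives yield a much lower loss than mismatched pairs, which is the behavior the training objective relies on.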
Contrastive Representation Learning for 3D Protein Structures | 1 INTRODUCTION. In recent years, learning on 3D protein structures has gained a lot of attention in the fields of protein modeling and structural bioinformatics. These neural network architectures process the 3D positions of the atoms and/or amino acids in order to make predictions of unprecedented performance, in tasks ranging from protein design (Ingraham et al., 2019; Strokach et al., 2020; Jing et al., 2021) to protein structure classification (Hermosilla et al., 2021), protein quality assessment (Baldassarre et al., 2020; Derevyanko et al., 2018), and protein function prediction (Amidi et al., 2017; Gligorijevic et al., 2021), just to name a few. Unfortunately, learning on the structure of proteins suffers from a reduced amount of training data compared, for example, to sequence learning, since 3D structures are harder to obtain and thus less prevalent. While the Protein Data Bank (PDB) (Berman et al., 2000) today contains only around 182K macromolecular structures, the Pfam database (Mistry et al., 2020) contains 47M protein sequences. Naturally, the number of available structures decreases even further when only the structures labeled with a specific property are considered. We refer to these as annotated protein structures. The SIFTS database, for example, contains around 220K annotated enzymes from 96K different PDB entries, and the SCOPe database contains 226K annotated structures. These numbers are orders of magnitude lower than the data set sizes which led to the major breakthroughs in the field of deep learning. ImageNet (Russakovsky et al., 2015), for instance, contains more than 10M annotated images. As learning on 3D protein structures cannot benefit from such large amounts of data, model sizes are limited or overfitting might occur. 
In order to take advantage of unlabeled data, researchers have, over the years, designed algorithms that are able to learn meaningful representations from such data without labels (Hadsell et al., 2006; Ye et al., 2019; Chen et al., 2020a). In natural language processing, next token prediction and random token masking are commonly used unsupervised training objectives that learn meaningful word representations useful for different downstream tasks (Peters et al., 2018; Devlin et al., 2019). Recently, such algorithms have been used to learn meaningful protein representations from unlabeled sequences (Alley et al., 2019), or as a pre-training method for later fine-tuning models on different downstream tasks (Rao et al., 2019). In computer vision, contrastive learning has recently shown great performance on image classification when used to pre-train deep convolutional neural network (CNN) architectures (Chen et al., 2020a; b). This pre-training objective has also been used in the context of protein sequence representation learning, by dividing sequences into amino acid 'patches' (Lu et al., 2020b) or by using data augmentation techniques based on protein evolutionary information (Lu et al., 2020a). Most recently, the contrastive learning framework has been applied to graph convolutional neural networks (You et al., 2020). These techniques were tested on protein spatial neighboring graphs (graphs where edges connect amino acids that are neighbors in 3D space) for the binary task of classifying a protein as an enzyme or not. However, these algorithms were designed for arbitrary graphs and did not take into account the underlying structure of proteins. In this work, we introduce a contrastive learning framework for representation learning of 3D protein structures. For each unlabeled protein chain, we select random molecular sub-structures during training. 
We then minimize the cosine distance between the learned representations of sub-structures sampled from the same protein, while maximizing the cosine distance between representations from different protein chains. This training objective enables us to pre-train models on all available annotated, but more importantly also unlabeled, protein structures from the PDB. The obtained representation can later be used as a weight initialization strategy to improve performance on different downstream tasks, such as protein structure classification and function prediction. Moreover, we show how the learned protein representation is able to capture protein structural similarity and functionality, by embedding proteins from the same fold or with similar functions close together in this space. The remainder of this paper is structured as follows. First, we provide a summary of the state of the art in Section 2. Then, we introduce our framework in Section 3. Later, in Section 4, we describe the experiments conducted to evaluate our framework and the representations learned, and lastly, we provide a summary of our findings and possible lines of future research in Section 5. 2 RELATED WORK. 3D protein structure learning. Early work on learning from 3D protein structures used graph kernels and support vector machines to classify enzymes (Borgwardt et al., 2005). Later, advances in the fields of machine learning and computer vision brought a new set of techniques to the field. Several authors represent the protein tertiary structure as a 3D density map and process it with a 3D convolutional neural network (3DCNN). Among the problems addressed with this technique are protein quality assessment (Derevyanko et al., 2018), protein enzyme classification (Amidi et al., 2017), protein-ligand binding affinity (Ragoza et al., 2017), protein binding site prediction (Jiménez et al.
, 2017) and protein-protein interaction interface prediction (Townshend et al., 2019). Other authors have used graph convolutional neural networks (GCNNs) to learn directly from the protein spatial neighboring graph. Some of the tasks solved with these techniques are protein interface prediction (Fout et al., 2017), function prediction (Gligorijevic et al., 2021), protein quality assessment (Baldassarre et al., 2020), and protein design (Strokach et al., 2020). Recently, several neural network architectures specifically designed for protein structures have been proposed to tackle protein design challenges (Ingraham et al., 2019; Jing et al., 2021) or protein fold and function prediction (Hermosilla et al., 2021). Protein representation learning. Protein representation learning based on protein sequences is an active area of research. Early works used techniques similar to the ones used in natural language processing to compute embeddings of groups of neighboring amino acids in a sequence (Asgari & Mofrad, 2015). Recently, other works have used unsupervised learning algorithms from natural language processing, such as token masking or next token prediction (Peters et al., 2018; Devlin et al., 2019), to learn representations from protein sequences (Alley et al., 2019; Rao et al., 2019; Min et al., 2020; Strodthoff et al., 2020). This year, Lu et al. (2020b; a) have suggested using contrastive learning on protein sequences to obtain a meaningful protein representation. Despite the advances in representation learning for protein sequences, representation learning for 3D protein structures has mostly relied on hand-crafted features. La et al. (2009) proposed a method to compute a vector of 3D Zernike descriptors to represent protein surfaces, which can later be used for shape retrieval. Recently, Guzenko et al. 
( 2020 ) used a similar approach to compute a vector of 3D Zernike descriptors directly from the 3D density volume, which can later be used for protein shape comparison. The annual shape retrieval contest (SHREC) usually contains a protein shape retrieval track, in which methods are required to determine protein similarity from different protein surfaces (Langenfeld et al., 2019; 2020). Some of the works presented there make use of 3DCNNs or GCNNs to achieve this goal. However, they operate on protein surfaces, and are either trained in a supervised fashion on the binary shape similarity task or pre-trained on a classification task. Contrastive learning. In 1992, Becker & Hinton (1992) suggested training neural networks through the agreement between representations of the same image under different transformations. Later, Hadsell et al. (2006) proposed to learn image representations by minimizing the distance between positive pairs and maximizing the distance between negative pairs (see Figure 1). This idea was used in other works by sampling negative pairs from the mini-batches used during training (Ji et al., 2019; Ye et al., 2019). Recently, Chen et al. (2020a; b) have shown how these methods can improve image classification performance. You et al. (2020) have transferred these ideas to graphs by proposing four different data transformations to be used during training: node dropping, edge perturbation, attribute masking, and subgraph sampling. These ideas were tested on the commonly used graph benchmark PROTEINS (Borgwardt et al., 2005), composed of only 1,113 proteins. However, since this data set is composed of spatial neighboring graphs of secondary structures, the proposed data augmentation techniques can generate graphs of unconnected chain sections. In this paper, we instead suggest a domain-specific transformation strategy that preserves the local information of protein sequences. 
3 3D PROTEIN CONTRASTIVE LEARNING. 3.1 PROTEIN GRAPH. In this work, a protein chain is defined as a graph G = (N, R, F, A, B), where each node represents the alpha carbon of an amino acid with its 3D coordinates, N ∈ R^{n×3}, where n is the number of amino acids in the protein. Moreover, for each node we store a local frame composed of three orthonormal vectors describing the orientation of the amino acid w.r.t. the protein backbone, R ∈ R^{n×3×3}. Lastly, each node also has t associated features, F ∈ R^{n×t}. The connectivity of the graph is stored in two adjacency matrices, A ∈ R^{n×n} and B ∈ R^{n×n}. Matrix A is defined as A_{ij} = 1 if amino acids i and j are connected by a peptide bond and A_{ij} = 0 otherwise. Matrix B is defined as B_{ij} = 1 if amino acids i and j are at a distance smaller than d in 3D space and B_{ij} = 0 otherwise. 3.2 CONTRASTIVE LEARNING FRAMEWORK. Inspired by recent works in the computer vision domain (Ye et al., 2019; Ji et al., 2019; Chen et al., 2020a), our framework is trained by maximizing the similarity between representations of sub-structures of the same protein, and minimizing the similarity between sub-structures from different proteins. More formally, given a protein graph G, we sample two sub-structures G_i and G_j from it. We then compute the latent representations of these sub-structures, h_i and h_j, using a protein graph encoder, h_i = E(G_i). Based on the findings of Chen et al. (2020a), we further project these latent representations into smaller latent representations, z_i and z_j, using a multilayer perceptron (MLP) with a single hidden layer, z_i = P(h_i). Lastly, the similarity between these representations is computed using the cosine distance, s(z_i, z_j). 
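The encode-project-compare pipeline — h = E(G), z = P(h), then s(z_i, z_j) — can be sketched per pair. The tiny stand-in encoder (mean pooling over node features) and the fixed MLP weights below are purely illustrative assumptions, not the architecture used in the paper:

```python
import numpy as np

def cosine_similarity(a, b):
    # s(z_i, z_j): cosine of the angle between the two projections.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in encoder E: mean-pool the node feature matrix of a sub-structure.
def encoder(G):
    return G.mean(axis=0)

# Projection head P: a single-hidden-layer MLP with fixed toy weights.
W1 = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # 3 features -> 2 hidden
W2 = np.eye(2)                                        # 2 hidden -> 2 output

def projector(h):
    return np.maximum(h @ W1, 0.0) @ W2  # ReLU hidden layer

def pair_similarity(G_i, G_j):
    # h_i = E(G_i), z_i = P(h_i), then compare with cosine similarity.
    return cosine_similarity(projector(encoder(G_i)), projector(encoder(G_j)))
```

Two sub-structures whose pooled features point in the same direction score 1.0; differently oriented features score lower, which is the signal the contrastive loss operates on.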
To learn these representations, we use the following loss function for the sub-structure G_i: $l_i = -\log \frac{\exp(s(z_i, z_j)/\tau)}{\sum_{k=1}^{2N} \mathbb{1}_{[k \neq i,\, k \neq j]} \exp(s(z_i, z_k)/\tau)}$ (1) where τ is a temperature parameter used to improve learning from 'hard' examples, $\mathbb{1}_{[k \neq i,\, k \neq j]} \in \{0, 1\}$ is an indicator function that evaluates to 1 if k ≠ i and k ≠ j, and N is the number of protein structures in the current mini-batch. To compute l_j we again use Equation 1, but exchange the roles of i and j. This loss has been used before in the context of representation learning (Chen et al., 2020a; van den Oord et al., 2018), and as in previous work, our framework does not explicitly sample negative examples but instead uses the sub-structures sampled from different proteins in the mini-batch as negatives. In the following subsections, we describe the different components of our framework designed to process protein structures. | This paper presents an unsupervised deep learning approach for learning representations of 3D protein structures. They use an objective function motivated by the recent contrastive learning approaches (from computer vision). They show the utility of their model in three downstream applications: protein fold classification, enzyme classification and protein similarity prediction. The main contributions are: 1. they show how the contrastive learning framework can be used in the context of protein structures 2. they analyze the learned representations 3. they show the utility of the model to downstream applications 4. they show how fine-tuning their model leads to an improved performance | SP:5f8c45f7d133f3ec6e0280fb596a3242e2f48733 |
Kokoyi: Executable LaTeX for End-to-end Deep Learning | 1 INTRODUCTION. The success of deep learning is a tale of two ends. At one end is model development, which leverages the language of mathematics to define models. At the other end is model implementation, which relies on programming languages such as Python and CUDA to unleash the power of big data and efficient computation. Despite the success, the language gap between the two ends is arguably the source of many evils, such as slow prototyping, wrong/inefficient/unreadable implementations, irreproducible results in publications, and learning barriers, to name but a few. A model is often implemented and re-implemented in different, incompatible deep learning frameworks, an unnecessary fragmentation from a model point of view. The mission of KOKOYI is to close the language gap between the two ends, making math-in-code and math-in-model consistent. KOKOYI advocates a simple principle: your model is your code. KOKOYI introduces a programming language called kokoyi-lang, with a programming model elevated to mathematical equations written in LaTeX, a rendering language popular among deep learning researchers (Lamport, 1994). Syntactically, kokoyi-lang is the LaTeX subset prevalent in mathematical definitions of deep learning models, making kokoyi-lang programs renderable just like LaTeX documents. Semantically, kokoyi-lang formalizes the consensus on the semantics of mathematics among deep learning researchers and practitioners. KOKOYI aims at boosting productivity at a community level: researchers code a model once instead of twice (once in a program and once in the publication); because kokoyi-lang is designed to be framework agnostic, a model written in kokoyi-lang in a paper is executable by all and can be ported to different frameworks. 
As a teaser, the following (rendered) line of kokoyi-lang code defines the mean squared error loss function ℓ for a linear regression model with weight w ∈ R^d and bias b ∈ R given a dataset D: $\ell(w, b, D) \gets \frac{1}{|D|} \sum_{(x, y) \in D} ((w \cdot x + b) - y)^2$. Given kokoyi-lang programs, KOKOYI can generate executables using various backends. In its current prototype, KOKOYI generates PyTorch code because of its widespread adoption. However, compared to deep learning frameworks like PyTorch that rely on Python for usability, KOKOYI can potentially avoid Python's overheads without degrading user experience, by translating kokoyi-lang programs into more efficient deep learning IRs such as Relay (Roesch et al., 2018) instead of Python. The level of abstraction offered by the language of mathematics can enable various code optimizations deemed hard for existing deep learning frameworks. An example is auto-batching, which can relieve users from the burden of manually vectorizing per-sample code, an often counter-intuitive endeavor mandated solely by efficiency. Previous attempts at auto-batching, such as TensorFlow-Fold (Looks et al., 2017), DyNet (Neubig et al., 2017) and JAX (Bradbury et al., 2018), rely heavily on just-in-time compilation and are limited to statically shaped samples. By contrast, KOKOYI takes a holistic design: constructs built during compilation facilitate auto-batching during execution. As a result, KOKOYI can auto-batch complex models, such as the auto-regressive attention module in Transformer (Vaswani et al., 2017). KOKOYI focuses on model specification. Our informal survey of sampled papers from several machine learning conferences shows that KOKOYI can impact about 15% to 66% of the work where model development is the focus (Appendix A). As such, we do not attempt to reinvent all the wheels: compiled kokoyi-lang programs are callable Python objects and integrate with other stages (e.g. 
data preparation, model tuning, and model serving) of the deep learning pipeline; whenever possible, KOKOYI reuses efficient existing modules. We implemented a Jupyter Notebook frontend to facilitate interactive rendering and execution of kokoyi-lang programs (see the HTML examples in the supplementary materials). There is also an interactive Web server where one can prototype a model and download both the PyTorch module and the LaTeX snippet. We envision supporting compilation of kokoyi-lang code embedded in LaTeX documents in the future, to enable executable research papers. We implemented several important deep learning models of various complexity in kokoyi-lang, and find that most of our per-sample implementations, with PyTorch as the backend, achieve performance comparable to manually tensorized PyTorch implementations. At the same time, models implemented in kokoyi-lang are substantially more succinct; for instance, the standard Transformer (Vaswani et al., 2017) needs only 65 lines of code, saving users from the tedious and often error-prone process of manually batching computation (Appendix B). Although a large-scale user experience study is yet to be conducted, initial feedback from a group of deep learning researchers and data scientists indicates that, compared to existing deep learning frameworks, kokoyi-lang is much closer to their mathematical formulations of deep learning models. KOKOYI will be open sourced soon. 2 KOKOYI OVERVIEW. KOKOYI aims to be pragmatic while adhering to the "your model is your code" principle. It is built around and leverages the existing Python ecosystem, yet still reserves the flexibility to revert to more efficient execution when possible. Fig. 1(a) depicts the workflow when using KOKOYI. Development starts in an IDE; we currently choose the Jupyter Notebook, a popular interactive environment for live coding and data rendering. 
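Read literally, the teaser loss equation above is a one-liner in ordinary Python. The sketch below is our own per-sample rendering of that equation, not KOKOYI's generated code:

```python
import numpy as np

def mse_loss(w, b, D):
    # ℓ(w, b, D) = (1/|D|) * Σ_{(x, y) ∈ D} ((w · x + b) − y)²
    return sum(((w @ x + b) - y) ** 2 for x, y in D) / len(D)
```

For example, with w = [2], b = 1 and the two samples (x, y) = ([1], 3) and ([2], 4), the predictions are 3 and 5, the squared errors are 0 and 1, and the loss is 0.5.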
Users can write a KOKOYI program – one or more mathematical equations – in a Jupyter Notebook cell, led by the %kokoyi marker to register a hook function that renders the equation via MathJax. Programming in KOKOYI uses kokoyi-lang, a LaTeX dialect extended with syntax familiar to DL researchers. Running the cell triggers the compilation of the equation to generate a callable Python object. In this example (Fig. 1(b)), the returned object represents the module definition of an L-layer RNN that computes the representation of a sentence S. We hook the program execution with a proper error handler to highlight the portion of the equations that causes the error. KOKOYI translates tensors and functions defined in kokoyi-lang into tensors in the underlying DL frameworks and Python functions, and makes them accessible in a Python dictionary (named kokoyi.symbols in Jupyter notebooks). For example, the RNN PyTorch module calls kokoyi.symbols["RNN"] (Fig. 1(c)) in its forward function. KOKOYI also ports many of PyTorch's commonly used modules (e.g. \ReLu); the porting is mostly straightforward, with the only complexities arising from supporting auto-batching. Our current prototype enables one-click auto-generation of a PyTorch module (Fig. 1(c)) once it is specified in kokoyi-lang. This capacity is simplistic for now, and the user is required to complete the rest of the initialization (e.g. selecting the initializer). It is relatively straightforward to generate more initialization code in the future. The KOKOYI compiler generates PyTorch code directly, and thus the execution is just like any other PyTorch program. Note that KOKOYI can in principle completely bypass Python by translating the program into low-level IRs such as Relay IR (Roesch et al., 2018) and LLVM IR (Lattner & Adve, 2004) to enable efficient execution. 
Nevertheless, we choose to focus on Python code generation, mainly due to its dominant ecosystem within the deep learning community. For example, a user can easily replace the model definition in Python with KOKOYI-generated code, while still utilizing the rich code base from the Python community for data preparation, model tuning, serving and visualization, as all the examples in the supplementary do. KOKOYI itself leverages compilers specialized for Python deep learning programs, such as TorchScript (PyTorch, 2021), to further accelerate the generated code. The design of the KOKOYI compiler is covered in Section 4. KOKOYI performs auto-batching, where the model is written entirely from a single-sample point of view but processes batches of samples after compilation (see Section 5). 3 PROGRAMMING IN KOKOYI. Our goal is to design KOKOYI in such a way that it requires almost no learning for researchers who have written papers. In a sense, kokoyi-lang is a dialect of LaTeX. KOKOYI can typeset mathematics written in kokoyi-lang as if they were written in LaTeX. All mathematics in this section are typeset, parsed, and compiled by KOKOYI, thereby removing the gap between math-in-paper and math-in-code. kokoyi-lang is also more than a LaTeX dialect. Mathematics written in kokoyi-lang are executable programs carrying the semantics that most members of the DL community would reasonably expect when reading them. The immediate challenge we face is that math formulations are immensely flexible. Repurposing LaTeX as the programming interface must therefore allow practical and efficient code generation without losing expressiveness. In contrast to programming languages that require alphanumeric variable names, kokoyi-lang allows programmers to denote variables by math symbols typeset in LaTeX. The "real estate" of the sub- and superscripts needs particular attention. 
After several iterations, we decided to leave them as part of the variable name and impose reasonable constraints elsewhere. For instance, we choose f(x; θ) instead of f_θ(x) for learnable functions (i.e. modules), and x[i] (typeset as x[i]) for indexing. We also introduce common Python syntax where there is no ambiguity, such as raising a power and various forms of multiplication; see the supplementary for concrete examples. We now proceed to introduce kokoyi-lang's key constructs: tensors, functions and modules. Tensor definition. A key construct in kokoyi-lang is the tensor definition, which enables one to define (←) a tensor as the value of an expression, with the command \gets: [tensor] ← [expression]. Expressions in tensor definitions can be applications of built-in tensor operations, such as transpose (⊤), element-wise arithmetic (including element-wise reductions such as sum ∑ and product ∏), dot product (·), and indexing (·[·]). The expression in a tensor definition can also be a tensor constructor, which gives an entry-by-entry definition of a tensor. For example, one can define the M × N attention matrix in Transformer (Vaswani et al., 2017) as $\left\{ \mathrm{softmax}\left( \frac{q^\top[i] \cdot k[j]}{\sqrt{d}} \right) \right\}_{i=0}^{M}{}_{j=0}^{N}$ or $\left\{ \mathrm{softmax}\left( \frac{q^\top[i] \cdot k[j]}{\sqrt{d}} \right) \right\}_{i=0}^{M}{}_{j=0}^{i}$ (1), where d is the embedding dimension, and q and k are M × d and N × d matrices consisting of query and key vectors, respectively. The one on the left is full self-attention, whereas the one on the right is auto-regressive, i.e. only attentive to tokens behind the current one; in a sequence-to-sequence model, the first is often used in the encoder and the second in the decoder. One can similarly define a multi-head attention matrix by adding another index that ranges over [0, H), where H is the number of heads (see the supplementary for the full Transformer model). 
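Per sample, both constructors compute a row-wise softmax over scaled dot products. The NumPy sketch below is illustrative only — it is not kokoyi-lang's generated code, and the −inf masking on a square matrix is our own approximation of the auto-regressive (j ≤ i) constructor:

```python
import numpy as np

def attention(q, k, d, autoregressive=False):
    # scores[i, j] = q[i] · k[j] / sqrt(d), an M x N matrix.
    scores = q @ k.T / np.sqrt(d)
    if autoregressive:
        # Entries with j > i are masked out before the softmax.
        mask = np.triu(np.ones(scores.shape, dtype=bool), 1)
        scores = np.where(mask, -np.inf, scores)
    # Row-wise softmax (shifted by the row max for numerical stability).
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

Each row of the result sums to 1, and in the auto-regressive variant the first row attends only to the first key.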
One can also define a tensor in kokoyi-lang by giving a recurrence relation among its parts, such as rows and columns. For example, the hidden state matrix of a vanilla recurrent neural network (RNN) can be given by $h[0 \le t \le L] \gets \begin{cases} h_0 & t = 0 \\ \tanh(W_h \cdot h[t-1] + W_x \cdot x[t] + b) & \text{otherwise} \end{cases}$ (2), where x is a matrix with L rows, each being the embedding of an input token, and W_h, W_x, and h_0 are learnable parameters. This is an example of a KOKOYI module, which we describe next. To facilitate compiler optimization, programmers need to annotate indices where recurrence relations exist with a range (e.g. h[0≤t≤L]), which can depend on the dimensions of other tensors, as in the case of the tensor constructor. Also notice the use of \begin{cases} and \end{cases} (rendered as the big left brace {) in Eq. (2) for branching (Fig. 1(b)). It is common for a recurrence relation to involve more than one tensor. For example, the hidden state and memory cell of a long short-term memory (LSTM) network (Hochreiter & Schmidhuber, 1997) depend on the network's forget, input, and output gates, which in turn depend on the previous hidden state and memory cell. This recursive mutual dependency can be expressed in kokoyi-lang as $f[1 \le t \le L] \gets \sigma(U_f \cdot h[t-1] + V_f \cdot x[t] + b_f)$, $i[1 \le t \le L] \gets \sigma(U_i \cdot h[t-1] + V_i \cdot x[t] + b_i)$, $o[1 \le t \le L] \gets \sigma(U_o \cdot h[t-1] + V_o \cdot x[t] + b_o)$, $\tilde{c}[1 \le t \le L] \gets \sigma(U_c \cdot h[t-1] + V_c \cdot x[t] + b_c)$, $c[0 \le t \le L] \gets \begin{cases} c_0 & t = 0 \\ f[t] \circ c[t-1] + i[t] \circ \tilde{c}[t] & \text{otherwise} \end{cases}$, $h[0 \le t \le L] \gets \begin{cases} h_0 & t = 0 \\ o[t] \circ \tanh(c[t]) & \text{otherwise} \end{cases}$. Function and module. The syntax for tensor definition applies to function definitions as well. The following is a definition of the sigmoid function: $\sigma(x) \gets \frac{1}{1 + e^{-x}}$. kokoyi-lang borrows the concept of module from popular DL frameworks such as PyTorch (Paszke et al., 2019) to enable encapsulation of learnable parameters. 
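The recurrence in Eq. (2) unrolls to a simple per-sample loop — exactly the code users would otherwise write by hand and the compiler would batch automatically. This is our own reference sketch, assuming x holds the L token embeddings in order (so the paper's x[t] corresponds to row t − 1):

```python
import numpy as np

def rnn_hidden_states(x, Wh, Wx, b, h0):
    """Hidden states of Eq. (2): h[0] = h0 and
    h[t] = tanh(Wh · h[t-1] + Wx · x[t] + b) for 1 <= t <= L."""
    L = x.shape[0]  # one row per input token embedding
    h = [h0]
    for t in range(1, L + 1):
        h.append(np.tanh(Wh @ h[t - 1] + Wx @ x[t - 1] + b))
    return np.stack(h)  # shape (L + 1, hidden_dim)
```

With Wh = 0, Wx = I and b = 0, each hidden state degenerates to tanh of the corresponding input, which makes the recurrence easy to check by hand.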
Conceptually, a module is a function parameterized by learnable parameters that are updated iteratively, for example by gradient descent. Consistent with DL practice, kokoyi-lang uses a semicolon to separate learnable parameters from the parameters that the function is applied to: f(x; θ). Similarly, if f uses a component module g that itself contains learnable parameters, it is written as f(x; g). The syntax of module definition resembles that of LaTeX packages for pseudocode. Fig. 1(b) demonstrates the major concepts of kokoyi-lang with an L-layer RNN, where the final hidden representation h[L] embeds the input sentence S, which contains the indices of the embeddings of the tokens. W_x, W_h, b, h_0 and the embedding table D are all trainable parameters. Note that |S| is syntactic sugar to retrieve the first dimension of a multi-dimensional tensor, whereas the more general GetShape function returns the complete shape. $\{0\}^1 \| S$ combines a tensor construction (of a zero input) and concatenation to right-shift the input. The rest of the code is self-explanatory. | This paper presents a LaTeX-based language and compiler called kokoyi to write math-based models and compile them to actual code (such as PyTorch). The authors present an approach to support optimizations such as auto-batching during this compilation process, which significantly reduces user burden. The authors presented kokoyi implementations of several popular models such as MLPs, CNNs, LSTMs and transformers, and showed that the kokoyi compiler does not introduce much performance drop. | SP:e5bb31f2b7a8d8c2c605b71fc625890bb4f5a388 |
Kokoyi: Executable LaTeX for End-to-end Deep Learning | 1 INTRODUCTION. The success of deep learning is a tale of two ends. At one end is model development, which leverages the language of mathematics to define models. At the other end is model implementation, which relies on programming languages such as Python and CUDA to unleash the power of big data and efficient computation. Despite the success, the language gap between the two ends is arguably the source of many evils, such as slow prototyping, wrong/inefficient/unreadable implementations, irreproducible results in publications, and learning barriers, to name but a few. A model is often implemented and re-implemented in different, incompatible deep learning frameworks, an unnecessary fragmentation from a model point of view. The mission of KOKOYI is to close the language gap between the two ends, making math-in-code and math-in-model consistent. KOKOYI advocates a simple principle: your model is your code. KOKOYI introduces a programming language called kokoyi-lang, with a programming model elevated to mathematical equations written in LaTeX, a rendering language popular among deep learning researchers (Lamport, 1994). Syntactically, kokoyi-lang is the LaTeX subset prevalent in mathematical definitions of deep learning models, making kokoyi-lang programs renderable just like LaTeX documents. Semantically, kokoyi-lang formalizes the consensus on the semantics of mathematics among deep learning researchers and practitioners. KOKOYI aims at boosting productivity at a community level: researchers code a model once instead of twice (once in a program and once in the publication); because kokoyi-lang is designed to be framework agnostic, a model written in kokoyi-lang in a paper is executable by all and can be ported to different frameworks. 
As a teaser, the following (rendered) line of kokoyi-lang code defines the mean squared error loss function ℓ for a linear regression model with weight w ∈ R^d and bias b ∈ R given a dataset D: $\ell(w, b, D) \gets \frac{1}{|D|} \sum_{(x, y) \in D} ((w \cdot x + b) - y)^2$. Given kokoyi-lang programs, KOKOYI can generate executables using various backends. In its current prototype, KOKOYI generates PyTorch code because of its widespread adoption. However, compared to deep learning frameworks like PyTorch that rely on Python for usability, KOKOYI can potentially avoid Python's overheads without degrading user experience, by translating kokoyi-lang programs into more efficient deep learning IRs such as Relay (Roesch et al., 2018) instead of Python. The level of abstraction offered by the language of mathematics can enable various code optimizations deemed hard for existing deep learning frameworks. An example is auto-batching, which can relieve users from the burden of manually vectorizing per-sample code, an often counter-intuitive endeavor mandated solely by efficiency. Previous attempts at auto-batching, such as TensorFlow-Fold (Looks et al., 2017), DyNet (Neubig et al., 2017) and JAX (Bradbury et al., 2018), rely heavily on just-in-time compilation and are limited to statically shaped samples. By contrast, KOKOYI takes a holistic design: constructs built during compilation facilitate auto-batching during execution. As a result, KOKOYI can auto-batch complex models, such as the auto-regressive attention module in Transformer (Vaswani et al., 2017). KOKOYI focuses on model specification. Our informal survey of sampled papers from several machine learning conferences shows that KOKOYI can impact about 15% to 66% of the work where model development is the focus (Appendix A). As such, we do not attempt to reinvent all the wheels: compiled kokoyi-lang programs are callable Python objects and integrate with other stages (e.g. 
data preparation , model tuning , and model serving ) of the deep learning pipeline ; whenever possible , KOKOYI reuses efficient existing modules . We implemented a Jupyter Notebook frontend to facilitate interactive rendering and execution of kokoyi-lang programs ( see HTML examples in the supplementary materials ) . There is also an interactive Web server where one can prototype a model and download both the PyTorch module and the LATEX snippet . We envision supporting compilation of kokoyi-lang code embedded in LATEX documents in the future to enable executable research papers . We implemented in kokoyi-lang several important deep learning models of various complexity , and find that most of our per-sample implementations , with PyTorch as backend , achieve performance comparable to manually tensorized PyTorch implementations . At the same time , models implemented in kokoyi-lang are substantially more succinct : for instance , the standard Transformer ( Vaswani et al. , 2017 ) needs only 65 lines of code , saving users from the tedious and often error-prone process of manually batching computation ( Appendix B ) . Although a large-scale user experience study is yet to be conducted , initial feedback from a group of deep learning researchers and data scientists indicates that compared to existing deep learning frameworks , kokoyi-lang is much closer to their mathematical formulations of deep learning models . KOKOYI will be open-sourced soon . 2 KOKOYI OVERVIEW . KOKOYI aims to be pragmatic , while adhering to the “ your model is your code ” principle . It is built around and leverages the existing Python ecosystem , yet still reserves the flexibility to revert to more efficient execution when possible . Fig . 1 ( a ) depicts the workflow when using KOKOYI . Development starts in an IDE ; we currently choose Jupyter Notebook , a popular interactive environment for live coding and data rendering . 
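The teaser loss ℓ defined earlier in this row is ordinary dataset-mean squared error. As a minimal NumPy sketch of the same definition (our own illustration, not KOKOYI-generated code; the toy weights and data are hypothetical):

```python
import numpy as np

def mse_loss(w, b, D):
    """l(w, b, D) <- (1/|D|) * sum over (x, y) in D of ((w . x + b) - y)^2."""
    return sum(((w @ x + b) - y) ** 2 for x, y in D) / len(D)

w = np.array([1.0, 0.0])                       # hypothetical weights: predict x[0]
D = [(np.array([1.0, 2.0]), 1.0),
     (np.array([3.0, 4.0]), 5.0)]              # toy dataset of (x, y) pairs
print(mse_loss(w, 0.0, D))                     # residuals 0 and 4 -> mean 2.0
```

The per-sample formulation mirrors the math directly; KOKOYI's value proposition is that this translation (and its batched version) would be generated automatically.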
Users can write a KOKOYI program – one or more mathematical equations – in a Jupyter Notebook cell , led by the % kokoyi marker to register a hook function that renders the equation via MathJax . Programming in KOKOYI uses kokoyi-lang , a LATEX dialect that is extended with syntax familiar to DL researchers . Running the cell triggers the compilation of the equation to generate a callable Python object . In this example ( Fig . 1 ( b ) ) , the returned object represents the module definition of an L-layer RNN to compute the representation of a sentence S. We hook the program execution with a proper error handler to highlight the portion of the equations that causes the error . KOKOYI translates tensors and functions defined in kokoyi-lang into tensors in underlying DL frameworks and Python functions , and makes them accessible in a Python dictionary ( named kokoyi.symbols in Jupyter notebooks ) . For example , the RNN PyTorch module calls kokoyi.symbols [ `` RNN '' ] ( Fig . 1 ( c ) ) in its forward function . KOKOYI also ports many of PyTorch ' s commonly used modules ( e.g . \ReLu ) ; the porting is mostly straightforward , with the only complexity arising from supporting auto-batching . Our current prototype enables one-click auto-generation of a PyTorch module ( Fig . 1 ( c ) ) once it is specified in kokoyi-lang . This capability is simplistic for now and the user is required to complete the rest of the initialization ( e.g . selecting the initializer ) . It is relatively straightforward to generate more initialization code in the future . The KOKOYI compiler generates PyTorch code directly and thus the execution is just like any other PyTorch program . Note that KOKOYI can in principle completely bypass Python by translating the program into low-level IRs such as Relay IR ( Roesch et al. , 2018 ) and LLVM IR ( Lattner & Adve , 2004 ) to enable efficient execution . 
Nevertheless , we choose to focus on Python code generation mainly due to its dominating ecosystem within the deep learning community . For example , a user can easily replace the model definition in Python with KOKOYI-generated code , while still utilizing the rich code base from the Python community for data preparation , model tuning , serving and visualization , as all the examples in the supplementary do . KOKOYI itself leverages compilers specialized for Python deep learning programs such as TorchScript ( PyTorch , 2021 ) to further accelerate the generated code . The design of the KOKOYI compiler is covered in Section 4 . KOKOYI performs auto-batching , where the model is written out entirely from a single-sample point of view , but processes batches of samples after compilation ( see Section 5 ) . 3 PROGRAMMING IN KOKOYI . Our goal is to design KOKOYI in such a way that it requires almost no learning for researchers who have written papers . In a sense , kokoyi-lang is a dialect of LATEX . KOKOYI can typeset mathematics written in kokoyi-lang as if they were written in LATEX . All mathematics in this section are typeset , parsed , and compiled by KOKOYI , thereby removing the gap between math-in-paper and math-in-code . kokoyi-lang is also more than a LATEX dialect . Mathematics written in kokoyi-lang are executable programs carrying the semantics that most members of the DL community will reasonably expect when reading them . The immediate challenge we face is that math formulations are immensely flexible . Repurposing LATEX as the programming interface must therefore allow practical and efficient code generation without losing expressiveness . In contrast to programming languages that require alphanumeric variable names , kokoyi-lang allows programmers to denote variables by math symbols typeset in LATEX . The “ real estate ” of sub- and superscripts needs particular attention . 
After several iterations , we decide to leave them as part of the variable name and impose reasonable constraints elsewhere . For instance , we choose f ( x ; θ ) instead of fθ ( x ) for learnable functions ( i.e . modules ) , and x [ i ] ( typeset as x [ i ] ) for indexing . We also introduce common Python syntax where there is no ambiguity , such as raising a power and various forms of multiplication ; see the supplementary for concrete examples . We now proceed to introduce kokoyi-lang ' s key constructs : tensors , functions and modules . Tensor definition A key construct in kokoyi-lang is tensor definition , which enables one to define ( ← ) a tensor as the value of an expression , with the command \gets : [ tensor ] ← [ expression ] Expressions in tensor definitions can be applications of built-in tensor operations , such as transpose ( ⊤ ) , element-wise arithmetic ( including element-wise reductions such as sum ∑ and product ∏ ) , dot product ( · ) , and indexing ( · [ · ] ) . The expression in a tensor definition can also be a tensor constructor , which gives an entry-by-entry definition of a tensor . For example , one can define the M × N attention matrix in Transformer ( Vaswani et al. , 2017 ) as { softmax ( q⊤ [ i ] · k [ j ] / √d ) }_{ j=0..N , i=0..M } { softmax ( q⊤ [ i ] · k [ j ] / √d ) }_{ j=0..i , i=0..M } ( 1 ) where d is the embedding dimension , and q and k are M × d and N × d matrices consisting of query and key vectors , respectively . The one on the left is full self-attention whereas the one on the right is auto-regressive , i.e . only attentive to tokens behind the current one ; in a sequence-to-sequence model , the first is often used in the encoder and the second in the decoder . One can similarly define a multi-head attention matrix , by adding another index that ranges over [ 0 , H ) , where H is the number of heads ( see supplementary for the full Transformer model ) . 
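Read operationally, the entry-by-entry constructors of Eq. (1) are per-sample loops: row i is a softmax over the scaled scores q[i]·k[j]/√d, with the auto-regressive variant simply restricting j to 0..i. A NumPy sketch (our own illustration, not KOKOYI output; the causal case assumes M = N):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())        # shift for numerical stability
    return e / e.sum()

def attention(q, k, causal=False):
    """Entry-by-entry M x N attention matrix: row i is softmax over j of
    q[i] . k[j] / sqrt(d); if causal, j only ranges over 0..i."""
    M, d = q.shape
    N = k.shape[0]
    A = np.zeros((M, N))
    for i in range(M):
        J = i + 1 if causal else N                 # auto-regressive cutoff
        scores = np.array([q[i] @ k[j] / np.sqrt(d) for j in range(J)])
        A[i, :J] = softmax(scores)
    return A

rng = np.random.default_rng(0)
q, k = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
full, causal = attention(q, k), attention(q, k, causal=True)
print(np.triu(causal, 1).max())    # entries above the diagonal stay 0 in the causal case
```

This is exactly the per-sample form the text argues a researcher would prefer to write, leaving vectorization and batching to auto-batching machinery.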
One can also define a tensor in kokoyi-lang by giving a recurrence relation among its parts , such as rows and columns . For example , the hidden state matrix of a vanilla recurrent neural network ( RNN ) can be given by h [ 0≤t≤L ] ← { h0 if t = 0 ; tanh ( Wh · h [ t−1 ] + Wx · x [ t ] + b ) otherwise ( 2 ) where x is a matrix with L rows , each being the embedding of an input token , and Wh , Wx , and h0 are learnable parameters . This is an example of a KOKOYI module that we will describe next . To facilitate compiler optimization , programmers need to annotate indices where recurrence relations exist with a range ( e.g . h [ 0≤t≤L ] ) , which can depend on the dimensions of other tensors as in the case of the tensor constructor . Also notice the use of \begin { cases } and \end { cases } ( rendered as the big left brace { ) in Eq . ( 2 ) for branching ( Fig . 1 ( b ) ) . It is common for a recurrence relation to involve more than one tensor . For example , the hidden state and memory cell of a long short-term memory ( LSTM ) network ( Hochreiter & Schmidhuber , 1997 ) depend on the network ' s forget , input , and output gates , which in turn depend on the previous hidden state and memory cell . This recursive mutual dependency can be expressed in kokoyi-lang as f [ 1≤t≤L ] ← σ ( Uf · h [ t−1 ] + Vf · x [ t ] + bf ) i [ 1≤t≤L ] ← σ ( Ui · h [ t−1 ] + Vi · x [ t ] + bi ) o [ 1≤t≤L ] ← σ ( Uo · h [ t−1 ] + Vo · x [ t ] + bo ) c̃ [ 1≤t≤L ] ← σ ( Uc · h [ t−1 ] + Vc · x [ t ] + bc ) c [ 0≤t≤L ] ← { c0 if t = 0 ; f [ t ] ◦ c [ t−1 ] + i [ t ] ◦ c̃ [ t ] otherwise h [ 0≤t≤L ] ← { h0 if t = 0 ; o [ t ] ◦ tanh ( c [ t ] ) otherwise Function and module The syntax for tensor definition applies to function definitions as well . The following is a definition of the sigmoid function : σ ( x ) ← 1 / ( 1 + e^−x ) kokoyi-lang borrows the concept of module from popular DL frameworks such as PyTorch ( Paszke et al. , 2019 ) to enable encapsulation of learnable parameters . 
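The LSTM recurrence above closes over h[t−1] and c[t−1], so operationally it is an ordinary per-sample loop. A NumPy sketch of the same mutual recursion (hypothetical shapes; we keep σ for the candidate cell c̃ exactly as written in the text, and ◦ is the elementwise Hadamard product):

```python
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))   # the sigmoid defined in the text

def lstm(x, U, V, b, h0, c0):
    """Per-sample LSTM: gates f, i, o and candidate c~ at step t depend on
    h[t-1] and x[t]; c and h close the mutual recursion."""
    h, c = h0, c0
    hs = [h0]
    for t in range(len(x)):
        f  = sigma(U["f"] @ h + V["f"] @ x[t] + b["f"])
        i  = sigma(U["i"] @ h + V["i"] @ x[t] + b["i"])
        o  = sigma(U["o"] @ h + V["o"] @ x[t] + b["o"])
        ct = sigma(U["c"] @ h + V["c"] @ x[t] + b["c"])   # candidate cell, as written
        c = f * c + i * ct            # Hadamard products
        h = o * np.tanh(c)
        hs.append(h)
    return np.stack(hs)               # h[0..L]

rng = np.random.default_rng(1)
d, L = 3, 5
U = {g: rng.normal(size=(d, d)) for g in "fioc"}
V = {g: rng.normal(size=(d, d)) for g in "fioc"}
b = {g: np.zeros(d) for g in "fioc"}
hs = lstm(rng.normal(size=(L, d)), U, V, b, np.zeros(d), np.zeros(d))
print(hs.shape)                       # (L + 1, d): h[0] through h[L]
```

This per-sample form is what kokoyi-lang asks the user to write; the manual vectorization across a batch is precisely what auto-batching is meant to eliminate.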
Conceptually , a module is a function parameterized by learnable parameters that are updated iteratively , for example , by gradient descent . Consistent with DL practice , kokoyi-lang separates learnable parameters from parameters that the function can be applied to using a semicolon : f ( x ; θ ) . Similarly , if f uses a component module g that itself contains learnable parameters , it is written as f ( x ; g ) . The syntax of module definition resembles that of LATEX packages for pseudocode . Fig . 1 ( b ) demonstrates the major concepts of kokoyi-lang with an L-layer RNN , where the final hidden representation h [ L ] embeds the input sentence S that contains the indices of the embeddings of the tokens . Wx , Wh , b , h0 and the embedding table D are all trainable parameters . Note that |S| is syntactic sugar to retrieve the first dimension of a multi-dimensional tensor , whereas the more general GetShape function returns the complete shape . { 0 }^1 ‖ S combines a tensor construction ( of a zero input ) with concatenation to right-shift the input . The rest of the code is self-explanatory . | This paper proposes Kokoyi, which can automatically translate mathematics into Python implementations. The proposed tool consists of kokoyi-lang, a programming language with the syntax of LATEX and the semantics of deep learning mathematics, and a compiler and runtime supporting advanced optimizations. Kokoyi is integrated with Jupyter Notebook. To measure the flexibility of Kokoyi, the authors have implemented a variety of popular DL models and performed an evaluation. | SP:e5bb31f2b7a8d8c2c605b71fc625890bb4f5a388 
Contrastive Embeddings for Neural Architectures | The performance of algorithms for neural architecture search strongly depends on the parametrization of the search space . We use contrastive learning to identify networks across different initializations based on their data Jacobians and their number of parameters , and automatically produce the first architecture embeddings independent from the parametrization of the search space . Using our contrastive embeddings , we show that traditional black-box optimization algorithms , without modification , can reach state-of-the-art performance in Neural Architecture Search . As our method provides a unified embedding space , we successfully perform transfer learning between search spaces . Finally , we show the evolution of embeddings during training , motivating future studies into using embeddings at different training stages to gain a deeper understanding of the networks in a search space . 1 INTRODUCTION . Traditionally , the design of state-of-the-art neural network architectures is informed by domain knowledge and requires a large amount of manual work to find the best hyperparameters . However , automated architecture search methods have recently achieved state-of-the-art results on tasks such as image classification , object detection , semantic segmentation and speech recognition ( Ren et al. , 2020 ) . While a significant amount of work has gone into improving the algorithms used for architecture search , there has been only limited work on constructing the space in which these algorithms operate . There exists a vast number of different ways in which the architectures in a search space can be encoded , and the effects of these choices will affect the performance of the search algorithms . Previous embedding methods such as ( Yan et al. , 2020 ) have focused on preserving the edit distance between the computational graph of the architectures in the embedding space . Mellor et al . 
( 2020 ) showed that statistics computed on architectures at initialization , before training , can be used to infer which architectures will perform better after training . Motivated by this , our intent is to automatically learn such statistics at initialization and generate an embedding space based on them , so that the resulting embedding space preserves fundamental properties of the architectures . To achieve our goal , we leverage contrastive learning , which has gathered interest in the computer vision community and produced various state-of-the-art results ( He et al. , 2020 ; Chen et al. , 2020 ; Caron et al. , 2020 ; Grill et al. , 2020 ; Zbontar et al. , 2021 ) . In contrastive learning , the model learns an informative embedding space through a self-supervised pre-training phase : from the images in a batch , pairs are generated through random transformations , and the model is trained to generate similar ( dissimilar ) embeddings for similar ( dissimilar ) images . In this work , we combine contrastive learning with the theory presented by Wang et al . ( 2016 ) on the Jacobians of networks at initialization , in order to find an embedding space suitable for Neural Architecture Search . The embedding space that we generate is invariant to the search space of origin , allowing us to accomplish transfer learning between different search spaces . 1.1 MOTIVATIONS AND CONTRIBUTIONS . We design a method to produce architecture embeddings using Contrastive Learning and information available from their initial state . Our technique is capable of generating embeddings that are independent from the parametrization of the search space , and evolve during training . We leverage these contrastive embeddings in Neural Architecture Search using traditional black-box optimization algorithms . Moreover , since they provide a unified embedding space across search spaces , we exploit them to perform transfer learning between search spaces . 
Parametrization-independent embeddings NAS methods promise to outperform random search ; however , the encoding of the architectures must show some structure for the search algorithm to exploit . These encodings are typically produced by condensing all the parameters used to generate an architecture into a single vector . The method used to generate architectures from a search space thereby implicitly parametrizes it . The parametrization of the search space affects the performance of a NAS algorithm , as noted by e.g . White et al . ( 2020 ) ; Ying et al . ( 2019 ) ; Wang et al . ( 2019 ) . However , when performing architecture search , it is not feasible to test multiple different parametrizations of the search space and evaluate which one performs better : once we have started to evaluate networks in a search space , there is no reason to discard previous evaluations . While the current generation of NAS alleviates the need for experts in the design of architectures , expert knowledge is now needed to build and parametrize a search space compatible with the chosen search algorithm , without which it is exceedingly difficult to outperform a simple random search . We present in Sec . 4 the first method to create network embeddings without relying on their parametrization in any search space , through the combination of modern contrastive learning and the theory of data Jacobian matrices for neural architecture search . Evolution of the embeddings during training In Section 4.4 , we show how the embeddings vary during training , noting that the training procedure tends to connect areas with similar final test accuracy . We hypothesize that this information could enable even more efficient architecture search methods in the future . 
Leveraging traditional black-box algorithms Existing methods to generate architecture embeddings rely on metrics from their computational graphs to identify similar architectures , either by explicitly trying to preserve the edit distance in the embedding space , or by leveraging more sophisticated methods from graph representation learning . Our method leverages the information contained in the Data Jacobian Matrix of networks at initialization to train a contrastive model . As such , it can produce embeddings that more meaningfully capture the structure of the search space . As a result , traditional black-box algorithms perform well for architecture search , as shown for NAS-Bench-201 ( Dong & Yang , 2020 ) in Section 5.1 . Transfer learning between search spaces Our method provides a unified embedding space , since it does not depend on the parametrization of networks in any search space . We exploit this feature to perform , for the first time , transfer learning between two search spaces . In practice , we perform it between the size and the topology spaces in NATS-Bench ( Dong et al. , 2020 ) in Section 5.3 . 2 RELATED WORK . Neural Architecture Search Previous works have attempted to improve network embeddings : Klyuchnikov et al . ( 2020 ) use graph2vec ( Narayanan et al. , 2017 ) to find embeddings such that networks with the same computational graph share the same embeddings , and similarly Yan et al . ( 2020 ) produce embeddings that are invariant to graph isomorphisms . However , their method differs in that it trains a variational graph isomorphism autoencoder to produce the embeddings . They show that their architecture embeddings arch2vec perform better on downstream architecture search than supervised alternatives ; additionally , the embeddings that they produce approximately preserve the edit distance of the DAGs in a continuous space . Wei et al . 
( 2020 ) use a contrastive loss to find a low-dimensional metric space where the graph edit distances of the original parametrization are approximately preserved . In the absence of dense sampling , all of these works rely on the prior that the edit distance is a good predictor for relative performance . In contrast , our method learns to find an embedding space based on intrinsic properties of the architectures . It can therefore discover properties of the architectures that are not present in their graph representation . Data Jacobian Methods based on the Jacobians with respect to the input of trained networks have been shown to provide valuable information for knowledge transfer and distillation ( Czarnecki et al. , 2017 ; Srinivas & Fleuret , 2018 ) , as well as analysis and regularization of networks ( Wang et al. , 2016 ) . Neural Tangent Kernel The Jacobian of the network with respect to the parameters is computed for inference with neural tangent kernels ( NTK ) ( Jacot et al. , 2018 ) . Using NTK as a proxy for NAS ( Park et al. , 2020 ) underperforms the neural network Gaussian process ( NNGP ) kernel . The NNGP provides an inexpensive signal for predicting if an architecture exceeds median performance , but it is worse than training for a few epochs in predicting the order of the top performing architectures . Contrastive learning Different techniques have been developed in contrastive learning . He et al . ( 2020 ) train a network with a contrastive loss against a memory bank of negative samples produced by a slowly moving average version of itself . Chen et al . ( 2020 ) remove the memory bank and just consider negative samples from within the same minibatch . Grill et al . ( 2020 ) remove the negative samples completely but stabilize the training by encoding the positive samples using a momentum encoder . Zbontar et al . 
( 2021 ) use the correlation matrix between the features of the different augmentations and maximize the similarity of the same feature while minimizing the redundancy between features . 3 BACKGROUND . 3.1 TRADITIONAL ARCHITECTURE EMBEDDINGS . A decision tree is created either implicitly or explicitly to sample networks from a search space . To encode an architecture , one records into a vector all the choices made as the decision tree is traversed , and this vector is then used as the embedding of the architecture . Without any additional knowledge , a NAS algorithm will assume that all choices in the decision tree have an equal influence on the characteristics of an architecture . One commonly used encoding scheme consists of choosing the operations on nodes along with a binary adjacency matrix connecting them . 3.2 DATA JACOBIAN . Extended Data Jacobian matrices are used by Wang et al . ( 2016 ) to analyze trained neural networks . We ground our work in their theoretical setting , and introduce the relevant concepts below . Multi Layer Perceptrons with ReLU activations are locally equivalent to a linear model : the ReLU after a linear layer can be combined into a single linear layer , where each row in the matrix is replaced by zeros if the output pre-activation is negative : ReLU ( Wx ) = Ŵx , where Ŵ_ij = W_ij if ( Wx )_i ≥ 0 and Ŵ_ij = 0 otherwise . Since matrix multiplication is closed , within a neighborhood where the signs of all neuron pre-activations are constant , the full network can be replaced by a single matrix . This property can be extended to any model whose layers can be rewritten as linear layers , including convolutional layers and average pooling layers . Max pooling layers also retain this property , and can be treated similarly to ReLU . Therefore , in a local neighbourhood close to x , the full information of a network , f , is contained within its Data Jacobian Matrix ( DJM ) . 
DJM ( x ) = ∂f ( x ) / ∂x , and within that neighbourhood f ( x ) = DJM ( x ) · x . We can evaluate the Data Jacobian Matrix at many different points x to gather information about multiple different neighbourhoods . If we assume the network to have a single output , its DJM is a vector , and we can then stack the DJMs at different points to form the Extended Data Jacobian Matrix ( EDJM ) . If a network has multiple outputs , we can sum them to get a single output , which we use to calculate the EDJM . Wang et al . ( 2016 ) use the singular values of the EDJM to compute a score , and empirically show that the score is correlated with the depth of the network and its model capacity . | The paper proposes a new technique to generate embeddings agnostic of the search space. This can be achieved by first computing the data jacobian of the network with respect to datapoints sampled from different neighbourhoods. This jacobian matrix is then input to a contrastive network which produces architecture embeddings. The contrastive views in this case are different initializations of the same network, which in turn must yield the same embedding. As these embeddings are high dimensional, they are reduced to lower dimension while ensuring that the distance between the embeddings is preserved in the lower dimensional space and the volume associated with each architecture is also preserved. | SP:eef7b413ff0071fd73687ae4ccc3bc753df12742 
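The local-linearity and EDJM constructions described in the row above can be checked numerically. A NumPy sketch with a small bias-free ReLU MLP (hypothetical shapes; the multiple outputs are summed to a single scalar, following the text, so each DJM is a row vector):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(5, 4)), rng.normal(size=(3, 5))   # hypothetical 2-layer net

def f(x):
    return W2 @ np.maximum(W1 @ x, 0.0)       # bias-free ReLU MLP

def djm(x):
    """DJM of the summed output at x: zero out the rows of W1 whose
    pre-activation (W1 x)_i is negative, then the net is a single matrix."""
    W1_hat = (W1 @ x >= 0).astype(float)[:, None] * W1
    return np.ones(3) @ W2 @ W1_hat           # sum of outputs -> one row vector

X = rng.normal(size=(16, 4))                   # evaluation points, one per neighbourhood
EDJM = np.stack([djm(x) for x in X])           # stack the per-point DJMs
s = np.linalg.svd(EDJM, compute_uv=False)      # singular values feed the score of Wang et al.
# bias-free nets are locally linear *and* homogeneous, so sum(f(x)) = DJM(x) . x
print(np.allclose(EDJM[0] @ X[0], f(X[0]).sum()))
```

In the paper's pipeline, such EDJMs (computed at random initialization) are the inputs to the contrastive model; the singular-value score is the statistic the original Wang et al. analysis derives from them.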
Contrastive Embeddings for Neural Architectures | The performance of algorithms for neural architecture search strongly depends on the parametrization of the search space . We use contrastive learning to identify networks across different initializations based on their data Jacobians and their number of parameters , and automatically produce the first architecture embeddings independent from the parametrization of the search space . Using our contrastive embeddings , we show that traditional black-box optimization algorithms , without modification , can reach state-of-the-art performance in Neural Architecture Search . As our method provides a unified embedding space , we successfully perform transfer learning between search spaces . Finally , we show the evolution of embeddings during training , motivating future studies into using embeddings at different training stages to gain a deeper understanding of the networks in a search space . 1 INTRODUCTION . Traditionally , the design of state-of-the-art neural network architectures is informed by domain knowledge and requires a large amount of manual work to find the best hyperparameters . However , automated architecture search methods have recently achieved state-of-the-art results on tasks such as image classification , object detection , semantic segmentation and speech recognition ( Ren et al. , 2020 ) . While a significant amount of work has gone into improving the algorithms used for architecture search , there has been only limited work on constructing the space in which these algorithms operate . There exists a vast number of different ways in which the architectures in a search space can be encoded , and the effects of these choices will affect the performance of the search algorithms . Previous embedding methods such as ( Yan et al. , 2020 ) have focused on preserving the edit distance between the computational graph of the architectures in the embedding space . Mellor et al . 
( 2020 ) showed that statistics computed on architectures at initialization , before training , can be used to infer which architectures will perform better after training . Motivated by this , our intent is to automatically learn such statistics at initialization and generate an embedding space based on them , so that the resulting embedding space preserves fundamental properties of the architectures . To achieve our goal , we leverage contrastive learning , which has gathered interest in the computer vision community and produced various state-of-the-art results ( He et al. , 2020 ; Chen et al. , 2020 ; Caron et al. , 2020 ; Grill et al. , 2020 ; Zbontar et al. , 2021 ) . In contrastive learning , the model learns an informative embedding space through a self-supervised pre-training phase : from the images in a batch , pairs are generated through random transformations , and the model is trained to generate similar ( dissimilar ) embeddings for similar ( dissimilar ) images . In this work , we combine contrastive learning with the theory presented by Wang et al . ( 2016 ) on the Jacobians of networks at initialization , in order to find an embedding space suitable for Neural Architecture Search . The embedding space that we generate is invariant to the search space of origin , allowing us to accomplish transfer learning between different search spaces . 1.1 MOTIVATIONS AND CONTRIBUTIONS . We design a method to produce architecture embeddings using Contrastive Learning and information available from their initial state . Our technique is capable of generating embeddings that are independent from the parametrization of the search space , and evolve during training . We leverage these contrastive embeddings in Neural Architecture Search using traditional black-box optimization algorithms . Moreover , since they provide a unified embedding space across search spaces , we exploit them to perform transfer learning between search spaces . 
Parametrization-independent embeddings NAS methods promise to outperform random search ; however , the encoding of the architectures must show some structure for the search algorithm to exploit . These encodings are typically produced by condensing all the parameters used to generate an architecture into a single vector . The method used to generate architectures from a search space thereby implicitly parametrizes it . The parametrization of the search space affects the performance of a NAS algorithm , as noted by e.g . White et al . ( 2020 ) ; Ying et al . ( 2019 ) ; Wang et al . ( 2019 ) . However , when performing architecture search , it is not feasible to test multiple different parametrizations of the search space and evaluate which one performs better : once we have started to evaluate networks in a search space , there is no reason to discard previous evaluations . While the current generation of NAS alleviates the need for experts in the design of architectures , expert knowledge is now needed to build and parametrize a search space compatible with the chosen search algorithm , without which it is exceedingly difficult to outperform a simple random search . We present in Sec . 4 the first method to create network embeddings without relying on their parametrization in any search space , through the combination of modern contrastive learning and the theory of data Jacobian matrices for neural architecture search . Evolution of the embeddings during training In Section 4.4 , we show how the embeddings vary during training , noting that the training procedure tends to connect areas with similar final test accuracy . We hypothesize that this information could enable even more efficient architecture search methods in the future . 
Leveraging traditional black-box algorithms Existing methods to generate architecture embeddings rely on metrics from their computational graphs to identify similar architectures , either by explicitly trying to preserve the edit distance in the embedding space , or by leveraging more sophisticated methods from graph representation learning . Our method leverages the information contained in the Data Jacobian Matrix of networks at initialization to train a contrastive model . As such , it can produce embeddings that more meaningfully capture the structure of the search space . As a result , traditional black-box algorithms perform well for architecture search , as shown for NAS-Bench-201 ( Dong & Yang , 2020 ) in Section 5.1 . Transfer learning between search spaces Our method provides a unified embedding space , since it does not depend on the parametrization of networks in any search space . We exploit this feature to perform , for the first time , transfer learning between two search spaces . In practice , we perform it between the size and the topology spaces in NATS-Bench ( Dong et al. , 2020 ) in Section 5.3 . 2 RELATED WORK . Neural Architecture Search Previous works have attempted to improve network embeddings : Klyuchnikov et al . ( 2020 ) use graph2vec ( Narayanan et al. , 2017 ) to find embeddings such that networks with the same computational graph share the same embeddings , and similarly Yan et al . ( 2020 ) produce embeddings that are invariant to graph isomorphisms . However , their method differs in that it trains a variational graph isomorphism autoencoder to produce the embeddings . They show that their architecture embeddings arch2vec perform better on downstream architecture search than supervised alternatives ; additionally , the embeddings that they produce approximately preserve the edit distance of the DAGs in a continuous space . Wei et al . 
( 2020 ) use a contrastive loss to find a low-dimensional metric space where the graph edit distances of the original parametrization are approximately preserved . In the absence of dense sampling , all of these works rely on the prior that the edit distance is a good predictor for relative performance . In contrast , our method learns to find an embedding space based on intrinsic properties of the architectures . It can therefore discover properties of the architectures that are not present in their graph representation . Data Jacobian Methods based on the Jacobians with respect to the input of trained networks have been shown to provide valuable information for knowledge transfer and distillation ( Czarnecki et al. , 2017 ; Srinivas & Fleuret , 2018 ) , as well as analysis and regularization of networks ( Wang et al. , 2016 ) . Neural Tangent Kernel The Jacobian of the network with respect to the parameters is computed for inference with neural tangent kernels ( NTK ) ( Jacot et al. , 2018 ) . Using NTK as a proxy for NAS ( Park et al. , 2020 ) underperforms the neural network Gaussian process ( NNGP ) kernel . The NNGP provides an inexpensive signal for predicting if an architecture exceeds median performance , but it is worse than training for a few epochs in predicting the order of the top performing architectures . Contrastive learning Different techniques have been developed in contrastive learning . He et al . ( 2020 ) train a network with a contrastive loss against a memory bank of negative samples produced by a slowly moving average version of itself . Chen et al . ( 2020 ) remove the memory bank and just consider negative samples from within the same minibatch . Grill et al . ( 2020 ) remove the negative samples completely but stabilize the training by encoding the positive samples using a momentum encoder . Zbontar et al . 
(2021) use the correlation matrix between the features of the different augmentations and maximize the similarity of the same feature while minimizing the redundancy between features. 3 BACKGROUND. 3.1 TRADITIONAL ARCHITECTURE EMBEDDINGS. A decision tree is created either implicitly or explicitly to sample networks from a search space. To encode an architecture, one records all the choices made as the decision tree is traversed into a vector, which is then used as the embedding of the architecture. Without any additional knowledge, a NAS algorithm will assume that all choices in the decision tree have an equal importance on the characteristics of an architecture. One commonly used encoding scheme consists of choosing the operations on nodes along with a binary adjacency matrix connecting them. 3.2 DATA JACOBIAN. Extended Data Jacobian matrices are used by Wang et al. (2016) to analyze trained neural networks. We ground our work in their theoretical setting, and introduce the relevant concepts below. Multi-layer perceptrons with ReLU activations are locally equivalent to a linear model: a ReLU after a linear layer can be combined into a single linear layer, where each row of the matrix is replaced by zeros if the corresponding output pre-activation is negative:

ReLU(Wx) = Ŵx,  where  Ŵᵢⱼ = Wᵢⱼ if (Wx)ᵢ ≥ 0, and Ŵᵢⱼ = 0 otherwise.

Since linear maps are closed under composition, within a neighbourhood where the signs of all neuron pre-activations are constant, the full network can be replaced by a single matrix. This property can be extended to any model whose layers can be rewritten as linear layers, including convolutional layers and average pooling layers. Max pooling layers also retain this property, and can be treated similarly to ReLU. Therefore, in a local neighbourhood close to x, the full information of a network f is contained within its Data Jacobian Matrix (DJM).
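The local-linearity property described above is easy to check numerically. The sketch below is illustrative only (the toy bias-free two-layer ReLU network and all shapes are invented): it builds the gated matrix Ŵ from the ReLU activation pattern at a point x and verifies that the composed single matrix reproduces the network output exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny bias-free 2-layer ReLU MLP: f(x) = W2 @ ReLU(W1 @ x).
W1 = rng.standard_normal((5, 3))
W2 = rng.standard_normal((1, 5))

def f(x):
    return W2 @ np.maximum(W1 @ x, 0.0)

x = rng.standard_normal(3)

# Local linearization: zero out the rows of W1 whose pre-activation is
# negative at x, then compose the two linear maps into a single matrix.
mask = (W1 @ x >= 0.0).astype(float)   # ReLU activation pattern at x
W1_hat = W1 * mask[:, None]            # Ŵ1: rows gated by the pattern
DJM = W2 @ W1_hat                      # single matrix replacing f near x

# Within this neighbourhood, f(x) = DJM @ x holds exactly.
assert np.allclose(f(x), DJM @ x)
```

The same gating-then-composition argument applies layer by layer to deeper ReLU networks.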
DJM(x) = ∂f(x)/∂x, and within that neighbourhood f(x) = DJM(x)·x. We can evaluate the Data Jacobian Matrix at many different points x to gather information about multiple different neighbourhoods. If we assume the network to have a single output, its DJM is a vector, and we can then stack the DJMs at different points to form the Extended Data Jacobian Matrix (EDJM). If a network has multiple outputs, we sum them to get a single output, which we use to calculate the EDJM. Wang et al. (2016) use the singular values of the EDJM to compute a score, and empirically show that the score is correlated with the depth of the network and its model capacity. | The paper proposes a self-supervised embedding learning method to learn embeddings of various-sized neural network architectures. Each network is first represented as a low-rank projection of a Jacobian matrix, where the rows are Jacobians (output-averaged if multivariate) evaluated at various inputs at random initialization time, called the EDJM. Since the EDJM is random, multiple such representations are treated as positive pairs for contrastive learning. From the contrastively learned embedding, a further dimensionality-reduction stage is optimized to (1) preserve distances and (2) achieve uniform volume for each architecture. The main application of the final embedding is NAS, where the method outperforms baselines. | SP:eef7b413ff0071fd73687ae4ccc3bc753df12742 |
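The EDJM construction described in the paper text above (stacking single-output DJMs evaluated at many inputs, then scoring the stack by its singular values) can be sketched as follows. The toy network, the shapes, and the particular singular-value normalization are illustrative assumptions, not the exact score used by Wang et al. (2016).

```python
import numpy as np

rng = np.random.default_rng(1)

W1 = rng.standard_normal((8, 4))
W2 = rng.standard_normal((1, 8))   # single output, so each DJM is a row vector

def djm(x):
    # Analytic Data Jacobian of f(x) = W2 @ ReLU(W1 @ x) at the point x.
    mask = (W1 @ x >= 0.0).astype(float)
    return (W2 @ (W1 * mask[:, None])).ravel()

# Stack DJMs at many input points to form the Extended Data Jacobian Matrix.
xs = rng.standard_normal((32, 4))
EDJM = np.stack([djm(x) for x in xs])   # shape (num_points, input_dim)

# Summarize the EDJM by its singular-value spectrum; one simple
# normalized summary is the mean singular value over the largest one.
s = np.linalg.svd(EDJM, compute_uv=False)
score = s.mean() / s.max()
```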
Lagrangian Method for Episodic Learning | 1 INTRODUCTION. Episodic learning Terry (2017) is a general learning paradigm in which the agent learns based on data collected from a sequence of episodes of environmental interactions. Each episode consists of a finite number of decision steps, and the goal is to maximize an episode-wise performance. With one-step episodes, episodic learning subsumes supervised learning and bandit problems as special cases; for sequential decision making problems, an episode may contain multiple (and possibly varying numbers of) steps, in which case episodic learning encompasses most reinforcement learning and imitation learning problems encountered in current AI practice. A pervasive idea in episodic learning, often called the value-based approach Sutton and Barto (2018), seeks to find a value function such that greedily selecting the actions with the best values (as prescribed by this function) gives good performance. For example, in most classification tasks, most deep neural networks are learned (and used) to represent such a value function. Finding good value functions can be highly challenging, especially for tasks with multi-step episodes. In this case, the classic Bellman optimality equation Bertsekas (2019) gives a sufficient condition for an optimal value function (i.e., a value function with which greedy decisions have optimal performance). In this paper, we study a Lagrangian method which learns optimal value functions by characterizing them as saddle points of the Lagrangian function of a variational (re-)formulation of the Bellman equation. The method reveals an elegant theory of policy-value duality in machine learning. A restricted form of this high-level idea has been known as the Linear Programming (LP) approach in the literature of dynamic programming Puterman (1994); Si et al.
(2004); Chen and Wang (2016); Wang (2017), which has also been introduced to reinforcement learning in recent years Dai et al. (2018); Lee and He (2018); Nachum et al. (2019); Yang et al. (2020); Nachum and Dai (2020). This paper provides a new perspective on the foundation of this elegant idea through the following contributions: Firstly, previous works are mostly limited to the discounted-reward setting, which bypasses some technical difficulties Nachum and Dai (2020), but deviates from the practical settings of many (if not most) real-world tasks. This paper fills this well-known gap between theory and practice by establishing a value-based theory (and algorithm) that targets directly learning optimal value functions and policies for the undiscounted episode-wise reward (see (1) in Section 2). Secondly, previous works focused on linear formulations, which enjoy the strong duality property and generic LP techniques, but are consequently limited to treatments of the state-value functions (i.e., V-functions) in the policy optimization setting. However, the optimal state-value function cannot be directly used to derive an optimal policy in model-free learning. In contrast, we prove a minimax theorem for the nonlinear Lagrangian functions associated with the action-value functions (i.e., Q-functions). The theorem opens a door to a nonlinear variational treatment of the Bellman equation, which now enjoys the strong duality property too (see Section 3). In particular, we develop a simple imitation learning algorithm based on the Lagrangian duality thus established, and apply the algorithm to Machine Translation (MT) as a case study. Transformer models trained by our algorithm achieve beam-search-level performance Vaswani et al. (2017) with only greedy decoding, and lead to a 1.4-BLEU improvement when also equipped with beam search (see Section 4).
Last but not least, previous works have exclusively focused on the primal-form optimization formulation of the Bellman equation, and thus are limited to solving for the minimax-type saddle points of the corresponding Lagrangian function. We however show that the minimax saddle points are not necessarily optimal value functions in learning settings. Instead, another class of Lagrangian saddle points – the maximin-type saddle points, which are derived from the dual-form optimization formulation – turn out to have a rigorous guarantee on optimality. This observation points to new directions for this decades-old topic (see Section 5). 2 EPISODIC LEARNING PROCESS AND BELLMAN OPTIMAL VALUE. We start by introducing the mathematical formulation of episodic learning problems, as well as the formal definitions of related concepts. An infinite-horizon MDP is a tuple (S, A, R, P, ρ0), where S is the state space, A the action space, R(s) ∈ [rmin, rmax] is a bounded reward (possibly negative) associated to each state s ∈ S,¹ P(s′|s, a) specifies action-conditioned transition probabilities between states, and ρ0 is the initial distribution with S0 ∼ ρ0. An MDP is finite if both S and A are finite sets. A policy function π : S × A → [0, 1] specifies the action selection probabilities under each state, which induces a Markov chain Pπ[St+1 = s′ | St = s] := Σ_{a∈A} P(s′|s, a) · π(a|s). Let Π denote the policy space, i.e., the set of all policies. Without loss of generality, we assume every state s in the state space S is reachable from the initial state under at least one policy,² where reachable means ∃π ∈ Π such that Σ_{t=0}^{∞} Pπ[St = s] > 0. An Episodic Learning Process (ELP) White (2017); Bojun (2020) is an infinite-horizon MDP that repeatedly enters, and is reset by, a group of terminal states S⊥ (despite its name, a terminal state does not terminate the ELP).
Formally, an infinite-horizon MDP is an ELP if there is a non-empty subset S⊥ ⊆ S such that (1) all terminal states have homogeneous and action-agnostic outbound probabilities: P(s′|s1, a1) = P(s′|s2, a2), ∀s1, s2 ∈ S⊥, ∀a1, a2 ∈ A, ∀s′ ∈ S; (2) the initial state is a terminal state: ρ0(s) = 0, ∀s ∉ S⊥; and (3) the average episode length is finite under any policy: ∀π ∈ Π, Eπ[T] < ∞, where T := inf{t ≥ 1 : St ∈ S⊥} is called the termination time. Bojun (2020) proved that in any ELP, every policy π ∈ Π has a unique stationary distribution ρπ(s, a) such that Pπ[St+1 = s, At+1 = a] = ρπ(s, a) if Pπ[St = s, At = a] = ρπ(s, a). Following White (2017) and Bojun (2020), we use the ELP formalism to study finite-time decision tasks, which arguably account for most AI tasks encountered in current practice. The ELP model formulates the learning process of such a task, which consists of an infinite number of concatenated episodes, each starting and ending at a terminal state. In general, an episode can be arbitrarily long, but will terminate in finite steps with probability 1. The learning agent may choose to use a different behavior policy βt for each step t of the episodic learning process (based on data collected from previous steps), and the goal is to have βt converge (as t goes to infinity) towards an optimal policy that maximizes the expected episode-wise cumulative reward:

J(π) := E_{ζ∼π} [ Σ_{t=1}^{T(ζ)} R(St) ]   (1)

where ζ = (S0, S1, S2, ...) is an infinite trajectory of the Markov chain induced by policy π.

¹ Our state-based reward formulation follows Schulman et al. (2015) and Bojun (2020), and is equivalent to action-based reward formulations in terms of expressiveness. Refer to Appendix A of Bojun (2020) for details.
² Unreachable states are irrelevant to the real world whatsoever, so are excluded from our treatment.
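The episode-wise objective (1) can be made concrete with a small simulation. Below is a minimal sketch on an invented three-state ELP (the states, rewards, and transitions are illustrative assumptions): state 0 is the terminal/initial state, and the parameterized policy gives an analytic value J(π) = 1 + 2p, which we check by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny Episodic Learning Process: state 0 is the (initial) terminal
# state, which always resets to state 1; from state 1, action 0 ends the
# episode while action 1 visits state 2 first. Rewards are state-based.
R = np.array([0.0, 1.0, 2.0])

def run_episode(p, rng):
    """Simulate one episode and return its undiscounted reward, eq. (1)."""
    s, total = 0, 0.0
    while True:
        if s == 0:                       # terminal: action-agnostic reset
            s = 1
        elif s == 1:                     # policy: action 1 with prob. p
            s = 2 if rng.random() < p else 0
        else:                            # state 2 always terminates
            s = 0
        total += R[s]
        if s == 0:                       # reached a terminal state: stop
            return total

# Analytically J(pi) = 1 + 2p; estimate by Monte Carlo at p = 0.5.
p = 0.5
J_hat = np.mean([run_episode(p, rng) for _ in range(20000)])
```

Note that, matching the definition of T, the (zero) reward of the terminal state that ends the episode is included in the sum.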
Note that the termination time T is a random variable whose value may vary across trajectories ζ. See Section D.3 for the ELP formulation of machine translation as an example. A value function Q : S × A → R assigns a real number to each state-action pair (s, a) as the “perceived benefit” of doing action a under state s. We call the set of all possible value functions the value space, denoted by Q. A value function Q ∈ Q is called an optimal value if (any of the) Q-greedy policies, with πQ(a|s) > 0 ⇒ Q(s, a) = max_ā Q(s, ā), is an optimal policy. The Bellman optimal value is a special optimal value function that is characterized by the Bellman optimality operator. In its general form Degris et al. (2012); Sutton et al. (2011; 2014; 2016), a generalized Bellman optimality operator Bγ : Q → Q transforms a value function Q ∈ Q into another value function BγQ ∈ Q such that for any (s, a) ∈ S × A,

Bγ Q(s, a) := Σ_{s′∈S} P(s′|s, a) · ( R(s′) + γ(s′) · max_{a′∈A} Q(s′, a′) )   (2)

where γ : S → [0, 1] is a discounting function over the states. As a classic result, if the discounting function equals a constant γc < 1 on every state, the corresponding Bellman optimality operator Bγc has a unique fixed point in the value space of any MDP. However, the fixed point of Bγc is generally not an optimal value with respect to the undiscounted objective (1). In the following, we present a new theorem that guarantees the uniqueness of the Bellman fixed point for a more general class of discounting functions, among which a particular “episodic discounting function” gives an optimal value w.r.t. objective (1). Both the uniqueness and the optimality are fundamentally rooted in an inherent graph property of ELP-MDPs. See the proof in Appendix A for detailed elaboration. Theorem 1.
In any finite Episodic Learning Process (S, A, P, R, ρ0), let γ be any discounting function such that γ(s) < 1 for all terminal states s ∈ S⊥. Then (1) Bγ has a unique fixed point, i.e., the equation Q = BγQ has a unique solution. (2) The fixed point of Bγ is the limiting point of repeatedly applying Bγ to any Q ∈ Q. (3) The fixed point of Bγ is an optimal value w.r.t. objective (1) if γ is the following episodic discounting function:

γ_epi(s) := 1[s ∉ S⊥] = { 1 for s ∉ S⊥ ; 0 for s ∈ S⊥ }.   (3)

Since we focus on optimizing objective (1), in the rest of the paper: the Bellman optimality operator, denoted by B := Bγ_epi, refers to the specific operator (2) that uses the particular episodic discounting function (3); accordingly, the Bellman optimal value, denoted by Q∗, refers to the unique fixed point of B; and similarly, the Bellman optimality equation refers to the fixed-point equation Q = BQ under episodic discounting, or more explicitly, to the following system of non-linear equations:

Q(s, a) = Σ_{s′∈S} P(s′|s, a) · ( R(s′) + γ_epi(s′) · max_{a′∈A} Q(s′, a′) ),   ∀(s, a) ∈ S × A   (4)

It is worth noting that although the Bellman optimal value function is unique, there can be many optimal value functions in an episodic learning problem. In particular, any value function that gives the same preference order as the Bellman optimal value is an optimal value function. | In this work, the authors consider a nonlinear Q-form Lagrangian function and show the corresponding strong duality property. The main contributions are 1) a new proof showing that the duality gap is zero from a minimax perspective; 2) the generality of the theory, with applications to machine translation tasks. An imitation learning algorithm is proposed as well, and it turns out that it works well compared with the existing one. | SP:89a7b0e7a6b9dc575b6686ac9bad37853b82c321 |
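The episodic Bellman operator of eq. (4) and the convergence claims of Theorem 1 can be checked numerically on a toy example. The sketch below uses an invented three-state deterministic ELP (state 0 terminal; from state 1, action 0 ends the episode for reward 1, action 1 passes through state 2 for total reward 3): iterating the operator converges to a fixed point whose greedy policy is the episode-wise optimal one.

```python
import numpy as np

# Toy ELP: state 0 is terminal; from state 1, action 0 terminates
# (episode reward 1) while action 1 visits state 2 first (episode reward 3).
R = np.array([0.0, 1.0, 2.0])
nxt = np.array([[1, 1],        # terminal state: action-agnostic reset to 1
                [0, 2],        # state 1: action 0 -> terminal, action 1 -> 2
                [0, 0]])       # state 2: always terminates
gamma_epi = np.array([0.0, 1.0, 1.0])   # eq. (3): zero on terminal states

def bellman(Q):
    """Apply the episodic Bellman optimality operator B of eq. (4)."""
    BQ = np.empty_like(Q)
    for s in range(3):
        for a in range(2):
            sp = nxt[s, a]   # deterministic transitions, so no sum over s'
            BQ[s, a] = R[sp] + gamma_epi[sp] * Q[sp].max()
    return BQ

# Theorem 1(2): repeated application converges to the unique fixed point.
Q = np.zeros((3, 2))
for _ in range(50):
    Q = bellman(Q)

greedy_action_at_1 = int(Q[1].argmax())   # greedy policy from Q*
```

Here the fixed point assigns Q*(1, 0) = 1 and Q*(1, 1) = 3 − 1 = 2, so the greedy policy takes the longer path, matching the undiscounted optimum of objective (1).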
Lagrangian Method for Episodic Learning | 1 INTRODUCTION. Episodic learning Terry (2017) is a general learning paradigm in which the agent learns based on data collected from a sequence of episodes of environmental interactions. Each episode consists of a finite number of decision steps, and the goal is to maximize an episode-wise performance. With one-step episodes, episodic learning subsumes supervised learning and bandit problems as special cases; for sequential decision making problems, an episode may contain multiple (and possibly varying numbers of) steps, in which case episodic learning encompasses most reinforcement learning and imitation learning problems encountered in current AI practice. A pervasive idea in episodic learning, often called the value-based approach Sutton and Barto (2018), seeks to find a value function such that greedily selecting the actions with the best values (as prescribed by this function) gives good performance. For example, in most classification tasks, most deep neural networks are learned (and used) to represent such a value function. Finding good value functions can be highly challenging, especially for tasks with multi-step episodes. In this case, the classic Bellman optimality equation Bertsekas (2019) gives a sufficient condition for an optimal value function (i.e., a value function with which greedy decisions have optimal performance). In this paper, we study a Lagrangian method which learns optimal value functions by characterizing them as saddle points of the Lagrangian function of a variational (re-)formulation of the Bellman equation. The method reveals an elegant theory of policy-value duality in machine learning. A restricted form of this high-level idea has been known as the Linear Programming (LP) approach in the literature of dynamic programming Puterman (1994); Si et al.
(2004); Chen and Wang (2016); Wang (2017), which has also been introduced to reinforcement learning in recent years Dai et al. (2018); Lee and He (2018); Nachum et al. (2019); Yang et al. (2020); Nachum and Dai (2020). This paper provides a new perspective on the foundation of this elegant idea through the following contributions: Firstly, previous works are mostly limited to the discounted-reward setting, which bypasses some technical difficulties Nachum and Dai (2020), but deviates from the practical settings of many (if not most) real-world tasks. This paper fills this well-known gap between theory and practice by establishing a value-based theory (and algorithm) that targets directly learning optimal value functions and policies for the undiscounted episode-wise reward (see (1) in Section 2). Secondly, previous works focused on linear formulations, which enjoy the strong duality property and generic LP techniques, but are consequently limited to treatments of the state-value functions (i.e., V-functions) in the policy optimization setting. However, the optimal state-value function cannot be directly used to derive an optimal policy in model-free learning. In contrast, we prove a minimax theorem for the nonlinear Lagrangian functions associated with the action-value functions (i.e., Q-functions). The theorem opens a door to a nonlinear variational treatment of the Bellman equation, which now enjoys the strong duality property too (see Section 3). In particular, we develop a simple imitation learning algorithm based on the Lagrangian duality thus established, and apply the algorithm to Machine Translation (MT) as a case study. Transformer models trained by our algorithm achieve beam-search-level performance Vaswani et al. (2017) with only greedy decoding, and lead to a 1.4-BLEU improvement when also equipped with beam search (see Section 4).
Last but not least, previous works have exclusively focused on the primal-form optimization formulation of the Bellman equation, and thus are limited to solving for the minimax-type saddle points of the corresponding Lagrangian function. We however show that the minimax saddle points are not necessarily optimal value functions in learning settings. Instead, another class of Lagrangian saddle points – the maximin-type saddle points, which are derived from the dual-form optimization formulation – turn out to have a rigorous guarantee on optimality. This observation points to new directions for this decades-old topic (see Section 5). 2 EPISODIC LEARNING PROCESS AND BELLMAN OPTIMAL VALUE. We start by introducing the mathematical formulation of episodic learning problems, as well as the formal definitions of related concepts. An infinite-horizon MDP is a tuple (S, A, R, P, ρ0), where S is the state space, A the action space, R(s) ∈ [rmin, rmax] is a bounded reward (possibly negative) associated to each state s ∈ S,¹ P(s′|s, a) specifies action-conditioned transition probabilities between states, and ρ0 is the initial distribution with S0 ∼ ρ0. An MDP is finite if both S and A are finite sets. A policy function π : S × A → [0, 1] specifies the action selection probabilities under each state, which induces a Markov chain Pπ[St+1 = s′ | St = s] := Σ_{a∈A} P(s′|s, a) · π(a|s). Let Π denote the policy space, i.e., the set of all policies. Without loss of generality, we assume every state s in the state space S is reachable from the initial state under at least one policy,² where reachable means ∃π ∈ Π such that Σ_{t=0}^{∞} Pπ[St = s] > 0. An Episodic Learning Process (ELP) White (2017); Bojun (2020) is an infinite-horizon MDP that repeatedly enters, and is reset by, a group of terminal states S⊥ (despite its name, a terminal state does not terminate the ELP).
Formally, an infinite-horizon MDP is an ELP if there is a non-empty subset S⊥ ⊆ S such that (1) all terminal states have homogeneous and action-agnostic outbound probabilities: P(s′|s1, a1) = P(s′|s2, a2), ∀s1, s2 ∈ S⊥, ∀a1, a2 ∈ A, ∀s′ ∈ S; (2) the initial state is a terminal state: ρ0(s) = 0, ∀s ∉ S⊥; and (3) the average episode length is finite under any policy: ∀π ∈ Π, Eπ[T] < ∞, where T := inf{t ≥ 1 : St ∈ S⊥} is called the termination time. Bojun (2020) proved that in any ELP, every policy π ∈ Π has a unique stationary distribution ρπ(s, a) such that Pπ[St+1 = s, At+1 = a] = ρπ(s, a) if Pπ[St = s, At = a] = ρπ(s, a). Following White (2017) and Bojun (2020), we use the ELP formalism to study finite-time decision tasks, which arguably account for most AI tasks encountered in current practice. The ELP model formulates the learning process of such a task, which consists of an infinite number of concatenated episodes, each starting and ending at a terminal state. In general, an episode can be arbitrarily long, but will terminate in finite steps with probability 1. The learning agent may choose to use a different behavior policy βt for each step t of the episodic learning process (based on data collected from previous steps), and the goal is to have βt converge (as t goes to infinity) towards an optimal policy that maximizes the expected episode-wise cumulative reward:

J(π) := E_{ζ∼π} [ Σ_{t=1}^{T(ζ)} R(St) ]   (1)

where ζ = (S0, S1, S2, ...) is an infinite trajectory of the Markov chain induced by policy π.

¹ Our state-based reward formulation follows Schulman et al. (2015) and Bojun (2020), and is equivalent to action-based reward formulations in terms of expressiveness. Refer to Appendix A of Bojun (2020) for details.
² Unreachable states are irrelevant to the real world whatsoever, so are excluded from our treatment.
Note that the termination time T is a random variable whose value may vary across trajectories ζ. See Section D.3 for the ELP formulation of machine translation as an example. A value function Q : S × A → R assigns a real number to each state-action pair (s, a) as the “perceived benefit” of doing action a under state s. We call the set of all possible value functions the value space, denoted by Q. A value function Q ∈ Q is called an optimal value if (any of the) Q-greedy policies, with πQ(a|s) > 0 ⇒ Q(s, a) = max_ā Q(s, ā), is an optimal policy. The Bellman optimal value is a special optimal value function that is characterized by the Bellman optimality operator. In its general form Degris et al. (2012); Sutton et al. (2011; 2014; 2016), a generalized Bellman optimality operator Bγ : Q → Q transforms a value function Q ∈ Q into another value function BγQ ∈ Q such that for any (s, a) ∈ S × A,

Bγ Q(s, a) := Σ_{s′∈S} P(s′|s, a) · ( R(s′) + γ(s′) · max_{a′∈A} Q(s′, a′) )   (2)

where γ : S → [0, 1] is a discounting function over the states. As a classic result, if the discounting function equals a constant γc < 1 on every state, the corresponding Bellman optimality operator Bγc has a unique fixed point in the value space of any MDP. However, the fixed point of Bγc is generally not an optimal value with respect to the undiscounted objective (1). In the following, we present a new theorem that guarantees the uniqueness of the Bellman fixed point for a more general class of discounting functions, among which a particular “episodic discounting function” gives an optimal value w.r.t. objective (1). Both the uniqueness and the optimality are fundamentally rooted in an inherent graph property of ELP-MDPs. See the proof in Appendix A for detailed elaboration. Theorem 1.
In any finite Episodic Learning Process (S, A, P, R, ρ0), let γ be any discounting function such that γ(s) < 1 for all terminal states s ∈ S⊥. Then (1) Bγ has a unique fixed point, i.e., the equation Q = BγQ has a unique solution. (2) The fixed point of Bγ is the limiting point of repeatedly applying Bγ to any Q ∈ Q. (3) The fixed point of Bγ is an optimal value w.r.t. objective (1) if γ is the following episodic discounting function:

γ_epi(s) := 1[s ∉ S⊥] = { 1 for s ∉ S⊥ ; 0 for s ∈ S⊥ }.   (3)

Since we focus on optimizing objective (1), in the rest of the paper: the Bellman optimality operator, denoted by B := Bγ_epi, refers to the specific operator (2) that uses the particular episodic discounting function (3); accordingly, the Bellman optimal value, denoted by Q∗, refers to the unique fixed point of B; and similarly, the Bellman optimality equation refers to the fixed-point equation Q = BQ under episodic discounting, or more explicitly, to the following system of non-linear equations:

Q(s, a) = Σ_{s′∈S} P(s′|s, a) · ( R(s′) + γ_epi(s′) · max_{a′∈A} Q(s′, a′) ),   ∀(s, a) ∈ S × A   (4)

It is worth noting that although the Bellman optimal value function is unique, there can be many optimal value functions in an episodic learning problem. In particular, any value function that gives the same preference order as the Bellman optimal value is an optimal value function. | This paper studies Q-learning in episodic learning from a Lagrangian formulation of the Q-form Bellman optimality equation. On the theory side, this paper studies (1) the fixed point of the Bellman optimality operator when the discounting factor for non-terminal states is 1; (2) the strong duality of the considered constrained-optimization, i.e., saddle-point, reformulation; and (3) maximin-type saddle points.
On the empirical side, this paper studies the efficacy of the minimax imitation learning algorithm for a machine translation application. | SP:89a7b0e7a6b9dc575b6686ac9bad37853b82c321 |
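One point made in the paper text above is that the fixed point of a constant-discount operator Bγc is generally not optimal for the undiscounted objective (1). The toy example below makes this concrete; the five-state ELP is invented for illustration. From state 1, action 0 collects reward 1 immediately while action 1 waits one step for reward 1.5: with a constant discount of 0.5 the greedy action prefers the quick reward, whereas episodic discounting (3) recovers the episode-wise optimum.

```python
import numpy as np

# Toy ELP where short-term and episode-wise optima disagree.
R = np.array([0.0, 0.0, 1.0, 0.0, 1.5])
nxt = np.array([[1, 1],    # 0: terminal, resets to 1
                [2, 3],    # 1: a0 -> quick reward, a1 -> delayed reward
                [0, 0],    # 2: terminate
                [4, 4],    # 3: pass through
                [0, 0]])   # 4: terminate
terminal = np.array([True, False, False, False, False])

def fixed_point(gamma):
    """Iterate the generalized operator B_gamma of eq. (2) to its fixed point."""
    Q = np.zeros((5, 2))
    for _ in range(200):
        BQ = np.empty_like(Q)
        for s in range(5):
            for a in range(2):
                sp = nxt[s, a]  # deterministic transitions, so no sum over s'
                BQ[s, a] = R[sp] + gamma[sp] * Q[sp].max()
        Q = BQ
    return Q

Q_const = fixed_point(np.full(5, 0.5))              # constant discount 0.5
Q_epi = fixed_point(np.where(terminal, 0.0, 1.0))   # episodic discounting (3)

greedy_const = int(Q_const[1].argmax())   # -> 0: prefers the quick reward
greedy_epi = int(Q_epi[1].argmax())       # -> 1: maximizes objective (1)
```

Note that setting γ = 0 on terminal states is also what makes the iteration converge here despite the reset loop: the discount chain is broken every time an episode ends.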
Neural Manifold Clustering and Embedding | Given a union of non-linear manifolds, non-linear subspace clustering or manifold clustering aims to cluster data points based on manifold structures and also to learn to parameterize each manifold as a linear subspace in a feature space. Deep neural networks have the potential to achieve this goal under highly non-linear settings given their large capacity and flexibility. We argue that achieving manifold clustering with neural networks requires two essential ingredients: a domain-specific constraint that ensures the identification of the manifolds, and a learning algorithm for embedding each manifold into a linear subspace in the feature space. This work shows that many constraints can be implemented by data augmentation. For subspace feature learning, the Maximum Coding Rate Reduction (MCR2) objective can be used. Putting them together yields Neural Manifold Clustering and Embedding (NMCE), a novel method for general purpose manifold clustering, which significantly outperforms autoencoder-based deep subspace clustering and achieves state-of-the-art performance on several important benchmarks. Further, on more challenging natural image datasets, NMCE can also outperform other algorithms specifically designed for clustering. Qualitatively, we demonstrate that NMCE learns a meaningful and interpretable feature space. As the formulation of NMCE is closely related to several important self-supervised learning (SSL) methods, we believe this work can help us build a deeper understanding of SSL representation learning. 1 INTRODUCTION. Here we investigate unsupervised representation learning, which aims to learn structures (features) from data without the use of any labels. If the data lie in a linear subspace, the subspace can be extracted by Principal Component Analysis (PCA) (Jolliffe, 1986), one of the most basic forms of unsupervised learning.
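Since PCA is invoked above as the basic linear case, here is a minimal numpy sketch of extracting a linear subspace via the SVD of centered data; the toy dimensions, noise level, and sample count are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points near a 2-dimensional linear subspace of R^5 (small noise).
basis = np.linalg.qr(rng.standard_normal((5, 2)))[0]       # orthonormal basis
X = (rng.standard_normal((200, 2)) @ basis.T
     + 0.01 * rng.standard_normal((200, 5)))

# PCA via SVD of the centered data matrix: right singular vectors are
# the principal directions, singular values give component variances.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# Nearly all variance is explained by the first two principal components,
# and their span recovers the generating subspace.
explained = (s[:2] ** 2).sum() / (s ** 2).sum()
```

The alignment of the recovered subspace with the true one can be measured by the singular values of `basis.T @ Vt[:2].T` (cosines of the principal angles), which are close to 1 here.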
When the data occupy a union of several linear subspaces, subspace clustering (SC) (Vidal et al., 2016) is needed to cluster each data point to a subspace as well as to estimate the parameters of each subspace. Here we are concerned with even more challenging scenarios, when data points come from a union of several non-linear low-dimensional manifolds. In such scenarios, the clustering problem can be formulated as follows (Elhamifar & Vidal, 2011): Task 1. Manifold Clustering and Embedding: Given that the data points come from a union of low-dimensional manifolds, we shall segment the data points based on their corresponding manifolds, and obtain a low-dimensional embedding for each manifold. Various methods have been developed to solve this problem (Abdolali & Gillis, 2021), but it is still an open question how to use neural networks effectively in manifold clustering problems (Haeffele et al., 2020). In this paper, we propose neural manifold clustering and embedding (NMCE), which follows three principles: 1) The clustering and representation should respect a domain-specific constraint, e.g., local neighborhoods, local linear interpolation or data augmentation invariances. 2) The embedding of a particular manifold shall not collapse. 3) The embeddings of the identified manifolds shall be linearized and separated, i.e., they occupy different linear subspaces. We achieve 1) using data augmentations, and achieve 2) and 3) with the subspace feature learning algorithm based on the maximal coding rate reduction (MCR2) objective (Yu et al., 2020). This work makes the following specific contributions: 1. We combine data augmentation with MCR2 to yield a novel algorithm for general purpose manifold clustering and embedding (NMCE). We also discuss connections between the algorithm and self-supervised contrastive learning. 2.
We demonstrate that NMCE achieves state-of-the-art performance on standard subspace clustering benchmarks, and can outperform the best clustering algorithms on more challenging high-dimensional image datasets like CIFAR-10 and CIFAR-20. Further, empirical evaluation suggests that our algorithm also learns a meaningful feature space. 2 RELATED WORK. Manifold Learning. In classical manifold learning, the goal is to map the manifold-structured data points to a low-dimensional representation space such that the manifold structure is preserved. There are two key ingredients: 1) Choosing a geometric property of the original data space to be preserved, for example the local Euclidean neighborhood (Belkin & Niyogi, 2003), or linear interpolation by neighboring data points (Roweis & Saul, 2000). 2) The embedding should not collapse to a trivial solution. To avoid the trivial solution, the variance of the embedding space is typically constrained in spectral manifold learning methods. Manifold Clustering and Embedding. When the data should be modeled as a union of several manifolds, manifold clustering is needed in addition to manifold learning. When these manifolds are linear, subspace clustering algorithms (Ma et al., 2007; Elhamifar & Vidal, 2013; Vidal et al., 2016) can be used. When they are non-linear, manifold clustering and embedding methods were proposed. They generally divide into three categories (Abdolali & Gillis, 2021): 1. Locality preserving. 2. Kernel based. 3. Neural network based. Locality preserving techniques implicitly assume that the manifolds are smooth and sampled densely (Souvenir & Pless, 2005; Elhamifar & Vidal, 2011; Chen et al., 2018). Additionally, a smoothness assumption can be employed (Gong et al., 2012). Our method generalizes those techniques by realizing them with geometric constraints.
The success of kernel-based techniques depends strongly on the suitability of the underlying kernel , and generally requires a representation of the data in a space of higher dimension than the data space ( Patel & Vidal , 2014 ) . Deep subspace clustering methods , such as Ji et al . ( 2017 ) ; Zhang et al . ( 2019 ) ; Zhou et al . ( 2018 ) , jointly perform linear subspace clustering and representation learning of the data , and have the potential to handle high-dimensional data effectively . However , it has been shown that most performance gains obtained by those methods should be attributed to an ad-hoc post-processing step applied to the self-expression matrix ; using neural networks provides only a very marginal gain compared to clustering the raw data directly with linear SC ( Haeffele et al. , 2020 ) . Our work differs from those techniques mainly in two aspects : i ) While most of the previous methods were generative ( autoencoders , GANs ) , our loss function is defined in the latent embedding space and is best understood as a contrastive method . ii ) While previous methods use self-expression-based SC to guide feature learning , ours uses MCR2 to learn the subspace features . Recently , some deep SC techniques have also applied data augmentation ( Sun et al. , 2019 ; Abavisani et al. , 2020 ) . However , in those works , data augmentation played a complementary role of improving performance ; in our method , data augmentation plays a central role in enabling the identification of the clusters . Self-Supervised Representation Learning . Recently , self-supervised representation learning has achieved tremendous success with deep neural networks . Similar to manifold clustering and embedding , there are two essential ingredients : 1 ) Data augmentations are used to define the domain-specific invariance . 2 ) The latent representation should not collapse . The second requirement can be achieved either by contrastive learning ( Chen et al
, 2020 ) , a momentum encoder ( He et al. , 2020 ; Grill et al. , 2020 ) , or a Siamese network structure ( Chen & He , 2021 ) . More directly related to our work is Tao et al . ( 2021 ) , which proposed feature orthogonalization and decorrelation alongside contrastive learning . Recently , variance regularization alone was also successfully used to achieve principle 2 ) ( Zbontar et al. , 2021 ; Bardes et al. , 2021 ) , attaining state-of-the-art SSL representation performance . Part of our method , the total coding rate ( TCR ) objective , achieves a similar effect ; see the discussion in Appendix A.6 . However , beyond self-supervised features , our algorithm additionally shows strong clustering performance , and directly learns a meaningful latent space . The simultaneous manifold clustering and embedding in NMCE is also related to online deep clustering methods ( Caron et al. , 2018 ; 2020 ) . Also notable is Deshmukh et al . ( 2021 ) , where the concept of population consistency is closely related to the constraint functional we discuss . Clustering with Data Augmentation . Our method uses data augmentation to ensure correct clustering of the training data . Although not explicitly pointed out , this is also the case for other clustering techniques ( Shen et al. , 2021 ; Tsai et al. , 2020 ; Li et al. , 2021 ) . Our understanding of data augmentation is also consistent with works that specifically study the success of data augmentations ( HaoChen et al. , 2021 ; von Kügelgen et al. , 2021 ) . 3 NEURAL MANIFOLD CLUSTERING AND EMBEDDING . 3.1 PROBLEM SETUP . Assume data points x_i ∈ R^d are sampled from a union ∪_{j=1}^n X_j of manifolds X_1 , X_2 , ... , X_n.¹ As stated in Task 1 , the goal of manifold clustering and embedding is to assign each data point to the corresponding manifold ( clustering ) , as well as to learn a coordinate for each manifold ( manifold learning ) .
To achieve this goal , we use a neural network f , which learns to map a data point x to a feature embedding z ∈ R^{d_emb} and a cluster assignment c ∈ [ 1 , n ] , i.e . z , c = f ( x ) . The cluster assignment shall be equal to the ground-truth manifold assignment,² and z should parameterize the coordinates of the corresponding manifold X_c . To make the feature space easier to work with , one can enforce the additional separability requirement that for any X_j , X_k with j ≠ k , the feature vectors are perpendicular , Z_j ⊥ Z_k . Here Z_j denotes the embedding feature vectors of data points in X_j , and we define perpendicularity between two sets in the following fashion : if Z̃_j ⊆ Z_j and Z̃_k ⊆ Z_k are such that ∀ z_j ∈ Z̃_j , z_k ∈ Z̃_k we have z_j · z_k ≠ 0 , then either Z̃_j or Z̃_k has zero measure . In the following , we first argue that to make clustering possible with a neural network , one should define an additional geometric constraint that makes the manifold clusters identifiable . Second , we discuss how to implement the required geometric constraints and combine them with a recently proposed joint clustering and subspace learning algorithm , MCR2 ( Yu et al. , 2020 ) , to achieve neural manifold clustering and embedding ( NMCE ) . 3.2 CLUSTERING ALWAYS INVOLVES IMPLICIT ASSUMPTIONS . Even the simplest clustering algorithms rely on implicit assumptions . For example , in k-means clustering , the implicit assumption is that the clusters in the original data space are continuous in terms of L2 distance . For linear SC , the assumption is that data points are collinear within each cluster . If a neural network is trained on example data to learn a cluster assignment function c = f ( x ) , the resulting clustering will be arbitrary and will not resemble the solution of k-means or linear SC clustering . ( ¹ We do not consider topological issues here , and assume that all manifolds are homeomorphic to R^{d_i} for some d_i . ² Up to a permutation , since the training is unsupervised . )
This is because neural networks are flexible , and no constraint is imposed on the clustering function to force the result to respect the geometry of the original data . One example of this is shown in the left panel of Figure 1 , where a deep clustering method outputs a rather arbitrary clustering for the double-spiral data . To make the clusters learnable for a neural network , one needs to introduce constraints explicitly . In the example in Figure 1 , one can easily reason that , to be able to separate the two spirals from each other , the clustering function needs to ensure that all close neighbors of a point from the data distribution are assigned to the same cluster , which is essentially the same assumption implicitly made in locality-based manifold clustering algorithms ( Abdolali & Gillis , 2021 ) . We formalize the notion of constraints as a constraint functional D ( f ) ( for a specific data distribution ) with the following property : all cluster assignment functions f that make D attain its minimum value ( assumed to be 0 ) and also cluster the data points into the ground-truth number of clusters will correctly cluster all data points . For example , we can construct a functional D that takes value D = 0 for all clustering functions that satisfy the locality constraint on the dataset , and D > 0 otherwise . This notion of constraint functional is completely general : for example , linear subspace clustering is recovered if the constraint functional takes value 0 if and only if the found clusters are linear subspaces . In practice , one usually can not optimize the neural network clustering function f subject to D = 0 , since the correct solution would have to be found at initialization . A more practical way to use the constraint functional is to use the relaxed objective with weighting λ : L ( f ) = L_clst ( f ) + λ · D ( f ) ( 1 ) where L_clst is some objective function that forces f to cluster the dataset .
With a suitable λ , optimizing this objective leads to learning the correct clusters , by the assumption above . To achieve manifold clustering , one just needs to find the appropriate constraint functional . | The authors develop a method and training procedure for manifold clustering problems. The proposed solution is inspired by information-theoretic methods, namely maximal coding rate reduction. The authors also support their claims with empirical results. | SP:d6c1148b4f3044ea3ff8a2dc07981a539fd9ee34 |
Neural Manifold Clustering and Embedding | Given a union of non-linear manifolds , non-linear subspace clustering or manifold clustering aims to cluster data points based on manifold structures and also to learn to parameterize each manifold as a linear subspace in a feature space . Deep neural networks have the potential to achieve this goal under highly non-linear settings given their large capacity and flexibility . We argue that achieving manifold clustering with neural networks requires two essential ingredients : a domain-specific constraint that ensures the identification of the manifolds , and a learning algorithm for embedding each manifold to a linear subspace in the feature space . This work shows that many constraints can be implemented by data augmentation . For subspace feature learning , the Maximal Coding Rate Reduction ( MCR2 ) objective can be used . Putting them together yields Neural Manifold Clustering and Embedding ( NMCE ) , a novel method for general-purpose manifold clustering , which significantly outperforms autoencoder-based deep subspace clustering and achieves state-of-the-art performance on several important benchmarks . Further , on more challenging natural image datasets , NMCE can also outperform other algorithms specifically designed for clustering . Qualitatively , we demonstrate that NMCE learns a meaningful and interpretable feature space . As the formulation of NMCE is closely related to several important self-supervised learning ( SSL ) methods , we believe this work can help build a deeper understanding of SSL representation learning . 1 INTRODUCTION . Here we investigate unsupervised representation learning , which aims to learn structures ( features ) from data without the use of any labels . If the data lie in a linear subspace , the subspace can be extracted by principal component analysis ( PCA ) ( Jolliffe , 1986 ) , one of the most basic forms of unsupervised learning .
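As a concrete illustration of the single-subspace baseline mentioned above, PCA via the SVD recovers the subspace direction exactly for rank-one data. This is a generic sketch, not the paper's code; the data construction is illustrative:

```python
import numpy as np

def pca(X, k):
    """Return the top-k principal directions and the projected coordinates.

    X: (n, d) data matrix, one sample per row.
    """
    Xc = X - X.mean(axis=0)                      # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                          # (k, d) principal directions
    coords = Xc @ components.T                   # (n, k) low-dim embedding
    return components, coords

# Data sampled from a 1-D linear subspace embedded in R^3.
rng = np.random.default_rng(0)
t = rng.normal(size=(100, 1))
direction = np.array([[1.0, 2.0, 2.0]]) / 3.0    # unit vector spanning the subspace
X = t @ direction
comps, Z = pca(X, 1)
```

Because the data are exactly rank one after centering, the top principal direction coincides (up to sign) with the spanning vector.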
When the data occupy a union of several linear subspaces , subspace clustering ( SC ) ( Vidal et al. , 2016 ) is needed to cluster each data point to a subspace as well as to estimate the parameters of each subspace . Here we are concerned with even more challenging scenarios , when data points come from a union of several non-linear low-dimensional manifolds . In such scenarios , the clustering problem can be formulated as follows ( Elhamifar & Vidal , 2011 ) : Task 1 . Manifold Clustering and Embedding : Given that the data points come from a union of low-dimensional manifolds , we shall segment the data points based on their corresponding manifolds , and obtain a low-dimensional embedding for each manifold . Various methods have been developed to solve this problem ( Abdolali & Gillis , 2021 ) , but it is still an open question how to use neural networks effectively in manifold clustering problems ( Haeffele et al. , 2020 ) . In this paper , we propose neural manifold clustering and embedding ( NMCE ) , which follows three principles : 1 ) The clustering and representation should respect a domain-specific constraint , e.g . local neighborhoods , local linear interpolation , or data augmentation invariances . 2 ) The embedding of a particular manifold shall not collapse . 3 ) The embeddings of identified manifolds shall be linearized and separated , i.e . they occupy different linear subspaces . We achieve 1 ) using data augmentations , and achieve 2 ) and 3 ) with the maximal coding rate reduction ( MCR2 ) subspace feature learning objective ( Yu et al. , 2020 ) . This work makes the following specific contributions : 1 . We combine data augmentation with MCR2 to yield a novel algorithm for general-purpose manifold clustering and embedding ( NMCE ) . We also discuss connections between the algorithm and self-supervised contrastive learning . 2 .
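The MCR2 objective invoked above can be sketched in a few lines. The formulas follow my reading of Yu et al. (2020), not the authors' code; the precision parameter `eps` and the toy data in the example are illustrative:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 logdet(I + d/(m * eps^2) Z Z^T) for Z of shape (d, m)."""
    d, m = Z.shape
    M = np.eye(d) + (d / (m * eps ** 2)) * (Z @ Z.T)
    return 0.5 * np.linalg.slogdet(M)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (m_j / m) R(Z_j), with hard cluster labels.

    Large when the clusters occupy diverse (e.g. orthogonal) subspaces,
    and zero when all clusters span the same direction.
    """
    m = Z.shape[1]
    compressed = sum(
        (np.sum(labels == j) / m) * coding_rate(Z[:, labels == j], eps)
        for j in np.unique(labels)
    )
    return coding_rate(Z, eps) - compressed
```

Maximizing this rate reduction over both the features and the assignments, subject to the data-augmentation constraint, is the core mechanism the paper builds on.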
We demonstrate that NMCE achieves state-of-the-art performance on standard subspace clustering benchmarks , and can outperform the best clustering algorithms on more challenging high-dimensional image datasets like CIFAR-10 and CIFAR-20 . Further , empirical evaluation suggests that our algorithm also learns a meaningful feature space . 2 RELATED WORK . Manifold Learning . In classical manifold learning , the goal is to map the manifold-structured data points to a low-dimensional representation space such that the manifold structure is preserved . There are two key ingredients : 1 ) Choosing a geometric property of the original data space to be preserved , for example the local Euclidean neighborhood ( Belkin & Niyogi , 2003 ) or linear interpolation by neighboring data points ( Roweis & Saul , 2000 ) . 2 ) The embedding should not collapse to a trivial solution . To avoid the trivial solution , the variance of the embedding space is typically constrained in spectral-based manifold learning methods . Manifold Clustering and Embedding . When the data should be modeled as a union of several manifolds , manifold clustering is needed in addition to manifold learning . When these manifolds are linear , subspace clustering algorithms ( Ma et al. , 2007 ; Elhamifar & Vidal , 2013 ; Vidal et al. , 2016 ) can be used . When they are non-linear , manifold clustering and embedding methods have been proposed . They generally fall into three categories ( Abdolali & Gillis , 2021 ) : 1 . Locality preserving . 2 . Kernel based . 3 . Neural network based . Locality-preserving techniques implicitly assume that the manifolds are smooth and sampled densely ( Souvenir & Pless , 2005 ; Elhamifar & Vidal , 2011 ; Chen et al. , 2018 ) . Additionally , a smoothness assumption can be employed ( Gong et al. , 2012 ) . Our method generalizes those techniques by realizing them with geometric constraints .
The success of kernel-based techniques depends strongly on the suitability of the underlying kernel , and generally requires a representation of the data in a space of higher dimension than the data space ( Patel & Vidal , 2014 ) . Deep subspace clustering methods , such as Ji et al . ( 2017 ) ; Zhang et al . ( 2019 ) ; Zhou et al . ( 2018 ) , jointly perform linear subspace clustering and representation learning of the data , and have the potential to handle high-dimensional data effectively . However , it has been shown that most performance gains obtained by those methods should be attributed to an ad-hoc post-processing step applied to the self-expression matrix ; using neural networks provides only a very marginal gain compared to clustering the raw data directly with linear SC ( Haeffele et al. , 2020 ) . Our work differs from those techniques mainly in two aspects : i ) While most of the previous methods were generative ( autoencoders , GANs ) , our loss function is defined in the latent embedding space and is best understood as a contrastive method . ii ) While previous methods use self-expression-based SC to guide feature learning , ours uses MCR2 to learn the subspace features . Recently , some deep SC techniques have also applied data augmentation ( Sun et al. , 2019 ; Abavisani et al. , 2020 ) . However , in those works , data augmentation played a complementary role of improving performance ; in our method , data augmentation plays a central role in enabling the identification of the clusters . Self-Supervised Representation Learning . Recently , self-supervised representation learning has achieved tremendous success with deep neural networks . Similar to manifold clustering and embedding , there are two essential ingredients : 1 ) Data augmentations are used to define the domain-specific invariance . 2 ) The latent representation should not collapse . The second requirement can be achieved either by contrastive learning ( Chen et al
, 2020 ) , a momentum encoder ( He et al. , 2020 ; Grill et al. , 2020 ) , or a Siamese network structure ( Chen & He , 2021 ) . More directly related to our work is Tao et al . ( 2021 ) , which proposed feature orthogonalization and decorrelation alongside contrastive learning . Recently , variance regularization alone was also successfully used to achieve principle 2 ) ( Zbontar et al. , 2021 ; Bardes et al. , 2021 ) , attaining state-of-the-art SSL representation performance . Part of our method , the total coding rate ( TCR ) objective , achieves a similar effect ; see the discussion in Appendix A.6 . However , beyond self-supervised features , our algorithm additionally shows strong clustering performance , and directly learns a meaningful latent space . The simultaneous manifold clustering and embedding in NMCE is also related to online deep clustering methods ( Caron et al. , 2018 ; 2020 ) . Also notable is Deshmukh et al . ( 2021 ) , where the concept of population consistency is closely related to the constraint functional we discuss . Clustering with Data Augmentation . Our method uses data augmentation to ensure correct clustering of the training data . Although not explicitly pointed out , this is also the case for other clustering techniques ( Shen et al. , 2021 ; Tsai et al. , 2020 ; Li et al. , 2021 ) . Our understanding of data augmentation is also consistent with works that specifically study the success of data augmentations ( HaoChen et al. , 2021 ; von Kügelgen et al. , 2021 ) . 3 NEURAL MANIFOLD CLUSTERING AND EMBEDDING . 3.1 PROBLEM SETUP . Assume data points x_i ∈ R^d are sampled from a union ∪_{j=1}^n X_j of manifolds X_1 , X_2 , ... , X_n.¹ As stated in Task 1 , the goal of manifold clustering and embedding is to assign each data point to the corresponding manifold ( clustering ) , as well as to learn a coordinate for each manifold ( manifold learning ) .
To achieve this goal , we use a neural network f , which learns to map a data point x to a feature embedding z ∈ R^{d_emb} and a cluster assignment c ∈ [ 1 , n ] , i.e . z , c = f ( x ) . The cluster assignment shall be equal to the ground-truth manifold assignment,² and z should parameterize the coordinates of the corresponding manifold X_c . To make the feature space easier to work with , one can enforce the additional separability requirement that for any X_j , X_k with j ≠ k , the feature vectors are perpendicular , Z_j ⊥ Z_k . Here Z_j denotes the embedding feature vectors of data points in X_j , and we define perpendicularity between two sets in the following fashion : if Z̃_j ⊆ Z_j and Z̃_k ⊆ Z_k are such that ∀ z_j ∈ Z̃_j , z_k ∈ Z̃_k we have z_j · z_k ≠ 0 , then either Z̃_j or Z̃_k has zero measure . In the following , we first argue that to make clustering possible with a neural network , one should define an additional geometric constraint that makes the manifold clusters identifiable . Second , we discuss how to implement the required geometric constraints and combine them with a recently proposed joint clustering and subspace learning algorithm , MCR2 ( Yu et al. , 2020 ) , to achieve neural manifold clustering and embedding ( NMCE ) . 3.2 CLUSTERING ALWAYS INVOLVES IMPLICIT ASSUMPTIONS . Even the simplest clustering algorithms rely on implicit assumptions . For example , in k-means clustering , the implicit assumption is that the clusters in the original data space are continuous in terms of L2 distance . For linear SC , the assumption is that data points are collinear within each cluster . If a neural network is trained on example data to learn a cluster assignment function c = f ( x ) , the resulting clustering will be arbitrary and will not resemble the solution of k-means or linear SC clustering . ( ¹ We do not consider topological issues here , and assume that all manifolds are homeomorphic to R^{d_i} for some d_i . ² Up to a permutation , since the training is unsupervised . )
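The perpendicularity requirement on embedded manifolds described above can be checked numerically: the largest dot product between any pair of embeddings from the two sets should be near zero. A small sketch with hypothetical embeddings (the construction and names are mine):

```python
import numpy as np

def max_cross_dot(Zj, Zk):
    """Largest |z_j . z_k| over all pairs of rows from the two embedding
    sets; it is ~0 exactly when Z_j and Z_k occupy orthogonal subspaces."""
    return float(np.abs(Zj @ Zk.T).max())

rng = np.random.default_rng(1)
# Hypothetical embeddings: manifold j lives in span(e1, e2),
# manifold k lives in span(e3, e4) of a 4-D feature space.
Zj = np.hstack([rng.normal(size=(50, 2)), np.zeros((50, 2))])
Zk = np.hstack([np.zeros((50, 2)), rng.normal(size=(50, 2))])
```

In practice a learned embedding only approximates this, so one would compare the statistic against a small tolerance rather than exact zero.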
This is because neural networks are flexible , and no constraint is imposed on the clustering function to force the result to respect the geometry of the original data . One example of this is shown in the left panel of Figure 1 , where a deep clustering method outputs a rather arbitrary clustering for the double-spiral data . To make the clusters learnable for a neural network , one needs to introduce constraints explicitly . In the example in Figure 1 , one can easily reason that , to be able to separate the two spirals from each other , the clustering function needs to ensure that all close neighbors of a point from the data distribution are assigned to the same cluster , which is essentially the same assumption implicitly made in locality-based manifold clustering algorithms ( Abdolali & Gillis , 2021 ) . We formalize the notion of constraints as a constraint functional D ( f ) ( for a specific data distribution ) with the following property : all cluster assignment functions f that make D attain its minimum value ( assumed to be 0 ) and also cluster the data points into the ground-truth number of clusters will correctly cluster all data points . For example , we can construct a functional D that takes value D = 0 for all clustering functions that satisfy the locality constraint on the dataset , and D > 0 otherwise . This notion of constraint functional is completely general : for example , linear subspace clustering is recovered if the constraint functional takes value 0 if and only if the found clusters are linear subspaces . In practice , one usually can not optimize the neural network clustering function f subject to D = 0 , since the correct solution would have to be found at initialization . A more practical way to use the constraint functional is to use the relaxed objective with weighting λ : L ( f ) = L_clst ( f ) + λ · D ( f ) ( 1 ) where L_clst is some objective function that forces f to cluster the dataset .
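One concrete instance of a constraint functional D(f) for the locality case discussed above is the fraction of nearest-neighbor pairs that a clustering assigns to different clusters. The k-NN form and the function name are my illustrative choices, not the paper's exact functional:

```python
import numpy as np

def locality_violation(X, labels, k=3):
    """A concrete constraint functional D(f): the fraction of k-nearest-
    neighbor pairs assigned to different clusters. D = 0 exactly when every
    point shares its cluster with all k of its nearest neighbors."""
    X, labels = np.asarray(X, dtype=float), np.asarray(labels)
    n = len(X)
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)      # a point is not its own neighbor
    violations = 0
    for i in range(n):
        nbrs = np.argsort(dists[i])[:k]
        violations += int(np.sum(labels[nbrs] != labels[i]))
    return violations / (n * k)
```

A relaxed training objective in the spirit of Eqn. 1 would then add this penalty with weight λ to a clustering loss.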
With a suitable λ , optimizing this objective leads to learning the correct clusters , by the assumption above . To achieve manifold clustering , one just needs to find the appropriate constraint functional . | This work proposes a general manifold clustering algorithm called Neural Manifold Clustering and Embedding (NMCE), which utilizes Maximal Coding Rate Reduction (MCR2) as the objective function and data augmentation to enforce constraints. Because even the toy experiment is difficult to optimize with the full NMCE objective, a multi-stage training procedure is applied, with the first stage optimizing the Total Coding Rate (TCR), another self-supervised learning objective introduced in this paper. On the synthetic and real-world datasets COIL20, COIL100, CIFAR-10, CIFAR-20 and STL-10, NMCE achieves comparable and sometimes better results than the baseline methods and alternative manifold clustering methods listed in the paper. | SP:d6c1148b4f3044ea3ff8a2dc07981a539fd9ee34 |
Graph Tree Neural Networks | 1 INTRODUCTION . In deep learning , various architectures have been designed , such as CNNs , RNNs , and GNNs ( LeCun et al. , 1989 ; Hopfield , 1982 ; Scarselli et al. , 2008 ; Wu et al. , 2020 ) . These networks show good performance in various fields such as image , text , and sound . However , most existing studies are focused on a specific dataset or task , and there are many tasks that humans can do but that are difficult to perform with neural networks . First , some of the difficulties with imitating the human nervous system are introduced below . ( C-i ) Sensory organs that process information for each data type exist at several starting points . ( C-ii ) A tree can not express the relationship between sibling nodes , and it is difficult to define a graph 's starting point and ending point . ( C-iii ) The structure that receives information is hierarchical and is freely integrated with various structures , but existing networks are designed to learn only specific tasks or datasets because their layers are fixed . ( C-iv ) The number of activated neurons and the processing depths differ depending on the data type and complexity , and sometimes information is entered and sometimes it is not . ( C-v ) Human neurons are very numerous and process various types of data . C-[ · ] denotes a characteristic . ( C-i ) Humans have various sensory organs from which they receive information such as sight , hearing , and smell , and transfer it to the cerebral cortex using different information-processing organs . The optic nerve begins with the visual cortex of the occipital lobe , and the auditory nerve begins with the auditory cortex of the temporal lobe ; this means that the two paths have different starting points . ( C-ii ) A graph can describe the relationship between any connected nodes , but it is not easy to define its start and end points .
A tree is a data structure whose direction can be defined by its leaf and root nodes . On the other hand , a tree structure can not express the relationship between sibling nodes . ( C-iii ) Information from different sensory organs is gathered in the association area . Typically , the posterior parietal cortex ( Malach et al. , 1995 ) , located at the top of the head as part of the parietal lobe , is where sensory information such as vision and hearing is fused and interpreted ; then , information is coordinated in the frontal lobe and humans act . ( C-iv ) DNNs also suffer from an over-fitting problem ( Widrow et al. , 1960 ) , and various attempts have been made to solve it ( Srivastava et al. , 2014 ) . In DNNs , simply making the layers deeper leads to over-fitting ( Dai et al. , 2017 ) . In particular , NasNet ( Zoph et al. , 2018 ) attempted to design layers automatically through reinforcement learning and RNNs . This means that there is an appropriate network depth depending on the complexity of the dataset ; we want to design a network that can adjust the depth of its layers according to the complexity of each data sample , not the dataset . ( C-v ) Also , the number of human neurons is about 85 billion ( Herculano-Houzel , 2009 ) , which is difficult to express in a network . Therefore , GTNNs are designed to solve these problems . A-[ · ] denotes our approach to C-[ · ] . ( A-i ) This network can perform information processing such as that done by sensory organs before their signals enter the cerebral cortex . In this paper , we express this sensory organ as a feature-extraction process ( Sec . 2.2 ) . Type information is stored with the data and converted into an input vector by a different feature-extraction process for each type . ( A-ii ) We propose a new data structure called the Graph Tree Node ( GTN ) and the Graph Tree ( GT ) .
A graph tree can express starting and ending points using leaf and root nodes , as well as the relationships among sibling nodes . In addition , we can modify existing tree datasets at low cost . ( A-iii ) Subtrees of different types are merged and make a final decision at the root node ; this structure is difficult to express with the existing forward-learning method . Therefore , we used a recursive-learning method ( Goller & Kuchler , 1996 ) . This methodology has been effective in analyzing the semantics of programming source code or natural language ( Socher et al. , 2013 ; Mou et al. , 2014 ) . We call the recursive-convolution methodology used in GTNNs the depth-first convolution ( DFC ) methodology ( Sec . 2.3 ) . DFC is a convolution method in which subtrees originating from different leaf nodes are integrated in a bottom-up manner to represent hierarchical and relational information . GTNNs can simultaneously learn various types of datasets through the recursive-learning method because the graph tree data structure can express various model structures . ( A-iv ) GTNNs perform data-driven learning in which the number of convolutions varies according to the depth of the tree . Instead of using fixed sequential layers , we create a graph tree for each data sample and learn according to the tree 's structure . If we modify this network further , we can adjust the depth according to the type and complexity of each data sample . In addition , we introduce two models of GTNNs , called graph tree convolutional networks ( GTC ) and graph tree recursive networks ( GTR ) , and these models are mathematically related to the MLP ( Hinton et al. , 2006 ) and the RNN : in special cases , the fully connected ( FC ) layer of an MLP corresponds to the level layer of GTC , and the time step of an RNN corresponds to the depth of GTR .
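The bottom-up depth-first convolution idea can be sketched as a recursion over graph tree nodes. Here `embed` and `merge` are hypothetical stand-ins for the learned, type-selected modules, and the children adjacency matrix A_c is omitted for brevity:

```python
import numpy as np

class GTN:
    """Minimal graph tree node for this sketch: input x, type tau, and a
    list of child nodes (the adjacency matrix A_c is omitted here)."""
    def __init__(self, x, tau, children=()):
        self.x, self.tau, self.children = x, tau, list(children)

def depth_first_convolve(node, embed, merge):
    """Depth-first convolution (DFC): recursively embed subtrees bottom-up,
    then merge the children's vectors into the parent's own embedding. The
    recursion depth, and hence the amount of computation, follows the tree."""
    child_vecs = [depth_first_convolve(c, embed, merge) for c in node.children]
    own = embed(node.x, node.tau)
    return merge(own, child_vecs) if child_vecs else own

# Toy stand-ins: embed a scalar as a 1-vector, merge by summing.
embed = lambda x, tau: np.array([float(x)])
merge = lambda own, child_vecs: own + sum(child_vecs)
root = GTN(0, "root", [GTN(1, "image"), GTN(2, "text")])
```

With learned modules in place of the toy lambdas, the same recursion lets subtrees of different data types be integrated at the root.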
( A-v ) The number of human neurons is enormous , and it is difficult to express all of their structures in a network ; on the other hand , this network can represent multiple neurons because it can perform recursive end-to-end learning according to the amount of information . This network theory holds that `` units of information have a relationship in the form of a graph , then become a bigger unit of information , and have a relationship with other units of information . At this point , the unit of information is a set of neurons , and we can express it as a vector with GTNN . '' To demonstrate the performance of this network , we conducted three experiments using several benchmark datasets ( image ( MNIST ( LeCun et al. , 1998 ) ) , sound ( Speech Commands ( Warden , 2018 ) ) , text ( IMDB ( Maas et al. , 2011 ) , SST ( Socher et al. , 2013 ) ) , graph ( Dou et al. , 2021 ) ) . The first experiment verifies whether feature-extraction networks and association networks can be learned together : we compared the performance of existing networks trained separately on image ( LeNet-5 ( LeCun et al. , 1989 ) ) , sound ( M5 ( Dai et al. , 2017 ) ) , and text ( CNN ( Kim , 2014 ) ) data with the performance obtained by connecting these networks into a GTNN as feature extractors and training them simultaneously . In the second experiment , one or three types of data were placed into a GT , GTNNs learned these GTs , and we verified whether the output contained all the information inside the GT . In the third experiment , we verify whether data of various structures ( image , sound , tree , graph ) can be learned , and we examine plots of the learning process in Appendix E .
As a result , the network was trained without significant performance degradation compared to training the existing networks separately , and all information on the tree could be embedded as a vector . 2 GRAPH TREE NEURAL NETWORKS . In this section , we propose the graph tree architecture . The architecture consists of three parts : ( i ) defining the data structures of the graph tree node ( GTN ) and the graph tree ( GT ) ; ( ii ) defining the feature-extraction ( FE ) process ; ( iii ) defining the GTNN model . 2.1 GRAPH TREE DATA STRUCTURE . The graph tree ( GT ) is a data structure that can express relational and hierarchical information . x ( Input ) – we can store data such as images , sound , text , or tabular data in each node . τ ( Type ) – the type of information matching the input x . The feature-extraction function ( ψ ) is selected by τ as in Eqn . 1 . A_c ( Children Adjacency Matrix ) – the relationship information that exists in a GTN . If the number of children is N , we can express it as A_c ∈ R^{N×N} . C ( Children ) – the child nodes of a GTN matching the nodes of A_c . If there is no matching child , it is replaced with GTN_∅ , which carries the initial hidden state 0⃗ instead . We can express this as C = { GTN_1 , GTN_2 , ... , GTN_N } . Thus GTN = { x , τ , A_c , C } , with GTN_i ∈ GT , and GTN_root denotes the root node of the GT . We define the relationships among child nodes for convenience of implementation : if we define the GTN in the above way , we can convert existing tree datasets into GT datasets without significantly modifying the tree structures that have previously been useful . 2.2 FEATURE EXTRACTION NETWORKS & TYPE BIAS . x⃗ = Ψ ( x , τ ) = [ ψ_τ ( x ) , onehot ( τ ) ] ( 1 ) The input x and type τ exist together in a GTN , and the feature-extraction process for x is selected by τ as in Eqn . 1 .
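Eqn. 1 can be sketched directly: the type selects an extractor, and the type's one-hot vector is appended to the features. The `psi` summary function and the type inventory here are toy stand-ins of my own, not the paper's learned extractors:

```python
import numpy as np

TYPES = ["image", "sound", "text"]      # hypothetical type inventory

def psi(x, tau):
    """Stand-in for the type-specific extractor psi_tau (a CNN, M5, etc.
    in the paper); here a fixed-width numeric summary so the sketch runs."""
    v = np.asarray(x, dtype=float).ravel()
    return np.array([v.mean(), v.std(), float(v.size)])

def Psi(x, tau):
    """Eqn. 1: x_vec = [psi_tau(x), onehot(tau)]. The appended one-hot
    lets the next layer's weights act as a per-type bias, so no separate
    '+ bias' operation is needed."""
    onehot = np.eye(len(TYPES))[TYPES.index(tau)]
    return np.concatenate([psi(x, tau), onehot])
```

Because absent inputs can then be represented by all-zero vectors, the one-hot type slot is also what distinguishes a real (but zero-valued) input from an empty slot during batching.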
We express the feature-extraction function as Ψ ; this function converts x to −→x . The one-hot vector has the effect of providing a different bias value for each type ( τ ) : the weight parameter corresponding to the type one-hot vector represents the bias value for that type , so the activation threshold is adjusted per type-bias . This encodes both the data type and the nerve from which the information is transmitted . In addition , we do not need to perform explicit adding operations ( + ) for the bias , so the empty space of a GT can be filled with a zero vector when batch learning is performed . The dictionary structure is very useful for batch learning ; we describe the methodology in appendix A . This methodology divides mini-batch samples by type , and the type-mini-batch size varies each time . Therefore , we recommend using weight standardization ( Qiao et al. , 2019 ) or group normalization ( Wu & He , 2018 ) rather than batch normalization ( Ioffe & Szegedy , 2015 ) , which is affected by the batch size . We describe the details of the network we used in appendix C . | The paper introduces a new neural network architecture called a "Graph Tree Neural Network". This architecture is inspired by several properties of the human brain: 1. it's multi-modal; 2. it allows for cross-input reasoning; 3. it can adapt its internal structure to the current input; 4. it can adapt the amount of computation per input; and 5. the human brain is larger than most neural networks. The architecture itself relies on mode-specific representation components: later, in the experiments, they introduce targeted architectures for images, speech and text. These representations are then consumed by a convolutional logic that allows full communication within the different components of the network. Finally, these components can be recursively combined to yield a full graph tree neural network. 
This architecture is then evaluated on several domains covering speech, vision and NLP datasets, showing consistent improvement over the selected baselines. | SP:bfc029f8ae6271d156f507cf3886bd4079a91d55 |
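The type-selected feature extraction of Eqn . 1 , −→x = Ψ ( x , τ ) = [ ψτ ( x ) , onehot ( τ ) ] , can be illustrated as follows . The extractor lambdas and the TYPES list are toy stand-ins of our own for the paper ' s LeNet-5 / M5 / text-CNN extractors :

```python
TYPES = ["image", "sound", "text"]

def onehot(tau):
    return [1.0 if t == tau else 0.0 for t in TYPES]

# stand-in feature extractors psi_tau, one per type (toy lambdas in place of
# LeNet-5 for images, M5 for sound, and a text CNN)
EXTRACTORS = {
    "image": lambda x: [float(v) for row in x for v in row][:4],
    "sound": lambda x: [float(v) for v in x][:4],
    "text":  lambda x: [float(len(x)), 0.0, 0.0, 0.0],
}

def Psi(x, tau):
    # Eqn 1: concatenate psi_tau(x) with the type one-hot vector; the weights
    # attached to the one-hot entries act as a per-type bias downstream
    return EXTRACTORS[tau](x) + onehot(tau)

vec = Psi([[1, 1], [1, 1]], "image")   # 4 features + 3 type entries
```

Appending the one-hot vector rather than adding a learned bias term is what lets an empty GT slot be represented by a plain zero vector during batch learning.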
Graph Tree Neural Networks | 1 INTRODUCTION . In deep learning , various architectures have been designed , such as CNN , RNN , and GNN ( LeCun et al. , 1989 ; Hopfield , 1982 ; Scarselli et al. , 2008 ; Wu et al. , 2020 ) . These networks show good performance in various fields such as image , text , and sound . However , most existing studies focus on a specific dataset or task , and there are many tasks that humans can do but that are difficult to perform with neural networks . First , some of the difficulties with imitating human neural networks are introduced below . ( C-i ) Sensory organs that process information for each data type exist at several starting points . ( C-ii ) A tree can not express the relationship between sibling nodes , and it is difficult to define a graph ’ s starting point and ending point . ( C-iii ) The structure that receives information is hierarchical and freely integrated with various structures , but existing networks are designed to learn only specific tasks or datasets because their layers are fixed . ( C-iv ) The number of activated neurons and the processing depths differ depending on the data type and complexity , and sometimes information is received and sometimes not . ( C-v ) Human neurons are very numerous and process various types of data . C- [ · ] denotes these characteristics . ( C-i ) Humans have various sensory organs from which they receive information such as sight , hearing , and smell , and transfer it to the cerebral cortex using different information-processing organs . The optic nerve begins at the visual cortex of the occipital lobe , and the auditory nerve begins at the auditory cortex of the temporal lobe ; this means that the two paths have different starting points . ( C-ii ) A graph can describe relationships depending on whether nodes are connected , but it is not easy to define its start and end points . 
A tree is a data structure that can define direction with its leaf and root nodes ; on the other hand , a tree structure can not express the relationship between sibling nodes . ( C-iii ) Information from different sensory organs is gathered in the association area . Typically , the posterior parietal cortex ( Malach et al. , 1995 ) is located at the top of the head as part of the parietal lobe , where sensory information such as vision and hearing is fused and interpreted ; then , information is coordinated in the frontal lobe and humans act . ( C-iv ) There is also an over-fitting problem in DNNs ( Widrow et al. , 1960 ) , and various attempts have been made to solve it ( Srivastava et al. , 2014 ) . In a DNN , simply making the layers deeper leads to over-fitting ( Dai et al. , 2017 ) . In particular , NasNet ( Zoph et al. , 2018 ) has attempted to design layers automatically through reinforcement learning and RNNs . This means that there is an appropriate network depth depending on the complexity of the dataset ; we want to design a network that can adjust the depth of its layers according to the complexity of each data sample , not the dataset . ( C-v ) Also , the number of human neurons is about 85 billion ( Herculano-Houzel , 2009 ) , which is difficult to express in a network . Therefore , GTNNs are designed to solve these problems . A- [ · ] denotes our approach to C- [ · ] . ( A-i ) This network can perform information processing such as that done by sensory organs before their signals enter the cerebral cortex . In this paper , we express this sensory organ as a feature-extraction process ( sec.2.2 ) . Type information is stored with the data and converted into an input vector by a different feature-extraction process for each type . ( A-ii ) We propose new data structures called the Graph Tree Node ( GTN ) and Graph Tree ( GT ) . 
A graph tree can be expressed using leaf and root nodes as the starting and ending points , as well as the relationships among sibling nodes . In addition , we can modify an existing tree dataset at low cost . ( A-iii ) Subtrees of different types are merged to make a final decision at the root node ; this structure is difficult to express with the existing forward-learning method . Therefore , we used a recursive-learning method ( Goller & Kuchler , 1996 ) . This methodology has been effective in analyzing the semantics of programming source code or natural language ( Socher et al. , 2013 ; Mou et al. , 2014 ) . We call the recursive-convolution methodology used in GTNNs the depth-first convolution ( DFC ) methodology ( sec.2.3 ) . DFC is a convolution method in which subtrees originating from different leaf nodes are integrated in a bottom-up manner to represent hierarchical and relational information . GTNNs can simultaneously learn various types of datasets through the recursive-learning method because the graph tree data structure can express various model structures . ( A-iv ) GTNNs perform data-driven learning in which the number of convolutions varies according to the depth of the tree . Instead of using fixed sequential layers , we create a graph tree for each data sample and learn according to the tree ’ s structure . If we modify this network further , we can adjust the depth according to the type and complexity of each data sample . In addition , we introduce two models of GTNNs called graph tree convolutional networks ( GTC ) and graph tree recursive networks ( GTR ) , and these models are mathematically related to the MLP ( Hinton et al. , 2006 ) and the RNN : the fully connected ( FC ) layer in an MLP corresponds to the level layer of GTC , and time in an RNN corresponds to the depth of GTR in a special case . 
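The depth-first convolution of ( A-iii ) / ( A-iv ) can be sketched as a bottom-up recursion over the tree . The `Node` class and the `embed` / `combine` callables below are our own toy stand-ins , not the paper ' s implementation ; in the paper , `embed` would be a type-specific feature extractor and `combine` the learned association network :

```python
class Node:
    """Minimal stand-in for a graph tree node (a leaf carries x, an inner node has children)."""
    def __init__(self, x=None, children=None):
        self.x = x
        self.children = children or []

def dfc(node, embed, combine):
    """Depth-first convolution (sketch): embed leaves, then merge children bottom-up.
    The recursion depth follows each sample's tree, so computation is data-driven."""
    if not node.children:
        return embed(node.x)
    return combine([dfc(c, embed, combine) for c in node.children])

# toy run: leaves carry numbers, embedding is identity, merging averages the children
tree = Node(children=[Node(x=1.0), Node(children=[Node(x=2.0), Node(x=4.0)])])
out = dfc(tree, embed=lambda x: x, combine=lambda vs: sum(vs) / len(vs))
# bottom-up: inner subtree -> 3.0, then root -> (1.0 + 3.0) / 2 = 2.0
```

Because the number of `combine` calls equals the number of inner nodes, deeper or wider trees automatically receive more computation, which is the adaptive-depth behavior described in ( A-iv ).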
( A-v ) The number of human neurons is enormous , and it is difficult to express all of their structures in a network ; on the other hand , this network can represent multiple neurons because it can perform recursive end-to-end learning according to the amount of information . This network theory holds that `` units of information have a relationship in the form of a graph , then become a bigger unit of information , and have a relationship with other units of information . At this point , the unit of information is a set of neurons , and we can express it as a vector with GTNN . '' To demonstrate the performance of this network , we conducted three experiments on several benchmark datasets ( image ( MNIST ( LeCun et al. , 1998 ) ) , sound ( Speech Commands ( Warden , 2018 ) ) , text ( IMDB ( Maas et al. , 2011 ) , SST ( Socher et al. , 2013 ) ) , graph ( Dou et al. , 2021 ) ) . The first experiment verifies whether feature-extraction networks and association networks can be learned together : we compared the performance of existing networks that separately learned image ( LeNet-5 ( LeCun et al. , 1989 ) ) , sound ( M5 ( Dai et al. , 2017 ) ) , and text ( CNN ( Kim , 2014 ) ) with the performance obtained by connecting these networks as feature extractors into a GTNN and learning them simultaneously . In the second experiment , we stored one or three types of data in a GT , had GTNNs learn these GTs , and then verified whether the network ’ s output contained all the information inside the GT . In the third experiment , we verified whether data of various structures ( image , sound , tree , graph ) can be learned , and we plot the learning process in appendix E .
As a result , the network learned without significant degradation in performance compared to learning the existing networks separately , and all information on the tree could be embedded as a vector . 2 GRAPH TREE NEURAL NETWORKS . In this section , we propose the graph tree architecture . The architecture consists of three parts : ( i ) defining the data structure of the graph tree node ( GTN ) and designing the graph tree structure ( GT ) ; ( ii ) defining the feature-extraction ( FE ) process ; ( iii ) defining the GTNN model . 2.1 GRAPH TREE DATA STRUCTURE . The graph tree ( GT ) is a data structure that can express relational and hierarchical information . x : ( Input ) – We can store data such as images , sound , text , or tabular data in each node . τ : ( Type ) – This refers to the type of information matching the input x . The feature-extraction function ( ψ ) is selected by τ as in Eqn . 1 . Ac : ( Children Adjacency Matrix ) – This refers to the relationship information that exists among the children of a GTN . The number of children is N , and we can express it as Ac ∈ RN×N . C : ( Children ) – This refers to the child nodes of a GTN matching the nodes of Ac . If there is no matching child , it is replaced with GTN∅ , which carries the initial hidden state −→0 ( the zero vector ) instead . We can express this as C = { GTN1 , GTN2 , ... , GTNN } . We can express GTN = { x , τ , Ac , C } , GTNi ∈ GT . GTNroot denotes the root node of GT . The reason for defining the relationship among child nodes is the convenience of implementation : if we define a GTN in the above way , we can convert a tree dataset into a GT dataset without significantly modifying the tree structure that has previously proved useful . 2.2 FEATURE EXTRACTION NETWORKS & TYPE BIAS . −→x = Ψ ( x , τ ) = [ ψτ ( x ) , onehot ( τ ) ] ( 1 ) The input x and type τ exist together in a GTN , and the feature-extraction process for x is selected by τ as in Eqn . 1 .
We express the feature-extraction function as Ψ ; this function converts x to −→x . The one-hot vector has the effect of providing a different bias value for each type ( τ ) : the weight parameter corresponding to the type one-hot vector represents the bias value for that type , so the activation threshold is adjusted per type-bias . This encodes both the data type and the nerve from which the information is transmitted . In addition , we do not need to perform explicit adding operations ( + ) for the bias , so the empty space of a GT can be filled with a zero vector when batch learning is performed . The dictionary structure is very useful for batch learning ; we describe the methodology in appendix A . This methodology divides mini-batch samples by type , and the type-mini-batch size varies each time . Therefore , we recommend using weight standardization ( Qiao et al. , 2019 ) or group normalization ( Wu & He , 2018 ) rather than batch normalization ( Ioffe & Szegedy , 2015 ) , which is affected by the batch size . We describe the details of the network we used in appendix C . | In this paper, the authors propose Graph Tree Neural Network (GTNN), a new learning model that is structured as a graph tree (i.e., a tree with links between siblings) where each node has to process the data coming from its children. GTNN can take as input data of various types where each node will be processing the data differently according to the data type. The authors describe how (de)convolution and recurrent neural networks can be achieved within GTNN. Experiments were conducted on some classic deep learning datasets of three different data types: images (MNIST), language (IMDB), and sound (Speech Command). The goals of those experiments were to see whether GTNN can process various datasets of different types and if GTNN can produce an output vector that retains all the information coming from the different data types. | SP:bfc029f8ae6271d156f507cf3886bd4079a91d55 |
Object Pursuit: Building a Space of Objects via Discriminative Weight Generation | 1 INTRODUCTION . What are human infants and toddlers learning while they are manipulating a discovered object ? And how do such continual interaction and learning experiences , i.e. , objects being discovered and learned one by one , help develop the capability to understand scenes that consist of individual objects ? Inspired by these questions , we aim for training frameworks that enable an autonomous agent to continuously learn object-centric representations through self-supervised discovery and manipulation of objects , so that the agent can later use the learned representations for visual scene understanding . A majority of object-centric representation learning methods focus on encoding images or video clips into disentangled latent codes , each of which explains an entity in the scene , and together they should reconstruct the input . However , without explicit supervision and more sophisticated inductive biases beyond parsimony , the disentanglement usually has difficulties aligning with objects , especially for complex scenes . We leverage the fact that an autonomous agent can actively explore the scene , and propose that the data collected by manipulating a discovered object can serve as an important source for building inductive biases for object-level disentanglement . In our proposed framework , whenever an object is discovered by the agent , a dataset containing images and instance masks of this object can easily be sampled via interaction , which is far cheaper than annotating all the objects . Theoretically speaking , any function of the images induced by the discovered object could be a representation of the object . For example , let ϕ be an encoder implemented by a neural network , and let x be the image of an object ; then we can say that ϕ ( x ) is a representation of the object . 
Similarly , the encoder itself can also be a representation of this object since ϕ = argminϕ L ( ϕ , x ) , i.e. , ϕ is the output of an optimization procedure that takes the object ’ s images as input . We employ network weights as the object-centric representations . Specifically , the proposed method learns an object-centric representation from the data collected by manipulating a single object , through learning a latent code that can be translated into a neural network . The neural network is produced by a discriminative weight generation hypernetwork and is able to distinguish the represented object from anything else . In order to learn representations for objects that stream in one by one , the proposed framework is augmented with an object re-identification procedure to avoid learning seen objects . Moreover , we hypothesize that object representations are embedded in a low-dimensional manifold , so the proposed framework first checks whether a new object can be represented by learned objects ; if not , the new object will be learned as a base object serving the purpose of representing future objects , thus the name object pursuit . Furthermore , the proposed framework deals with the catastrophic forgetting of learned object representations by enforcing the hypernetwork to maintain the mapping between the learned representations and their corresponding network weights . In summary , our work makes the following contributions : 1 ) we propose a novel framework named object pursuit that can continuously learn object-centric representations using training data collected from interactions with individual objects , 2 ) we perform an extensive study to understand the pursuit dynamics and characterize its typical behaviors regarding the key design features , and 3 ) we analyze the learned object space , in terms of its succinctness and effectiveness in representing objects , and empirically demonstrate its potential for label efficient visual learning . 
2 RELATED WORK . Object-centric representation learning falls in the field of disentangled representation learning ( Higgins et al. , 2016 ; Kim & Mnih , 2018 ; Press et al. , 2019 ; Chen et al. , 2018b ; Karras et al. , 2019 ; Li et al. , 2020 ; Locatello et al. , 2020a ; Zhou et al. , 2021 ) . However , object-centric representations require that the disentangled latents correspond to objects in the scene . For example , ( Eslami et al. , 2016 ; Kosiorek et al. , 2018 ) model image formation as a structured generative process so that each component may represent an object in the generated image . One can also apply inverse graphics ( Yao et al. , 2018 ; Wu et al. , 2017 ) or spatial mixture models ( Greff et al. , 2017 ; 2019 ; Engelcke et al. , 2020b ) to decompose images into interpretable latents . Monet ( Burgess et al. , 2019 ) jointly predicts segmentation and representation with a recurrent variational auto-encoder . Capsule autoencoders ( Kosiorek et al. , 2019 ) are proposed to decompose images into parts and poses that can be arranged into objects . To deal with complex images or scenes , ( Yang et al. , 2020 ; Bear et al. , 2020 ) employ motion to encourage decomposition into objects . Besides motion , ( Klindt et al. , 2021 ) shows that the transition statistics can be informative about objects in natural videos . Similarly , ( Kabra et al. , 2021 ) infers object latents and frame latents from videos . Slot-attention ( Locatello et al. , 2020b ; Jiang et al. , 2020 ) employs an attention mechanism that aggregates features with similar appearance , while Giraffe ( Niemeyer & Geiger , 2021 ) factorizes the scene using neural feature fields . Even though better performance is achieved with more sophisticated network designs , results on scenes with complex geometry and appearance still lag behind . As shown in ( Engelcke et al. , 2020a ) , the reconstruction bottleneck has critical effects on the disentanglement quality . 
Instead of relying on reconstruction as a learning signal , our work calls for interactions that stimulate and collect training data from complex environments . Rehearsal-based continual learning . In general , continual learning methods can be divided into three streams : rehearsal-based , regularization-based , and expansion-based . Rehearsal-based methods manage buffers to replay past samples , in order to prevent forgetting knowledge of the preceding tasks . Regularization-based methods learn to regularize changes in the parameters of the models . Expansion-based methods aim to expand model architectures in a dynamic manner . Among these three types , rehearsal-based methods are widely used due to their simplicity and effectiveness ( Lüders et al. , 2016 ; Kemker & Kanan , 2017 ; Rebuffi et al. , 2017 ; Cha et al. , 2021 ; von Oswald et al. , 2019 ; Riemer et al. , 2018 ; Lopez-Paz & Ranzato , 2017 ; Buzzega et al. , 2020 ; Aljundi et al. , 2019 ; Chaudhry et al. , 2020 ; Parisi et al. , 2018 ) . Samples from previous tasks can be either the data themselves or the corresponding network activations on the data . For example , ( Shin et al. , 2017 ) proposes a dual-model architecture where training data from learned tasks can be sampled from a generative model , and ( Draelos et al. , 2017 ; Kamra et al. , 2017 ) propose sampling in the output space of an encoder for training tasks relying on an auto-encoder architecture . ICaRL ( Rebuffi et al. , 2017 ) allows adding new classes progressively based on the training samples with a small number of classes , while ( Pellegrini et al. , 2020 ; Li & Hoiem , 2017 ) store activation volumes at some intermediate layer to alleviate the computation and storage requirement . Co2L ( Cha et al. , 2021 ) proposes continual learning within the contrastive representation learning framework , and ( Balaji et al. 
, 2020 ) studies continual learning at large scale , where tasks in the input sequence are not limited to classification . Similar to the forgetting prevention component in our framework , von Oswald et al . ( 2019 ) applies a task-conditioned hypernetwork to rehearse the task-specific weight realizations . Please refer to ( Parisi et al. , 2019 ; Delange et al. , 2021 ) for a more comprehensive review on this subject . Hypernetwork . The goal of hypernetworks is to generate the weights of a target network , which is responsible for the main task ( Ha et al. , 2016 ; Krueger et al. , 2017 ; Chung et al. , 2016 ; Bertinetto et al. , 2016 ; Lorraine & Duvenaud , 2018 ; Sitzmann et al. , 2020 ; Nirkin et al. , 2021 ) . For example , ( Krueger et al. , 2017 ) proposes Bayesian hypernetworks to learn variational inference in neural networks , and ( Bertinetto et al. , 2016 ) proposes to learn the network parameters in one shot . HyperSeg ( Nirkin et al. , 2021 ) presents real-time semantic segmentation by employing a U-Net within a U-Net architecture , and ( Finn et al. , 2019 ) applies a hypernetwork to adapt to new tasks for continual lifelong learning . Moreover , ( Tay et al. , 2020 ) proposes a new transformer architecture that leverages task-conditioned hypernetworks for controlling its feed-forward layers , whereas ( Ma et al. , 2021 ) proposes hyper-convolution , which implicitly represents the convolution kernel as a function of kernel coordinates . Hypernetworks have shown great potential in different meta-learning settings ( Rusu et al. , 2018 ; Munkhdalai & Yu , 2017 ; Wang et al. , 2019 ) , mainly because hypernetworks are effective at compressing the primary networks ’ weights , as shown in ( Galanti & Wolf , 2020 ) . 3 METHOD . We consider an agent that can explore the environment and manipulate objects that are discovered in an unknown order . 
Suppose there are N objects in the scene , each of which randomly appears in an image x ∈ RH×W×3 , whose ground-truth instance segmentation mask is y ∈ RH×W×N . One can train a deep neural network that maps an image x to its mask y with a dataset D = { ( xi , yi ) } that consists of such paired training samples . However , sampling from the joint distribution p ( x , y ) can be extremely time-consuming , e.g. , someone may have to manually draw the instance masks for every object in an image . On the other hand , sampling from the marginals can be much more accessible through interactions . Let Dk be the dataset collected by observing an image xi and the corresponding binary mask of the k-th object yki ∈ RH×W , i.e. , Dk = { ( xi , yki ) } ∼ p ( x , yk ) , which is the marginal distribution obtained by integrating out other objects ’ masks in y . The goal of the proposed object pursuit framework is to learn object-centric representations from the data collected by continuously sampling the marginals . Next , we detail the representations used for objects ( as illustrated in Fig . 1 ) , and how we can learn them without catastrophic forgetting . 3.1 REPRESENTING OBJECTS VIA DISCRIMINATIVE WEIGHT GENERATION . In order to represent an object , one can compute any function of the data produced with this object . For example , the encoding of an image containing a specific object can be used to reconstruct the input image . Here we take a conjugate perspective : instead of asking the representation to store information about an object that is good for reconstruction , we propose that the object-centric representation of an object shall generate the mechanisms for performing certain downstream tasks on this object , e.g. , distinguishing this object from the others . Let ϕ be a segmentation network with learnable weights θ that maps an image to a binary mask , i.e. , ϕ : Θ × RH×W×3 → RH×W . 
Moreover , let ψ : ζ → Θ be the mapping from the latent space ζ to the weights of the segmentation backbone ϕ . We define the object-centric representation of an object o as a latent zo ∈ ζ such that : E( xi , yoi ) ∼ p ( x , yo ) [ ∆ ( ϕ ( ψ ( zo ) , xi ) , yoi ) ] ≥ τ , ( 1 ) where the expectation is computed according to p ( x , yo ) , i.e. , the marginal distribution of object o , and ∆ is a similarity measure between the prediction from ϕ and the sampled mask yo . In other words , zo is a representation of object o if the network weights generated from zo are capable of predicting high-quality instance masks for the object under the corresponding marginal distribution . The threshold τ is a scalar parameter that will be studied in the experiments . Now we detail the proposed object pursuit framework , which unifies object re-identification , succinctness of the representation space , and forgetting prevention , for continuously learning object representations . | This paper presents a new framework to learn object-centric representations. The model is composed of a segmentation network and a hypernetwork. The hypernetwork takes the latent representation of a certain object as input and predicts the weights for the segmentation network. The latent representation and the hypernetwork are jointly optimized to maximize the discrimination power of the segmentation network. The framework also introduces a sparse representation mechanism so that each object can be represented by some base object representations. | SP:9aea99f6d15886147e932ee5de55dc24a00cbb67 |
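Eq . ( 1 ) can be read as an acceptance test on a candidate latent : generate weights with ψ , segment with ϕ , and check whether the expected similarity ∆ clears the threshold τ . The sketch below is a toy stand-in of our own ( `psi` is the identity instead of a learned hypernetwork , `phi` a per-pixel threshold instead of a segmentation backbone , and IoU plays the role of ∆ ) :

```python
D = 4  # toy dimensionality (illustrative only)

# psi: latent z -> weights theta of the segmentation backbone.
# In the paper psi is a learned hypernetwork; here it is the identity map.
def psi(z):
    return list(z)

# phi: (theta, x) -> predicted binary mask (a toy per-pixel threshold unit)
def phi(theta, x):
    return [1.0 if xi * ti > 0 else 0.0 for xi, ti in zip(x, theta)]

def iou(pred, gt):   # similarity measure Delta (intersection over union)
    inter = sum(min(p, g) for p, g in zip(pred, gt))
    union = sum(max(p, g) for p, g in zip(pred, gt))
    return inter / union if union else 1.0

def represents(z, samples, tau=0.5):
    """Eq. (1): z represents object o if the expected Delta(phi(psi(z), x), y)
    over samples from the object's marginal p(x, y_o) is at least tau."""
    scores = [iou(phi(psi(z), x), y) for x, y in samples]
    return sum(scores) / len(scores) >= tau

# a latent whose generated weights segment the object perfectly passes the check
z = [1.0] * D
samples = [([1.0] * D, [1.0] * D), ([1.0] * D, [1.0] * D)]
ok = represents(z, samples)
```

The same check is what lets the pursuit loop decide whether a newly encountered object is already covered by the learned latents or must be added as a new base object.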
Object Pursuit: Building a Space of Objects via Discriminative Weight Generation | 1 INTRODUCTION . What are human infants and toddlers learning while they are manipulating a discovered object ? And how do such continual interaction and learning experiences , i.e. , objects being discovered and learned one by one , help develop the capability to understand scenes that consist of individual objects ? Inspired by these questions , we aim for training frameworks that enable an autonomous agent to continuously learn object-centric representations through self-supervised discovery and manipulation of objects , so that the agent can later use the learned representations for visual scene understanding . A majority of object-centric representation learning methods focus on encoding images or video clips into disentangled latent codes , each of which explains an entity in the scene , and together they should reconstruct the input . However , without explicit supervision and more sophisticated inductive biases beyond parsimony , the disentanglement usually has difficulties aligning with objects , especially for complex scenes . We leverage the fact that an autonomous agent can actively explore the scene , and propose that the data collected by manipulating a discovered object can serve as an important source for building inductive biases for object-level disentanglement . In our proposed framework , whenever an object is discovered by the agent , a dataset containing images and instance masks of this object can easily be sampled via interaction , which is far cheaper than annotating all the objects . Theoretically speaking , any function of the images induced by the discovered object could be a representation of the object . For example , let ϕ be an encoder implemented by a neural network , and let x be the image of an object ; then we can say that ϕ ( x ) is a representation of the object . 
Similarly , the encoder itself can also be a representation of this object since ϕ = argminϕ L ( ϕ , x ) , i.e. , ϕ is the output of an optimization procedure that takes the object ’ s images as input . We employ network weights as the object-centric representations . Specifically , the proposed method learns an object-centric representation from the data collected by manipulating a single object , through learning a latent code that can be translated into a neural network . The neural network is produced by a discriminative weight generation hypernetwork and is able to distinguish the represented object from anything else . In order to learn representations for objects that stream in one by one , the proposed framework is augmented with an object re-identification procedure to avoid learning seen objects . Moreover , we hypothesize that object representations are embedded in a low-dimensional manifold , so the proposed framework first checks whether a new object can be represented by learned objects ; if not , the new object will be learned as a base object serving the purpose of representing future objects , thus the name object pursuit . Furthermore , the proposed framework deals with the catastrophic forgetting of learned object representations by enforcing the hypernetwork to maintain the mapping between the learned representations and their corresponding network weights . In summary , our work makes the following contributions : 1 ) we propose a novel framework named object pursuit that can continuously learn object-centric representations using training data collected from interactions with individual objects , 2 ) we perform an extensive study to understand the pursuit dynamics and characterize its typical behaviors regarding the key design features , and 3 ) we analyze the learned object space , in terms of its succinctness and effectiveness in representing objects , and empirically demonstrate its potential for label efficient visual learning . 
2 RELATED WORK . Object-centric representation learning falls in the field of disentangled representation learning ( Higgins et al. , 2016 ; Kim & Mnih , 2018 ; Press et al. , 2019 ; Chen et al. , 2018b ; Karras et al. , 2019 ; Li et al. , 2020 ; Locatello et al. , 2020a ; Zhou et al. , 2021 ) . However , object-centric representations require that the disentangled latents correspond to objects in the scene . For example , ( Eslami et al. , 2016 ; Kosiorek et al. , 2018 ) model image formation as a structured generative process so that each component may represent an object in the generated image . One can also apply inverse graphics ( Yao et al. , 2018 ; Wu et al. , 2017 ) or spatial mixture models ( Greff et al. , 2017 ; 2019 ; Engelcke et al. , 2020b ) to decompose images into interpretable latents . Monet ( Burgess et al. , 2019 ) jointly predicts segmentation and representation with a recurrent variational auto-encoder . Capsule autoencoders ( Kosiorek et al. , 2019 ) are proposed to decompose images into parts and poses that can be arranged into objects . To deal with complex images or scenes , ( Yang et al. , 2020 ; Bear et al. , 2020 ) employ motion to encourage decomposition into objects . Besides motion , ( Klindt et al. , 2021 ) shows that the transition statistics can be informative about objects in natural videos . Similarly , ( Kabra et al. , 2021 ) infers object latents and frame latents from videos . Slot-attention ( Locatello et al. , 2020b ; Jiang et al. , 2020 ) employs an attention mechanism that aggregates features with similar appearance , while Giraffe ( Niemeyer & Geiger , 2021 ) factorizes the scene using neural feature fields . Even though better performance is achieved with more sophisticated network designs , results on scenes with complex geometry and appearance still lag behind . As shown in ( Engelcke et al. , 2020a ) , the reconstruction bottleneck has critical effects on the disentanglement quality . 
Instead of relying on reconstruction as a learning signal , our work calls for interactions that stimulate and collect training data from complex environments . Rehearsal-based continual learning . In general , continual learning methods can be divided into three streams : rehearsal-based , regularization-based , and expansion-based . Rehearsal-based methods manage buffers to replay past samples , in order to prevent forgetting of knowledge from the preceding tasks . The regularization-based methods learn to regularize the changes in parameters of the models . The expansion-based methods aim to expand model architectures in a dynamic manner . Among these three types , rehearsal-based methods are widely used due to their simplicity and effectiveness ( Lüders et al. , 2016 ; Kemker & Kanan , 2017 ; Rebuffi et al. , 2017 ; Cha et al. , 2021 ; von Oswald et al. , 2019 ; Riemer et al. , 2018 ; Lopez-Paz & Ranzato , 2017 ; Buzzega et al. , 2020 ; Aljundi et al. , 2019 ; Chaudhry et al. , 2020 ; Parisi et al. , 2018 ) . Samples from previous tasks can be either the raw data or the corresponding network activations on the data . For example , ( Shin et al. , 2017 ) proposes a dual-model architecture where training data from learned tasks can be sampled from a generative model , and ( Draelos et al. , 2017 ; Kamra et al. , 2017 ) propose sampling in the output space of an encoder for training tasks relying on an auto-encoder architecture . iCaRL ( Rebuffi et al. , 2017 ) allows new classes to be added progressively based on a small number of stored training samples , while ( Pellegrini et al. , 2020 ; Li & Hoiem , 2017 ) store activation volumes at some intermediate layer to alleviate the computation and storage requirement . Co2L ( Cha et al. , 2021 ) proposes continual learning within the contrastive representation learning framework , and ( Balaji et al. 
, 2020 ) studies continual learning at large scale , where tasks in the input sequence are not limited to classification . Similar to the forgetting prevention component in our framework , von Oswald et al . ( 2019 ) applies a task-conditioned hypernetwork to rehearse the task-specific weight realizations . Please refer to ( Parisi et al. , 2019 ; Delange et al. , 2021 ) for a more comprehensive review on this subject . Hypernetwork . The goal of hypernetworks is to generate the weights of a target network , which is responsible for the main task ( Ha et al. , 2016 ; Krueger et al. , 2017 ; Chung et al. , 2016 ; Bertinetto et al. , 2016 ; Lorraine & Duvenaud , 2018 ; Sitzmann et al. , 2020 ; Nirkin et al. , 2021 ) . For example , ( Krueger et al. , 2017 ) proposes Bayesian hypernetworks to learn variational inference in neural networks , and ( Bertinetto et al. , 2016 ) proposes to learn the network parameters in one shot . HyperSeg ( Nirkin et al. , 2021 ) presents real-time semantic segmentation by employing a U-Net within a U-Net architecture , and ( Finn et al. , 2019 ) applies a hypernetwork to adapt to new tasks for continual lifelong learning . Moreover , ( Tay et al. , 2020 ) proposes a new transformer architecture that leverages task-conditioned hypernetworks for controlling its feed-forward layers , whereas ( Ma et al. , 2021 ) proposes hyper-convolution , which implicitly represents the convolution kernel as a function of kernel coordinates . Hypernetworks have shown great potential in different meta-learning settings ( Rusu et al. , 2018 ; Munkhdalai & Yu , 2017 ; Wang et al. , 2019 ) , mainly because hypernetworks are effective at compressing the primary networks ’ weights , as shown in ( Galanti & Wolf , 2020 ) . 3 METHOD . We consider an agent that can explore the environment and manipulate objects which are discovered in an unknown order . 
Suppose there are N objects in the scene , each of which randomly appears in an image x ∈ R^{H×W×3} , whose ground-truth instance segmentation mask is y ∈ R^{H×W×N} . One can train a deep neural network that maps an image x to its mask y with a dataset D = { ( x_i , y_i ) } that consists of such paired training samples . However , sampling from the joint distribution p ( x , y ) can be extremely time-consuming , e.g. , someone may have to manually draw the instance masks for every object in an image . On the other hand , sampling from the marginals can be much more accessible through interactions . Let D_k be the dataset collected by observing an image x_i and the corresponding binary mask of the k-th object y_i^k ∈ R^{H×W} , i.e. , D_k = { ( x_i , y_i^k ) } ∼ p ( x , y^k ) , which is the marginal distribution obtained by integrating out the other objects ’ masks in y . The goal of the proposed object pursuit framework is to learn object-centric representations from the data collected by continuously sampling the marginals . Next , we detail the representations used for objects ( as illustrated in Fig . 1 ) , and how we can learn them without catastrophic forgetting . 3.1 REPRESENTING OBJECTS VIA DISCRIMINATIVE WEIGHT GENERATION . In order to represent an object , one can compute any function of the data produced with this object ; for example , the encoding of an image containing a specific object , which can then be used to reconstruct the input image . Here we take a conjugate perspective , instead of asking the representation to store information about an object that is good for reconstruction . We propose that the object-centric representation of an object shall generate the mechanisms for performing certain downstream tasks on this object , e.g. , distinguishing this object from the others . Let ϕ be a segmentation network with learnable weights θ that maps an image to a binary mask , i.e. , ϕ : Θ × R^{H×W×3} → R^{H×W} . 
Moreover , let ψ : ζ → Θ be the mapping from the latent space ζ to the weights of the segmentation backbone ϕ . We define the object-centric representation of an object o as a latent z_o ∈ ζ , such that : E_{( x_i , y_i^o ) ∼ p ( x , y^o )} ∆ ( ϕ ( ψ ( z_o ) , x_i ) , y_i^o ) ≥ τ , ( 1 ) where the expectation is computed according to p ( x , y^o ) , i.e. , the marginal distribution of object o , and ∆ is a similarity measure between the prediction from ϕ and the sampled mask y^o . In other words , z_o is a representation of object o if the network weights generated from z_o are capable of predicting high-quality instance masks regarding the object under the corresponding marginal distribution . The threshold τ is a scalar parameter that will be studied in the experiments . Now we detail the proposed object pursuit framework , which unifies object re-identification , succinctness of the representation space , and forgetting prevention , for continuously learning object representations . | This paper proposed a framework to continuously learn object-centric representations and formulates the problem by projecting the object representation space to the hypernetwork parameters for the segmentation task. The data are sampled from marginals where only one instance mask is collected in each scene. The representation learning incorporates the base representation learning, redundancy removal, and forgetting prevention. The experiments study the impact of τ under different metrics. | SP:9aea99f6d15886147e932ee5de55dc24a00cbb67 |
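A minimal numpy sketch of the representation criterion in Eq. ( 1 ) above : a hypernetwork ψ maps a latent z_o to the weights θ of a segmentation network ϕ , and z_o counts as a representation of object o if the expected mask similarity ∆ exceeds τ . The sizes , the single linear layer standing in for ψ , the per-pixel logistic classifier standing in for ϕ , and the IoU choice of ∆ are all illustrative assumptions , not the paper ’ s actual architecture .

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, FEAT_DIM = 8, 5        # illustrative sizes, not from the paper
N_THETA = FEAT_DIM + 1             # weights of a tiny per-pixel linear classifier

# Hypernetwork psi: latent z -> segmentation weights theta (a single linear map here).
Psi = rng.normal(size=(N_THETA, LATENT_DIM))

def psi(z):
    """Map a latent code z_o to network weights theta."""
    return Psi @ z

def phi(theta, x):
    """Tiny stand-in 'segmentation network': per-pixel logistic classifier.
    x: (H, W, FEAT_DIM) feature image -> (H, W) soft mask."""
    w, b = theta[:FEAT_DIM], theta[FEAT_DIM]
    logits = x @ w + b
    return 1.0 / (1.0 + np.exp(-logits))

def iou(pred, target, thr=0.5):
    """Similarity measure Delta: intersection-over-union of binarized masks."""
    p, t = pred > thr, target > 0.5
    union = np.logical_or(p, t).sum()
    return np.logical_and(p, t).sum() / union if union > 0 else 1.0

def represents(z, samples, tau=0.7):
    """z represents object o if the mean Delta over samples from p(x, y^o) is >= tau."""
    scores = [iou(phi(psi(z), x), y) for x, y in samples]
    return float(np.mean(scores)) >= tau
```

In the paper the latent , the hypernetwork , and the backbone are all trained jointly on D_k ; this sketch only shows how the criterion of Eq. ( 1 ) would be evaluated once they exist .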
Connecting Graph Convolution and Graph PCA | 1 INTRODUCTION . Graph neural networks ( GNNs ) are neural networks designed for the graph domain . Since the breakthrough of GCN ( Kipf & Welling , 2017 ) , which notably improved performance on the semi-supervised node classification problem , many GNN variants have been proposed , including GAT ( Veličković et al. , 2018 ) , GraphSAGE ( Hamilton et al. , 2017 ) , DGI ( Veličković et al. , 2019 ) , GIN ( Xu et al. , 2019 ) , PPNP and APPNP ( Klicpera et al. , 2019 ) , to name a few . Despite the empirical successes of GNNs in both node-level and graph-level tasks , they remain not well understood due to limited systematic and theoretical analysis of GNNs . For example , researchers have found that GNNs , unlike their non-graph counterparts , suffer from performance degradation with increasing depth , their expressive power decaying exponentially in the number of layers ( Oono & Suzuki , 2020 ) . Such behavior is only partially explained by the oversmoothing phenomenon ( Li et al. , 2018 ; Zhao & Akoglu , 2020 ) . Another surprising observation shows that a Simplified Graph Convolution model , named SGC ( Wu et al. , 2019 ) , can achieve similar performance to various more complex GNNs on a variety of node classification tasks . Moreover , a simple baseline that does not utilize the graph structure at all performs similarly to state-of-the-art GNNs on graph classification tasks ( Errica et al. , 2020 ) . These observations call attention to studies for a better understanding of GNNs ( NT & Maehara , 2019 ; Morris et al. , 2019 ; Xu et al. , 2019 ; Oono & Suzuki , 2020 ; Loukas , 2020 ; Srinivasan & Ribeiro , 2020 ) . ( See Sec . 2 for more on understanding GNNs . 
) Toward a systematic analysis and better understanding of GNNs , we establish a connection between the graph convolution operator of GCN ( and PPNP ) and Graph-regularized PCA ( GPCA ) ( Zhang & Zhao , 2012 ) , and show the similarity between GCN and stacking GPCA . This connection provides a deeper understanding of GCN ’ s power and limitations . Empirically , we also find that GPCA performance matches that of many GNN baselines on benchmark semi-supervised node classification tasks . We argue that the simple GPCA should be a strong baseline in the future . What is more , the unsupervised stacking GPCA can be viewed as an “ unsupervised GCN ” and provides a straightforward , yet systematic way to initialize GCN training . We summarize our contributions as follows : • Connection between Graph Convolution and GPCA : We establish the connection between the graph convolution operator of GCN ( also PPNP ) and the closed-form solution of the graph-regularized PCA ( GPCA ) formulation . We demonstrate that a simple graph-regularized PCA paired with a 1- or 2-layer MLP can achieve similar or even better results than state-of-the-art GNN baselines over several benchmark datasets . We further extend GPCA to the ( semi- ) supervised setting , which can generate embeddings using label information and yields better performance on 3 out of 5 datasets . The outstanding performance of simple GPCA supports the view that the prowess of GCN on the node classification task comes from graph-based regularization . This motivates the study and design of other graph regularization techniques in the future . • GPCANET : New Stacking GPCA model : Capitalizing on the connection between GPCA and graph convolution , we design a new GNN model called GPCANET shaped by ( 1 ) stacking multiple GPCA layers and nonlinear transformations , and ( 2 ) fine-tuning end-to-end via supervised training . 
GPCANET is a generalized GCN model with adjustable hyperparameters that control the strength of graph regularization of each layer . We show that with stronger regularization , we can train GPCANET with fewer ( 1–3 ) layers and achieve performance comparable to much deeper GCNs . • First initialization strategy for GNNs : Capitalizing on the connection between GCN and GPCANET , we design a new strategy to initialize GCN training based on stacking GPCA , outperforming the popular Xavier initialization ( Glorot & Bengio , 2010 ) . We show that GPCANET-initialization is extremely effective for training deeper GCNs , as it significantly improves convergence speed , performance , and robustness . Notably , GPCANET-initialization is general-purpose and also applies to other GNNs . To our knowledge , it is the first initialization method specifically designed for GNNs . We open-source code at http://bit.ly/GPCANet . All datasets are public-domain . 2 RELATED WORK . Understanding GNNs . Our work concerns learning on a single graph , hence we limit the discussion of related work to node-level GNNs . GCN ’ s graph convolution is originally motivated from the approximation of graph filters in graph signal processing ( Kipf & Welling , 2017 ) . NT & Maehara ( 2019 ) show that graph convolution only performs low-pass filtering on the original feature vectors , and also state a connection between graph filtering and Laplacian regularized least squares . Motivated by the oversmoothing phenomenon of graph convolution , Oono & Suzuki ( 2020 ) theoretically prove that GCN can only preserve information of node degrees and connected components when the number of layers goes to infinity , under some conditions on GCN weights . Recently several papers revisited the connection of graph convolution to graph-regularized optimization problems ( Li et al. , 2019 ; Ma et al. , 2020 ; Pan et al. , 2021 ; Zhao & Akoglu , 2020 ; Zhu et al. 
, 2021 ) , which was originally discussed in graph signal processing ( Shuman et al. , 2013 ) . More specifically , both Ma et al . ( 2020 ) and Zhu et al . ( 2021 ) relate graph-regularized optimization to several GNNs such as GCN ( Kipf & Welling , 2017 ) , APPNP ( Klicpera et al. , 2019 ) , and GAT ( Veličković et al. , 2018 ) . However , all previous work studies these connections while ignoring the learnable parameters , which are essential for high-performance deep learning . Our work differs from these by establishing a stronger and closer connection to graph-regularized PCA that also takes learnable parameters into account . Graph-regularized PCA . PCA and its variants are standard linear dimensionality reduction approaches . Several works extend PCA to graph-structured data , such as Graph-Laplacian PCA ( Jiang et al. , 2013 ) and Manifold-regularized Matrix Factorization ( Zhang & Zhao , 2012 ) . For other variants , see Shahid et al . ( 2016 ) . Stacking Models and Deep Learning . The connection between CNN and stacking PCA has been explored in PCANet ( Chan et al. , 2015 ) , which demonstrated that simple ( unsupervised ) stacked PCA works as well as a supervised CNN over a large variety of vision tasks . The original PCANet is shallow and does not have nonlinear transformations , while PCANet+ ( Low et al. , 2017 ) overcomes these limitations and pushes the architecture much deeper . The idea of layerwise stacking for feature extraction is not new and was empirically observed to exhibit better representation ability in terms of classification . For a comprehensive review , we refer to Bengio et al . ( 2013 ) . Initialization . Traditionally , neural networks ( NNs ) were initialized with random weights generated from a Gaussian distribution with zero mean and a small standard deviation ( Krizhevsky et al. , 2012 ) . 
As training deeper NNs became extremely difficult due to vanishing gradients and the choice of activation functions , Glorot & Bengio ( 2010 ) provided a specific weight initialization formula , named Xavier initialization , based on variance analysis without considering the activation function . Xavier initialization is widely used for any type of NN even today , and it is the main initialization strategy used for GNNs . Later , He et al . ( 2015 ) adapted Xavier initialization to the ReLU activation by considering a multiplier . Taking another direction , Saxe et al . ( 2013 ) analyzed the dynamics of training deep NNs and proposed random orthonormal initialization . Mishkin & Matas ( 2015 ) further improved orthonormal initialization for batch normalization ( Ioffe & Szegedy , 2015 ) . Different from these data-independent approaches , others ( Krähenbühl et al. , 2016 ; Seuret et al. , 2017 ; Wagner et al. , 2013 ) have employed data-dependent techniques , like PCA , to initialize deep NNs . Although initialization has been widely studied for general NNs , no specific initialization has been proposed for GNNs . In this work , we propose a data-driven initialization technique ( based on GPCA ) , specific to GNNs , for the first time . 3 GRAPH CONVOLUTION AND GPCA . 3.1 GRAPH CONVOLUTION . Consider a node-attributed input graph G = ( V , E , X ) with |V| = n nodes and |E| = m edges , where X ∈ R^{n×d} denotes the node feature matrix with d features . Broadly , the graph convolution operation convolves the features ( or representations ) over the graph structure . GCN . Similar to other neural networks stacked with repeated layers , GCN contains multiple graph convolution layers , each of which is followed by a nonlinear activation . 
Let H^{(l)} be the l-th hidden layer representation ; then , each GCN layer performs H^{(l+1)} = σ ( Ã_sym H^{(l)} W^{(l)} ) ( 1 ) where Ã_sym = D̃^{−1/2} ( A + I ) D̃^{−1/2} denotes the n×n symmetrically normalized adjacency matrix with self-loops , D̃ is the diagonal degree matrix where D̃_ii = 1 + ∑_{j=1}^{n} A_ij , W^{(l)} depicts the l-th layer parameters ( to be learned ) , and σ is the nonlinear activation function . Formally , graph convolution is parameterized with W and maps an input X to a new representation Z as Z = Ã_sym X W . ( 2 ) PPNP . For PPNP ( Klicpera et al. , 2019 ) , the features are first transformed by an MLP before convolving over the graph . Formally , the operation is revised as Z = µ ( I − ( 1 − µ ) Ã_sym )^{−1} MLP_W ( X ) = ( I + α L̃ )^{−1} MLP_W ( X ) ( 3 ) where we replace µ with α = ( 1 − µ ) / µ , L̃ := I − Ã_sym denotes the normalized graph Laplacian , and W depicts the learnable MLP parameters . As matrix inversion is expensive , an approximate version called APPNP that employs the power method ( Golub & Van Loan , 1989 ) is often used in practice . 3.2 GRAPH-REGULARIZED PCA ( GPCA ) . As stated by Bengio et al . ( 2013 ) , “ Although depth is an important part of the story , many other priors are interesting and can be conveniently captured when the problem is cast as one of learning a representation. ” GPCA is one such representation learning technique with a graph-based prior . Standard PCA learns k-dimensional projections Z ∈ R^{n×k} of the feature matrix X ∈ R^{n×d} , aiming to minimize the reconstruction error ∥X − Z W^T∥_F^2 , ( 4 ) subject to W ∈ R^{d×k} being an orthonormal basis . GPCA extends this formalism to graph-structured data by additionally assuming either smoothing bases ( Jiang et al. , 2013 ) or smoothing projections ( Zhang & Zhao , 2012 ) over the graph . In this work we consider the latter case , where low-dimensional projections are smooth over the input graph G , whose normalized Laplacian matrix is denoted by L̃ = I − Ã_sym . 
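As a concrete illustration of the graph convolution in Eqs. ( 1 ) – ( 2 ) above , here is a minimal numpy sketch that builds Ã_sym with self-loops and applies one GCN layer . The toy graph , ReLU as σ , and the dense matrices are illustrative assumptions ; real implementations use sparse operations and learned weights W .

```python
import numpy as np

def normalized_adjacency(A):
    """A_sym = D^{-1/2} (A + I) D^{-1/2}, where D_ii = 1 + sum_j A_ij (self-loops)."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)
    d = 1.0 + A.sum(axis=1)                 # degrees including the self-loop
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_layer(A_sym, H, W):
    """One graph-convolution layer, Eq. (1): H' = sigma(A_sym H W), with ReLU as sigma."""
    return np.maximum(0.0, A_sym @ H @ W)
```

Stacking several such calls , with a different W per layer , reproduces the multi-layer GCN described in the text .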
The objective formulation of GPCA is then given as min_{Z , W} ∥X − Z W^T∥_F^2 + α Tr ( Z^T L̃ Z ) s.t . W^T W = I ( 5 ) where α is a hyperparameter that balances the reconstruction error and the variation of the projections over the graph . Note that the first part of Eq . ( 5 ) , along with the constraint , corresponds to the objective of the original PCA , while the second part is a graph regularization term that aims to “ smooth ” the learned representations Z over the graph structure . As such , GPCA becomes the standard PCA when α = 0 . Similar to PCA , problem ( 5 ) is non-convex but has a closed-form solution ( Zhang & Zhao , 2012 ) . Surprisingly , as we show , it has a close connection with the graph convolution formulation in Eq . ( 2 ) . In the following , we give the GPCA solution and then detail its connection to graph convolution . Theorem 3.1 . GPCA with the formulation shown in ( 5 ) has the optimal solution ( Z* , W* ) given by Z* = ( I + α L̃ )^{−1} X W* , and W* = ( w_1 , w_2 , ... , w_k ) ( 6 ) where w_1 , ... , w_k are the eigenvectors of X^T ( I + α L̃ )^{−1} X corresponding to the largest k eigenvalues . Proof . The proof can be found in Appendix A.1 . | This paper relates GCN to PCA from the perspective of optimization. The authors propose Graph PCA that is a general form of GCN. They further introduce a regularization term that enforces nodes with the same labels to be close to each other. | SP:b31551ab379915f28477cb1f49699cb811a91d29 |
Connecting Graph Convolution and Graph PCA | 1 INTRODUCTION . Graph neural networks ( GNNs ) are neural networks designed for the graph domain . Since the breakthrough of GCN ( Kipf & Welling , 2017 ) , which notably improved performance on the semi-supervised node classification problem , many GNN variants have been proposed , including GAT ( Veličković et al. , 2018 ) , GraphSAGE ( Hamilton et al. , 2017 ) , DGI ( Veličković et al. , 2019 ) , GIN ( Xu et al. , 2019 ) , PPNP and APPNP ( Klicpera et al. , 2019 ) , to name a few . Despite the empirical successes of GNNs in both node-level and graph-level tasks , they remain not well understood due to limited systematic and theoretical analysis of GNNs . For example , researchers have found that GNNs , unlike their non-graph counterparts , suffer from performance degradation with increasing depth , their expressive power decaying exponentially in the number of layers ( Oono & Suzuki , 2020 ) . Such behavior is only partially explained by the oversmoothing phenomenon ( Li et al. , 2018 ; Zhao & Akoglu , 2020 ) . Another surprising observation shows that a Simplified Graph Convolution model , named SGC ( Wu et al. , 2019 ) , can achieve similar performance to various more complex GNNs on a variety of node classification tasks . Moreover , a simple baseline that does not utilize the graph structure at all performs similarly to state-of-the-art GNNs on graph classification tasks ( Errica et al. , 2020 ) . These observations call attention to studies for a better understanding of GNNs ( NT & Maehara , 2019 ; Morris et al. , 2019 ; Xu et al. , 2019 ; Oono & Suzuki , 2020 ; Loukas , 2020 ; Srinivasan & Ribeiro , 2020 ) . ( See Sec . 2 for more on understanding GNNs . 
) Toward a systematic analysis and better understanding of GNNs , we establish a connection between the graph convolution operator of GCN ( and PPNP ) and Graph-regularized PCA ( GPCA ) ( Zhang & Zhao , 2012 ) , and show the similarity between GCN and stacking GPCA . This connection provides a deeper understanding of GCN ’ s power and limitations . Empirically , we also find that GPCA performance matches that of many GNN baselines on benchmark semi-supervised node classification tasks . We argue that the simple GPCA should be a strong baseline in the future . What is more , the unsupervised stacking GPCA can be viewed as an “ unsupervised GCN ” and provides a straightforward , yet systematic way to initialize GCN training . We summarize our contributions as follows : • Connection between Graph Convolution and GPCA : We establish the connection between the graph convolution operator of GCN ( also PPNP ) and the closed-form solution of the graph-regularized PCA ( GPCA ) formulation . We demonstrate that a simple graph-regularized PCA paired with a 1- or 2-layer MLP can achieve similar or even better results than state-of-the-art GNN baselines over several benchmark datasets . We further extend GPCA to the ( semi- ) supervised setting , which can generate embeddings using label information and yields better performance on 3 out of 5 datasets . The outstanding performance of simple GPCA supports the view that the prowess of GCN on the node classification task comes from graph-based regularization . This motivates the study and design of other graph regularization techniques in the future . • GPCANET : New Stacking GPCA model : Capitalizing on the connection between GPCA and graph convolution , we design a new GNN model called GPCANET shaped by ( 1 ) stacking multiple GPCA layers and nonlinear transformations , and ( 2 ) fine-tuning end-to-end via supervised training . 
GPCANET is a generalized GCN model with adjustable hyperparameters that control the strength of graph regularization of each layer . We show that with stronger regularization , we can train GPCANET with fewer ( 1–3 ) layers and achieve performance comparable to much deeper GCNs . • First initialization strategy for GNNs : Capitalizing on the connection between GCN and GPCANET , we design a new strategy to initialize GCN training based on stacking GPCA , outperforming the popular Xavier initialization ( Glorot & Bengio , 2010 ) . We show that GPCANET-initialization is extremely effective for training deeper GCNs , as it significantly improves convergence speed , performance , and robustness . Notably , GPCANET-initialization is general-purpose and also applies to other GNNs . To our knowledge , it is the first initialization method specifically designed for GNNs . We open-source code at http://bit.ly/GPCANet . All datasets are public-domain . 2 RELATED WORK . Understanding GNNs . Our work concerns learning on a single graph , hence we limit the discussion of related work to node-level GNNs . GCN ’ s graph convolution is originally motivated from the approximation of graph filters in graph signal processing ( Kipf & Welling , 2017 ) . NT & Maehara ( 2019 ) show that graph convolution only performs low-pass filtering on the original feature vectors , and also state a connection between graph filtering and Laplacian regularized least squares . Motivated by the oversmoothing phenomenon of graph convolution , Oono & Suzuki ( 2020 ) theoretically prove that GCN can only preserve information of node degrees and connected components when the number of layers goes to infinity , under some conditions on GCN weights . Recently several papers revisited the connection of graph convolution to graph-regularized optimization problems ( Li et al. , 2019 ; Ma et al. , 2020 ; Pan et al. , 2021 ; Zhao & Akoglu , 2020 ; Zhu et al. 
, 2021 ) , which was originally discussed in graph signal processing ( Shuman et al. , 2013 ) . More specifically , both Ma et al . ( 2020 ) and Zhu et al . ( 2021 ) relate graph-regularized optimization to several GNNs such as GCN ( Kipf & Welling , 2017 ) , APPNP ( Klicpera et al. , 2019 ) , and GAT ( Veličković et al. , 2018 ) . However , all previous work studies these connections while ignoring the learnable parameters , which are essential for high-performance deep learning . Our work differs from these by establishing a stronger and closer connection to graph-regularized PCA that also takes learnable parameters into account . Graph-regularized PCA . PCA and its variants are standard linear dimensionality reduction approaches . Several works extend PCA to graph-structured data , such as Graph-Laplacian PCA ( Jiang et al. , 2013 ) and Manifold-regularized Matrix Factorization ( Zhang & Zhao , 2012 ) . For other variants , see Shahid et al . ( 2016 ) . Stacking Models and Deep Learning . The connection between CNN and stacking PCA has been explored in PCANet ( Chan et al. , 2015 ) , which demonstrated that simple ( unsupervised ) stacked PCA works as well as a supervised CNN over a large variety of vision tasks . The original PCANet is shallow and does not have nonlinear transformations , while PCANet+ ( Low et al. , 2017 ) overcomes these limitations and pushes the architecture much deeper . The idea of layerwise stacking for feature extraction is not new and was empirically observed to exhibit better representation ability in terms of classification . For a comprehensive review , we refer to Bengio et al . ( 2013 ) . Initialization . Traditionally , neural networks ( NNs ) were initialized with random weights generated from a Gaussian distribution with zero mean and a small standard deviation ( Krizhevsky et al. , 2012 ) . 
As training deeper NNs became extremely difficult due to vanishing gradients and the choice of activation functions , Glorot & Bengio ( 2010 ) provided a specific weight initialization formula , named Xavier initialization , based on variance analysis without considering the activation function . Xavier initialization is widely used for any type of NN even today , and it is the main initialization strategy used for GNNs . Later , He et al . ( 2015 ) adapted Xavier initialization to the ReLU activation by considering a multiplier . Taking another direction , Saxe et al . ( 2013 ) analyzed the dynamics of training deep NNs and proposed random orthonormal initialization . Mishkin & Matas ( 2015 ) further improved orthonormal initialization for batch normalization ( Ioffe & Szegedy , 2015 ) . Different from these data-independent approaches , others ( Krähenbühl et al. , 2016 ; Seuret et al. , 2017 ; Wagner et al. , 2013 ) have employed data-dependent techniques , like PCA , to initialize deep NNs . Although initialization has been widely studied for general NNs , no specific initialization has been proposed for GNNs . In this work , we propose a data-driven initialization technique ( based on GPCA ) , specific to GNNs , for the first time . 3 GRAPH CONVOLUTION AND GPCA . 3.1 GRAPH CONVOLUTION . Consider a node-attributed input graph G = ( V , E , X ) with |V| = n nodes and |E| = m edges , where X ∈ R^{n×d} denotes the node feature matrix with d features . Broadly , the graph convolution operation convolves the features ( or representations ) over the graph structure . GCN . Similar to other neural networks stacked with repeated layers , GCN contains multiple graph convolution layers , each of which is followed by a nonlinear activation . 
Let H^{(l)} be the l-th hidden layer representation ; then , each GCN layer performs H^{(l+1)} = σ ( Ã_sym H^{(l)} W^{(l)} ) ( 1 ) where Ã_sym = D̃^{−1/2} ( A + I ) D̃^{−1/2} denotes the n×n symmetrically normalized adjacency matrix with self-loops , D̃ is the diagonal degree matrix where D̃_ii = 1 + ∑_{j=1}^{n} A_ij , W^{(l)} depicts the l-th layer parameters ( to be learned ) , and σ is the nonlinear activation function . Formally , graph convolution is parameterized with W and maps an input X to a new representation Z as Z = Ã_sym X W . ( 2 ) PPNP . For PPNP ( Klicpera et al. , 2019 ) , the features are first transformed by an MLP before convolving over the graph . Formally , the operation is revised as Z = µ ( I − ( 1 − µ ) Ã_sym )^{−1} MLP_W ( X ) = ( I + α L̃ )^{−1} MLP_W ( X ) ( 3 ) where we replace µ with α = ( 1 − µ ) / µ , L̃ := I − Ã_sym denotes the normalized graph Laplacian , and W depicts the learnable MLP parameters . As matrix inversion is expensive , an approximate version called APPNP that employs the power method ( Golub & Van Loan , 1989 ) is often used in practice . 3.2 GRAPH-REGULARIZED PCA ( GPCA ) . As stated by Bengio et al . ( 2013 ) , “ Although depth is an important part of the story , many other priors are interesting and can be conveniently captured when the problem is cast as one of learning a representation. ” GPCA is one such representation learning technique with a graph-based prior . Standard PCA learns k-dimensional projections Z ∈ R^{n×k} of the feature matrix X ∈ R^{n×d} , aiming to minimize the reconstruction error ∥X − Z W^T∥_F^2 , ( 4 ) subject to W ∈ R^{d×k} being an orthonormal basis . GPCA extends this formalism to graph-structured data by additionally assuming either smoothing bases ( Jiang et al. , 2013 ) or smoothing projections ( Zhang & Zhao , 2012 ) over the graph . In this work we consider the latter case , where low-dimensional projections are smooth over the input graph G , whose normalized Laplacian matrix is denoted by L̃ = I − Ã_sym . 
The objective formulation of GPCA is then given as min_{Z , W} ∥X − Z W^T∥_F^2 + α Tr ( Z^T L̃ Z ) s.t . W^T W = I ( 5 ) where α is a hyperparameter that balances the reconstruction error and the variation of the projections over the graph . Note that the first part of Eq . ( 5 ) , along with the constraint , corresponds to the objective of the original PCA , while the second part is a graph regularization term that aims to “ smooth ” the learned representations Z over the graph structure . As such , GPCA becomes the standard PCA when α = 0 . Similar to PCA , problem ( 5 ) is non-convex but has a closed-form solution ( Zhang & Zhao , 2012 ) . Surprisingly , as we show , it has a close connection with the graph convolution formulation in Eq . ( 2 ) . In the following , we give the GPCA solution and then detail its connection to graph convolution . Theorem 3.1 . GPCA with the formulation shown in ( 5 ) has the optimal solution ( Z* , W* ) given by Z* = ( I + α L̃ )^{−1} X W* , and W* = ( w_1 , w_2 , ... , w_k ) ( 6 ) where w_1 , ... , w_k are the eigenvectors of X^T ( I + α L̃ )^{−1} X corresponding to the largest k eigenvalues . Proof . The proof can be found in Appendix A.1 . | This manuscript looks at the classic graph-regularized PCA (GPCA) and tries to build the connection between GPCA and the state-of-the-art GCN, and finally proposes a new deep graph network, GPCANet. The authors make a number of clear contributions as listed in the paper: 1) they build the connection between GPCA and GCN, 2) based on this connection, they propose novel ways of using GPCA as a graph layer or as an initialization process for training GCN, etc., and 3) thus present a new architecture, GPCANet, that performs well on their tests. | SP:b31551ab379915f28477cb1f49699cb811a91d29 |
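The closed-form GPCA solution of Theorem 3.1 above can be computed directly ; below is a minimal dense numpy sketch ( the function name and toy inputs are assumptions , and large graphs would require sparse linear solvers rather than an explicit matrix inverse ) .

```python
import numpy as np

def gpca(X, A, k, alpha):
    """Closed-form GPCA per Theorem 3.1:
    W* = top-k eigenvectors of X^T (I + alpha*L)^-1 X,  Z* = (I + alpha*L)^-1 X W*."""
    n = A.shape[0]
    A_tilde = A + np.eye(n)
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_sym = d_inv_sqrt[:, None] * A_tilde * d_inv_sqrt[None, :]
    L = np.eye(n) - A_sym                      # normalized graph Laplacian
    S = np.linalg.inv(np.eye(n) + alpha * L)   # graph smoothing operator
    M = X.T @ S @ X                            # symmetric, since S is symmetric
    eigvals, eigvecs = np.linalg.eigh(M)       # eigenvalues in ascending order
    W = eigvecs[:, ::-1][:, :k]                # eigenvectors of the k largest eigenvalues
    return S @ X @ W, W
```

With alpha = 0 the smoothing operator reduces to the identity and the routine recovers standard PCA directions , matching the remark after Eq. ( 5 ) .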
Maximizing Ensemble Diversity in Deep Reinforcement Learning | 1 INTRODUCTION. Reinforcement learning (RL) agents trained with high-capacity function approximators such as deep neural networks have been shown to solve complex sequential decision-making problems, including the board games of Chess, Go and Shogi (Silver et al., 2016; 2017; 2018), achieving super-human performance in video games (Mnih et al., 2015; Vinyals et al., 2019) and solving robotic manipulation tasks (Liu et al., 2021). Despite achieving these tremendous goals, modern deep reinforcement learning (DRL) algorithms have a plethora of limitations. For example, it is well known that DRL algorithms are sample-inefficient and require a stupendous amount of environment interactions to learn an optimal policy (Łukasz Kaiser et al., 2020). Additional problems encountered and exacerbated while training a DRL agent include the overestimation bias that occurs while estimating the target values for Q-learning (Fujimoto et al., 2018; Lan et al., 2020; Hado van Hasselt et al., 2016), error propagation during the Bellman backup (Kumar et al., 2019), and the trade-off between exploration and exploitation (Chen et al., 2017). Recently, the use of ensembles has been a popular choice to address the above-mentioned issues. These methods combine multiple neural networks to model the value functions, the policy, or both (Osband et al., 2016; Chen et al., 2017; Lan et al., 2020; Lee et al., 2020). For example, TD3 (Fujimoto et al., 2018) used two critics to address the overestimation bias problem in continuous control, while MaxminDQN (Lan et al., 2020) provided a mechanism that uses the cardinality of the ensemble as a knob to tune between over- and underestimation in deep Q-learning. Similarly, Bootstrapped DQN (Osband et al., 2016; Chen et al., 2017) used ensembles for effective exploration.
∗Partial work done while being a Ph.D. student at University of Central Florida. The primary insight of this paper is that the performance of ensemble-based methods is contingent on maintaining sufficient diversity between the neural networks of the ensemble. If the neural networks in the ensemble converge to a common representation (we will show that this is the case in many scenarios), the performance of these approaches significantly degrades. We note that even with different representations, the Q-values will still converge towards a shared optimum, but they are statistically less likely to follow the same learning trajectory elsewhere. In this paper, we propose Maximize Ensemble Diversity in Reinforcement Learning (MED-RL), a set of regularization methods inspired by economics and consensus optimization that improve diversity and prevent the collapse of the representations in ensemble-based deep reinforcement learning methods by encouraging inequality between the networks during training. The objective of the regularizers is solely to keep the representations different, while still allowing the models to converge to the optimal Q-value. The motivation for the regularizers came from the topic of income distribution in economic theory, which provides a rich source of mathematical formulations that measure inequality. While in economics high inequality is seen as a negative, in our case we use the inequality metrics to encourage diversity between the neural networks. To summarize, our contributions are the following: 1. We empirically show that high representation similarity between neural-network-based Q-functions leads to degradation in performance in ensemble-based Q-learning methods. 2. To mitigate this, we propose five regularizers based on inequality measures from economic theory and consensus optimization that maximize diversity between the neural networks in ensemble-based reinforcement learning methods. 3.
We integrated MED-RL into TD3 (Fujimoto et al., 2018), SAC (Haarnoja et al., 2018) and REDQ (Chen et al., 2021) for continuous control tasks, and into MaxminDQN (Lan et al., 2020) and EnsembleDQN (Anschel et al., 2017) for discrete control tasks, and evaluated on six Mujoco environments and six Atari games. Our results show that MED-RL-augmented algorithms outperform their unregularized counterparts significantly, in some cases achieving more than 300% performance gains, and are up to 75% more sample-efficient. 4. We also show that MED-RL-augmented SAC is more sample-efficient than REDQ, an ensemble-based method specifically designed for sample efficiency, and can achieve similar performance to REDQ up to 50 times faster in wall-clock time. 2 RELATED WORK. Ensembles in Deep RL: The use of an ensemble of neural networks in deep RL has been studied in several recent works for different purposes. Fujimoto et al. (2018), Anschel et al. (2017) and Lan et al. (2020) used an ensemble to address the overestimation bias in deep Q-learning-based methods for both continuous and discrete control tasks. Similarly, Bootstrapped DQN and its extensions (Osband et al., 2016; Chen et al., 2017) leveraged ensembles of neural networks for efficient exploration. The problem of error propagation in the Bellman backup was addressed in (Kumar et al., 2019) using an ensemble of neural networks. Sample efficiency, a notorious problem in RL, has also benefited from ensembles (Chen et al., 2021). Recently, SUNRISE (Lee et al., 2020) proposed a unified framework for ensemble-based deep reinforcement learning. Diversity in Ensembles: Diversity in neural network ensembles has been studied since years before the resurgence of deep learning (Brown, 2004). Even though diversity is an important topic in neural networks, most of the studies on this topic revolve around addressing problems in supervised learning settings.
More recently, there have been a number of studies that use diversity in ensembles to measure and improve model uncertainty. Jain et al. (2020) proposed a diversity regularizer to improve uncertainty estimates on out-of-distribution data. Lee et al. (2015) used Multiple Choice Learning to learn diverse convolutional neural networks for image recognition. Regularization in Reinforcement Learning: Regularization in reinforcement learning has been used to perform effective exploration and to learn generalized policies. For instance, (Grau-Moya et al., 2019) uses mutual-information regularization to optimize a prior action distribution for better performance and exploration, (Cheng et al., 2019) regularizes the policy π(a|s) using a control prior, and (Galashov et al., 2019) uses temporal-difference-error regularization to reduce variance in Generalized Advantage Estimation (Schulman et al., 2016). Generalization in reinforcement learning refers to the performance of the policy on environments different from the training environment. For example, (Farebrother et al., 2018) studied the effect of the L2 norm on DQN generalization, (Tobin et al., 2017) studied generalization between simulations and the real world, (Pattanaik et al., 2018) studied parameter variations, and (Zhang et al., 2018) studied the effect of different random seeds in environment generation. Diversity in Reinforcement Learning: Diversity in reinforcement learning is an active area of research. (Pacchiano et al., 2020) uses Determinantal Point Processes to promote behavioral diversity, Lupu et al. (2021) used policy diversity to improve zero-shot coordination in multi-agent settings, and (Tang et al., 2021) uses reward randomization to discover diverse strategic policies in complex multi-agent games. In (Li et al.
, 2021), CDS is proposed, which uses an information-theoretical objective to maximize the mutual information between agents' identities and trajectories and thereby encourages diversity. More recently, (An et al., 2021) used diversified Q-ensembles to address overestimation in offline reinforcement learning. Representation Similarity: Measuring similarity between the representations learned by different neural networks is an active area of research. For instance, (Raghu et al., 2017) used Canonical Correlation Analysis (CCA) to measure representation similarity. CCA finds two basis matrices such that when the original matrices are projected onto these bases, the correlation is maximized. (Raghu et al., 2017; Mroueh et al., 2015) used truncated singular value decomposition on the activations to make it robust to perturbations. Other works such as (Li et al., 2015) and (Wang et al., 2018) studied the correlation between the neurons in the neural networks. 3 BACKGROUND. Reinforcement learning: We consider an agent acting in a Markov Decision Process (MDP) defined as a five-element tuple (S, A, P, r, γ), where S is the state space, A is the action space, P : S × A × S → [0, 1] are the state-action transition probabilities, r : S × A × S → R is the reward mapping, and γ ∈ [0, 1] is the discount factor. At each time step t the agent observes the state of the environment s_t ∈ S and selects an action a_t ∈ A. The action triggers a transition to a new state s_{t+1} ∈ S according to the transition probabilities P, while the agent receives a scalar reward R_t = r(s_t, a_t, s_{t+1}). The goal of the agent is to learn a policy π that maximizes the expectation of the discounted sum of future rewards. Representation Similarity Measure: Let X ∈ R^{n×p1} denote a matrix of activations of p1 neurons for n examples and Y ∈ R^{n×p2} denote a matrix of activations of p2 neurons for the same n examples.
Furthermore, we consider K_ij = k(x_i, x_j) and L_ij = l(y_i, y_j), where k and l are two kernels. Centered Kernel Alignment (CKA) (Kornblith et al., 2019; Cortes et al., 2012; Cristianini et al., 2002) is a method for comparing representations of neural networks and identifying correspondences between layers, not only in the same network but also across different neural network architectures. CKA is a normalized form of the Hilbert-Schmidt Independence Criterion (HSIC) (Gretton et al., 2005). Formally, CKA is defined as: CKA(K, L) = HSIC(K, L) / √(HSIC(K, K) · HSIC(L, L)). HSIC is a test statistic for determining whether two sets of variables are independent. The empirical estimator of HSIC is defined as: HSIC(K, L) = (1/(n−1)²) tr(KHLH), where H is the centering matrix H_n = I_n − (1/n) 1 1^T. 4 MAXIMIZE ENSEMBLE DIVERSITY IN REINFORCEMENT LEARNING. In this section, we propose MED-RL: Maximize Ensemble Diversity in Reinforcement Learning, a set of regularizers inspired by economics and consensus optimization that improve diversity and prevent the collapse of the representations in ensemble-based deep reinforcement learning methods by encouraging inequality between the networks during training. This section is organized as follows: 1. We empirically show that high representation similarity between neural-network-based Q-functions leads to degradation in performance in ensemble-based Q-learning methods. 2. We present the economics-theory- and consensus-optimization-inspired regularizers with their mathematical formulation. | MED-RL: This paper studies fostering diversity in an ensemble of DRL networks via regularization methods. The paper is an empirical one; it compares five ensemble methods with and without the proposed diversity algorithm on six Mujoco and six Atari games, and shows some results. The algorithm proposed is a modification of MaxminDQN by Lan et al. with a regularization. 
| SP:8268301968d5bb106483c7603117df2335d64610 |
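The CKA/HSIC similarity measure defined in the background section above can be written in a few lines of NumPy. This is an illustrative sketch using linear kernels K = XX^T and L = YY^T (the definitions admit other kernel choices as well):

```python
import numpy as np

def hsic(K, L):
    """Empirical HSIC estimator: tr(K H L H) / (n-1)^2,
    with centering matrix H = I - (1/n) 11^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def cka(X, Y):
    """Linear CKA between activation matrices X (n x p1) and Y (n x p2):
    CKA(K, L) = HSIC(K, L) / sqrt(HSIC(K, K) * HSIC(L, L))."""
    K, L = X @ X.T, Y @ Y.T
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 5))
Q = np.linalg.qr(rng.standard_normal((5, 5)))[0]  # random orthogonal matrix
s_self = cka(X, X)      # similarity with itself
s_rot = cka(X, X @ Q)   # invariant to orthogonal transforms of the features
```

The orthogonal-invariance property (s_rot = s_self = 1 here) is one reason CKA is a convenient tool for detecting when ensemble members have collapsed to essentially the same representation.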
Maximizing Ensemble Diversity in Deep Reinforcement Learning | The paper considers a problem of ensemble-based deep RL methods: that ensembles of critic networks converge to the same point in the representation space. To address it, the paper proposes a regularization technique that forces the representations of a critic network to be dissimilar from those of the other critic networks in the ensemble.
It is empirically shown that this regularization technique improves the sample efficiency and asymptotic performance of baseline algorithms. | SP:8268301968d5bb106483c7603117df2335d64610 |
First-Order Optimization Inspired from Finite-Time Convergent Flows | 1 INTRODUCTION. Consider the unconstrained minimization problem for a given cost function f : R^n → R. When f is sufficiently regular, the standard algorithm in continuous time (dynamical system) is given by ẋ = F_GF(x) := −∇f(x) (1) with ẋ := (d/dt) x(t), known as the gradient flow (GF). Generalizing GF, the q-rescaled GF (q-RGF) of Wibisono et al. (2016), given by ẋ = F_{q-RGF}(x) = −c ∇f(x) / ‖∇f(x)‖_2^{(q−2)/(q−1)} (2) with c > 0 and q ∈ (1, ∞], has an asymptotic convergence rate f(x(t)) − f(x*) = O(1/t^{q−1}) under mild regularity, for ‖x(0) − x*‖ > 0 small enough, where x* ∈ R^n denotes a local minimizer of f. However, it has recently been proven (Romero & Benosman, 2020) that q-RGF, as well as the q-signed GF (q-SGF) ẋ = F_{q-SGF}(x) = −c ‖∇f(x)‖_1^{1/(q−1)} sign(∇f(x)), (3) where sign(·) denotes the sign function (element-wise), are both finite-time convergent (in continuous time), provided that f is gradient dominated of order p ∈ (1, q). In particular, if f is strongly convex, then q-RGF and q-SGF are finite-time convergent for any q ∈ (2, ∞], since f must be gradient dominated of order p = 2. Considering that many algorithms are inspired from continuous flows with convergence guarantees, e.g., Muehlebach & Jordan (2019); Fazlyab et al. (2017a); Shi et al. (2018); Zhang et al. (2018); França et al. (2019b); Wibisono et al. (2016), a natural question arises: what is the convergence rate of the corresponding discrete-time algorithms induced by discretization of finite-time convergent continuous-time flows? 1.1 CONTRIBUTION. In this paper, we investigate the convergence behavior of an Euler discretization of the q-RGF (equation 2) and q-SGF (equation 3). We provide convergence guarantees in terms of closeness of solutions, using results from hybrid dynamical control theory.
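A minimal sketch of forward-Euler discretizations of the two flows in Eqs. (2)–(3). The step size, the constant c, and the quadratic test function below are our own illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def qrgf_step(x, grad, q, c=1.0, dt=1e-2, eps=1e-12):
    """One forward-Euler step of the q-rescaled GF (Eq. 2):
    x <- x - dt * c * grad(x) / ||grad(x)||_2^{(q-2)/(q-1)}."""
    g = grad(x)
    norm = np.linalg.norm(g)
    if norm < eps:  # already (numerically) at a stationary point
        return x
    return x - dt * c * g / norm ** ((q - 2.0) / (q - 1.0))

def qsgf_step(x, grad, q, c=1.0, dt=1e-2):
    """One forward-Euler step of the q-signed GF (Eq. 3):
    x <- x - dt * c * ||grad(x)||_1^{1/(q-1)} * sign(grad(x))."""
    g = grad(x)
    return x - dt * c * np.linalg.norm(g, 1) ** (1.0 / (q - 1.0)) * np.sign(g)

# f(x) = 0.5 ||x||^2 is strongly convex, hence gradient dominated of
# order p = 2; any q in (2, inf] then gives finite-time convergence of
# the continuous flows. Here q = 3.
grad = lambda x: x
x = np.array([1.0, -2.0])
x_s = np.array([1.0, -2.0])
for _ in range(2000):
    x = qrgf_step(x, grad, q=3.0)
    x_s = qsgf_step(x_s, grad, q=3.0)
```

Near the minimizer the rescaled step no longer vanishes with the gradient, so the discrete iterates settle into a small neighborhood of x* whose radius shrinks with dt; this is exactly the discretization behavior the paper analyzes.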
Furthermore, we provide iteration/sample complexity upper bounds for both the general (deterministic) and stochastic settings. We then test the performance of the proposed algorithms on both synthetic and real-world data in the context of deep learning, namely on the well-known SVHN dataset. 1.2 RELATED WORK. Propelled by the work of Wang & Elia (2011) and Su et al. (2014), there has been a recent and significant research effort dedicated to analyzing optimization algorithms from the perspective of dynamical systems and control theory, especially in continuous time (Wibisono et al., 2016; Wilson, 2018; Lessard et al., 2016; Fazlyab et al., 2017b; Scieur et al., 2017; Franca et al., 2018; Fazlyab et al., 2018; Taylor et al., 2018; França et al., 2019a; Orvieto & Lucchi, 2019; Romero et al., 2019; Muehlebach & Jordan, 2019). A major focus within this initiative is acceleration, both in trying to gain new insight into more traditional optimization algorithms from this perspective, and in exploiting the interplay between continuous-time systems and their potential discretizations for novel algorithm design (Muehlebach & Jordan, 2019; Fazlyab et al., 2017a; Shi et al., 2018; Zhang et al., 2018; França et al., 2019b; Wilson et al., 2019). Many of these papers also focus on deriving convergence rates based on the discretization of flows designed in the continuous-time domain. Connecting ordinary differential equations (ODEs) and their numerical analysis with optimization algorithms is a very important topic, which dates back to the 1970s; see Botsaris (1978a; b); Zghier (1981); Snyman (1982; 1983); Brockett (1988); Brown (1989). In Helmke & Moore (1994), the authors studied relationships between linear programming, ODEs, and general matrix theory.
Further, Schropp (1995) and Schropp & Singer (2000) explored several aspects linking nonlinear dynamical systems to gradient-based optimization, including nonlinear constraints. Tools from Lyapunov stability theory are often employed for the analysis, mainly because there already exists a rich body of work within the nonlinear systems and control theory community for this purpose. In particular, previous works typically seek asymptotically Lyapunov-stable gradient-based systems with an equilibrium (stationary point) at an isolated extremum of the given cost function, thus certifying local convergence. Naturally, global asymptotic stability leads to global convergence, though such analysis will typically require the cost function to be strongly convex everywhere. For physical systems, a Lyapunov function can often be constructed from first principles via some physically meaningful measure of energy (e.g., total energy = potential energy + kinetic energy). In optimization, the situation is somewhat similar in the sense that a suitable Lyapunov function may often be constructed by taking simple surrogates of the objective function as candidates. For instance, V(x) := f(x) − f(x*) can be a good initial candidate. Further, if f is continuously differentiable and x* is an isolated stationary point, then another alternative is V(x) := ‖∇f(x)‖². However, most fundamental and applied research conducted in systems and control regarding Lyapunov stability theory deals exclusively with continuous-time systems. Unfortunately, (dynamical) stability properties are generally not preserved under simple forward-Euler and sample-and-hold discretizations and control laws (Stuart & Humphries, 1998). Furthermore, practical implementations of optimization algorithms in modern digital computers demand discrete time.
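As a toy illustration of these Lyapunov candidates (our own example, not from the paper), one can check numerically that both V(x) = f(x) − f(x*) and V(x) = ‖∇f(x)‖² decrease monotonically along a forward-Euler discretization of the gradient flow for a strongly convex quadratic, provided the step size is small enough:

```python
import numpy as np

# f(x) = 0.5 x^T A x with A positive definite; x* = 0 and f(x*) = 0,
# so V1(x) = f(x) and V2(x) = ||grad f(x)||^2 are Lyapunov candidates.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x

x = np.array([1.0, 1.0])
eta = 0.1                        # small enough: eta < 2 / lambda_max(A)
V1_hist, V2_hist = [], []
for _ in range(50):
    V1_hist.append(f(x))
    V2_hist.append(np.dot(grad(x), grad(x)))
    x = x - eta * grad(x)        # forward-Euler GF = gradient descent
```

With a step size beyond 2/λ_max(A) the same iteration diverges, illustrating the point above that stability along the continuous flow is not automatically preserved by discretization.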
Nonetheless, it has been extensively noted that a vast amount of general Lyapunov-based results (perhaps most) appear to have a discrete-time equivalent. 2 PRELIMINARIES. 2.1 OPTIMIZATION ALGORITHMS AS DISCRETE-TIME SYSTEMS. Generalizing (1), (2), and (3), consider a continuous-time algorithm (dynamical system) modeled via an ordinary differential equation (ODE) ẋ = F(x) (4) for t ≥ 0, or, more generally, a differential inclusion ẋ(t) ∈ F(x(t)) (5) for a.e. t ≥ 0 (e.g., for the q = ∞ case), such that x(t) → x* as t → t*. In the case of the q-RGF (2) and q-SGF (3) with f gradient dominated of order p ∈ (1, q), we have finite-time convergence, and thus t* = t*(x(0)) < ∞. Most popular numerical optimization schemes can be written in a state-space form (i.e., recursively) as X_{k+1} = F_d(k, X_k) (6a), x_k = G(X_k) (6b) for k ∈ Z_+ := {0, 1, 2, ...} and a given X_0 ∈ R^m (typically m ≥ n), where F_d : Z_+ × R^m → R^m and G : R^m → R^n. Naturally, (6) can be seen as a discrete-time dynamical system constructed by discretizing (4) in time. In particular, we have x_k ≈ x(t_k), where {0 = t_0 < t_1 < t_2 < ...} denotes a time partition and x(·) a solution to (4) or (5), as appropriate. Therefore, we call X_k and x_k, respectively, the state and output at time step k. Example 1. The standard gradient descent (GD) algorithm x_{k+1} = x_k − η∇f(x_k) (7) with step size (learning rate) η > 0 can readily be written in the form (6) by taking m = n, F_d(x) := x − η∇f(x), and G(x) := x. • If the step sizes are adaptive, i.e., if we replace η by a sequence {η_k} with η_k > 0, then we only need to replace F_d(k, x) := x − η_k∇f(x), provided that {η_k} is not computed using feedback from {x_k} (e.g., through a line search method).
• If we do wish to use a time-varying step size, then we can set m = n + 1, G([x; η]) := x, and F_d([x; η]) := [F_d^(1)([x; η]); F_d^(2)([x; η])], where F_d^(1)([x; η]) := x − η∇f(x) and F_d^(2) is a user-defined function that dictates the updates of the step size. In particular, an open-loop adaptive step size {η_k} may be achieved under this scenario, provided that it is possible to write η_{k+1} = F_d^(2)(η_k). • If we wish to use individual step sizes for each of the n components of {x_k}, then it suffices to take η_k as an n-dimensional vector (thus m = 2n) and make appropriate changes in F_d and G. In each of these cases, GD can be seen as a forward-Euler discretization of the GF (1), i.e., x_{k+1} = x_k + ∆t_k F_GF(x_k) (8) with F_GF = −∇f and adaptive time step ∆t_k := t_{k+1} − t_k chosen as ∆t_k = η_k. Example 2. The proximal point algorithm (PPA) x_{k+1} = argmin_{x∈R^n} { f(x) + (1/(2η_k)) ‖x − x_k‖²_2 } (9) with step size η_k > 0 (open loop, for simplicity) can also be written in the form (6) by taking m = n, F_d(k, x) := argmin_{x′∈R^n} { f(x′) + (1/(2η_k)) ‖x′ − x‖²_2 }, and G(x) := x. Naturally, we need to assume sufficient regularity for F_d(k, x) to exist, and we must design a consistent way to choose F_d(k, x) when multiple minimizers exist in its definition. Alternatively, these conditions must be satisfied, at the very least, at every (k, x) ∈ {(0, x_0), (1, x_1), (2, x_2), ...} for a particular chosen initial x_0 ∈ R^n. Assuming sufficient regularity, we have ∇_x { f(x) + (1/(2η_k)) ‖x − x_k‖²_2 } |_{x=x_{k+1}} = 0, and thus ∇f(x_{k+1}) + (1/η_k)(x_{k+1} − x_k) = 0 ⟺ x_{k+1} = x_k + ∆t_k F_GF(x_{k+1}) (10) with ∆t_k = η_k, which is precisely the backward-Euler discretization of the GF (1). 2.2 CONTINUOUS-TIME CONVERGENCE OF q-RGF AND q-SGF. In this section, we review the conditions needed to ensure finite-time convergence of these flows.
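For a quadratic f(x) = ½ xᵀAx the prox in (9) has a closed form, which makes the backward-Euler identity (10) easy to verify numerically. An illustrative sketch (the matrix A and step size η are arbitrary choices of ours):

```python
import numpy as np

# PPA on f(x) = 0.5 x^T A x: the prox step (9) has the closed form
# x_{k+1} = (I + eta*A)^{-1} x_k, which is exactly the backward-Euler
# step x_{k+1} = x_k - eta * grad f(x_{k+1}) of the gradient flow, Eq. (10).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
eta = 0.5
n = A.shape[0]
step = np.linalg.inv(np.eye(n) + eta * A)

x = np.array([2.0, -1.0])
traj = [x]
for _ in range(30):
    x = step @ x
    traj.append(x)

# Backward-Euler identity at the last step: x_new = x_old - eta * A @ x_new.
x_old, x_new = traj[-2], traj[-1]
residual = np.linalg.norm(x_new - (x_old - eta * A @ x_new))
```

Unlike forward Euler, this implicit step is stable for any η > 0 on this problem, which is the classical advantage of the proximal point view.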
Here the hyperparameter c > 0 in equation 2 and equation 3 will not be explicitly denoted in F_{q−RGF}, F_{q−SGF}. Next, borrowing terminology from Wilson et al. (2019), we define the notion of gradient dominance of order p as follows. Assumption 1 (Gradient Dominance of Order p). For a continuously differentiable function f, we assume that f is µ-gradient dominated of order p ∈ (1, ∞] (µ > 0), i.e., ((p − 1)/p)‖∇f(x)‖_2^{p/(p−1)} ≥ µ^{1/(p−1)}(f(x) − f(x⋆)), ∀x ∈ R^n, (11) where x⋆ ∈ argmin_{x∈R^n} f(x) is the minimizer; we also denote the optimal value f⋆ := f(x⋆). Remark 1. It can be proved that continuously differentiable strongly convex functions are gradient dominated of order p = 2. In fact, gradient dominance is usually defined exclusively for order p = 2, often referred to as the Polyak-Łojasiewicz (PL) inequality, which was introduced by Polyak (1963) to relax the (strong) convexity assumption commonly used to show convergence of the GD algorithm (7). The PL inequality can also be used to relax convexity assumptions of similar gradient and proximal-gradient methods (Karimi et al., 2016; Attouch & Bolte, 2009). Furthermore, if f is gradient dominated (of any order) w.r.t. x⋆, then x⋆ is an isolated stationary point of f. Our adopted generalized notion of gradient dominance is strongly tied to the Łojasiewicz gradient inequality from real analytic geometry, established by Łojasiewicz (1963; 1965)¹ independently of and simultaneously with Polyak (1963), and generalizing the PL inequality. More precisely, this inequality is typically written as: for some C > 0 and θ ∈ (1/2, 1], ‖∇f(x)‖_2 ≥ C · |f(x) − f⋆|^θ (12) holds for every x ∈ R^n in a small enough open neighborhood of the stationary point x = x⋆. This inequality is guaranteed for analytic functions (Łojasiewicz, 1965). More precisely, when x⋆
is a local minimizer of f, the aforementioned relationship is explicitly given by C = (p/(p − 1))^{(p−1)/p} µ^{1/p}, θ = (p − 1)/p. (13) Therefore, analytic functions are always gradient dominated. However, while analytic functions are always smooth, smoothness is not required to attain gradient dominance. It has also recently been shown that, in reinforcement learning, value functions under softmax parameterization satisfy the above condition (Mei et al., 2020; 2021), which further motivates our setting. The following result from Romero & Benosman (2020) summarizes the finite-time convergence of q-RGF (equation 2) and q-SGF (equation 3) in the continuous-time sense, and motivates the main topic of this paper. Theorem 1 (Romero & Benosman (2020)). Suppose that f : R^n → R is continuously differentiable and µ-gradient dominated of order p ∈ (1, ∞) near a strict local minimizer x⋆ ∈ R^n. Let c > 0 and q ∈ (p, ∞]. Then, any maximal solution x(·), in the sense of Filippov, to the q-RGF (2) or q-SGF (3) will converge in finite time to x⋆, provided that ‖x(0) − x⋆‖_2 > 0 is sufficiently small. More precisely, lim_{t→t⋆} x(t) = x⋆, where the convergence time t⋆ < ∞ may depend on which flow is used, but in both cases is upper bounded by t⋆ ≤ ‖∇f(x_0)‖_2^{1/θ − 1/θ′} / (c C^{1/θ} (1 − θ/θ′)), (14) where x_0 = x(0), C = (p/(p−1))^{(p−1)/p} µ^{1/p}, θ = (p−1)/p, and θ′ = (q−1)/q. In particular, given any compact and positively invariant subset S ⊂ D, both flows converge in finite time with the aforementioned convergence-time upper bound (which can be tightened by replacing D with S) for any x_0 ∈ S. Furthermore, if D = R^n, then we have global finite-time convergence, i.e. finite-time convergence of any maximal solution (in the sense of Filippov) x(·) with arbitrary x_0 ∈ R^n.
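As a sanity check, the convergence-time bound (14) with the constants from (13) can be evaluated numerically. The values of p, q, µ, c and ‖∇f(x₀)‖₂ below are illustrative assumptions chosen only to exercise the formula.

```python
# Evaluate the bound (14) for illustrative parameter values.
p, q, mu, c = 2.0, 3.0, 1.0, 1.0
theta = (p - 1) / p        # = 1/2
theta_p = (q - 1) / q      # θ' = 2/3
C = (p / (p - 1)) ** ((p - 1) / p) * mu ** (1 / p)  # constant from (13)

grad_norm_x0 = 2.0  # stands in for ||∇f(x0)||_2

t_star_bound = grad_norm_x0 ** (1 / theta - 1 / theta_p) / (
    c * C ** (1 / theta) * (1 - theta / theta_p)
)
print(t_star_bound)  # ≈ 2.828
```

Note that θ < θ′ whenever p < q, so the factor (1 − θ/θ′) in the denominator is positive, and the bound is finite, consistent with the finite-time convergence claim.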
In essence, the analysis introduced in Romero & Benosman (2020) consists of leveraging gradient dominance to show that the energy function E(t) := f(x(t)) − f⋆ satisfies the Lyapunov-like differential inequality Ė(t) = O(E(t)^α) for some α < 1. The detailed proof is recalled in Appendix C for completeness. ¹For more modern treatments in English, see Łojasiewicz & Zurro (1999); Bolte et al. (2007). | This paper studies the convergence rates of the two first-order methods, named $q$-RGF and $q$-SGF. These are constructed by forward Euler discretizing the $q$-rescaled gradient flow ($q$-RGF) [Wibisono et al., 2016] and $q$-signed GF ($q$-SGF) [Romero-Benosman, 2020], respectively. These gradient flows are shown to converge in finite time in [Romero-Benosman, 2020], under the gradient dominant condition of order $q$. The authors show that their forward Euler discretized versions have linear rates, under an additional Lipschitz smoothness of order $q$. The paper also considers their stochastic variants. Numerical experiments on toy examples and a practical example illustrate that the proposed method might have a practical advantage over existing methods such as GD and ADAM. | SP:62d9ff0ab002c3a5d47b3cfcf2a645336d3aad2d
First-Order Optimization Inspired from Finite-Time Convergent Flows | 1 INTRODUCTION. Consider the unconstrained minimization problem for a given cost function f : R^n → R. When f is sufficiently regular, the standard algorithm in continuous time (dynamical system) is given by ẋ = F_{GF}(x) := −∇f(x) (1) with ẋ := (d/dt)x(t), known as the gradient flow (GF). Generalizing GF, the q-rescaled GF (q-RGF) of Wibisono et al. (2016), given by ẋ = F_{q−RGF}(x) = −c ∇f(x)/‖∇f(x)‖_2^{(q−2)/(q−1)} (2) with c > 0 and q ∈ (1, ∞], has an asymptotic convergence rate f(x(t)) − f(x⋆) = O(1/t^{q−1}) under mild regularity, for ‖x(0) − x⋆‖ > 0 small enough, where x⋆ ∈ R^n denotes a local minimizer of f. However, it has recently been proven by Romero & Benosman (2020) that the q-RGF, as well as the q-signed GF (q-SGF) ẋ = F_{q−SGF}(x) = −c ‖∇f(x)‖_1^{1/(q−1)} sign(∇f(x)), (3) where sign(·) denotes the sign function (element-wise), are both finite-time convergent (in continuous time), provided that f is gradient dominated of order p ∈ (1, q). In particular, if f is strongly convex, then q-RGF and q-SGF are finite-time convergent for any q ∈ (2, ∞], since f must be gradient dominated of order p = 2. Considering that many algorithms are inspired from continuous flows with convergence guarantees, e.g., Muehlebach & Jordan (2019); Fazlyab et al. (2017a); Shi et al. (2018); Zhang et al. (2018); França et al. (2019b); Wibisono et al. (2016), a natural question arises: what is the convergence rate of the corresponding discrete-time algorithms, which are induced by discretization of finite-time convergent continuous-time flows? 1.1 CONTRIBUTION. In this paper, we investigate the convergence behavior of an Euler discretization of the q-RGF (equation 2) and q-SGF (equation 3). We provide convergence guarantees in terms of closeness of solutions, using results from hybrid dynamical control theory.
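The forward-Euler discretizations of the flows (2) and (3) can be sketched as follows. This is an illustrative implementation, not the paper's tuned algorithm: the test objective, step size, q, and the small ε added to the norm (a numerical safeguard against division by zero near the minimizer) are all assumptions.

```python
import numpy as np

def q_rgf_step(x, grad, eta, c=1.0, q=3.0):
    """One forward-Euler step of the q-rescaled gradient flow (2)."""
    g = grad(x)
    return x - eta * c * g / (np.linalg.norm(g) ** ((q - 2) / (q - 1)) + 1e-12)

def q_sgf_step(x, grad, eta, c=1.0, q=3.0):
    """One forward-Euler step of the q-signed gradient flow (3)."""
    g = grad(x)
    return x - eta * c * np.linalg.norm(g, 1) ** (1 / (q - 1)) * np.sign(g)

# Strongly convex test problem f(x) = 0.5*||x - b||^2, hence gradient
# dominated of order p = 2 < q, as Theorem 1 requires.
b = np.array([1.0, -1.0])
grad = lambda x: x - b

x = np.zeros(2)
y = np.zeros(2)
for _ in range(500):
    x = q_rgf_step(x, grad, eta=0.05)
    y = q_sgf_step(y, grad, eta=0.05)
print(x, y)  # both hover near the minimizer b, up to discretization error
```

With a fixed step size the iterates oscillate in a small neighborhood of x⋆ rather than converging exactly, which is precisely the discretization behavior the paper analyzes.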
Furthermore, we provide iteration/sample complexity upper bounds for both the general (deterministic) and stochastic settings. We then test the performance of the proposed algorithms on both synthetic and real-world data in the context of deep learning, namely, on the well-known SVHN dataset. 1.2 RELATED WORK. Propelled by the work of Wang & Elia (2011) and Su et al. (2014), there has been a recent and significant research effort dedicated to analyzing optimization algorithms from the perspective of dynamical systems and control theory, especially in continuous time (Wibisono et al., 2016; Wilson, 2018; Lessard et al., 2016; Fazlyab et al., 2017b; Scieur et al., 2017; Franca et al., 2018; Fazlyab et al., 2018; Taylor et al., 2018; França et al., 2019a; Orvieto & Lucchi, 2019; Romero et al., 2019; Muehlebach & Jordan, 2019). A major focus within this initiative is acceleration, both in terms of trying to gain new insight into more traditional optimization algorithms from this perspective, and in terms of exploiting the interplay between continuous-time systems and their potential discretizations for novel algorithm design (Muehlebach & Jordan, 2019; Fazlyab et al., 2017a; Shi et al., 2018; Zhang et al., 2018; França et al., 2019b; Wilson et al., 2019). Many of these papers also focus on deriving convergence rates based on the discretization of flows designed in the continuous-time domain. Connecting ordinary differential equations (ODEs), and their numerical analysis, with optimization algorithms is a very important topic, which dates back to the 1970s; see Botsaris (1978a; b); Zghier (1981); Snyman (1982; 1983); Brockett (1988); Brown (1989). In Helmke & Moore (1994), the authors studied relationships between linear programming, ODEs, and general matrix theory.
Further, Schropp (1995) and Schropp & Singer (2000) explored several aspects linking nonlinear dynamical systems to gradient-based optimization, including nonlinear constraints. Tools from Lyapunov stability theory are often employed for the analysis, mainly because there already exists a rich body of work within the nonlinear systems and control theory community for this purpose. In particular, previous works typically seek asymptotically Lyapunov-stable gradient-based systems with an equilibrium (stationary point) at an isolated extremum of the given cost function, thus certifying local convergence. Naturally, global asymptotic stability leads to global convergence, though such analysis will typically require the cost function to be strongly convex everywhere. For physical systems, a Lyapunov function can often be constructed from first principles via some physically meaningful measure of energy (e.g., total energy = potential energy + kinetic energy). In optimization, the situation is somewhat similar in the sense that a suitable Lyapunov function may often be constructed by taking simple surrogates of the objective function as candidates. For instance, V(x) := f(x) − f(x⋆) can be a good initial candidate. Further, if f is continuously differentiable and x⋆ is an isolated stationary point, then another alternative is V(x) := ‖∇f(x)‖_2. However, most fundamental and applied research conducted in systems and control regarding Lyapunov stability theory deals exclusively with continuous-time systems. Unfortunately, (dynamical) stability properties are generally not preserved under simple forward-Euler and sample-and-hold discretizations and control laws (Stuart & Humphries, 1998). Furthermore, practical implementations of optimization algorithms in modern digital computers demand discrete time.
Nonetheless, it has been extensively noted that a vast amount of general Lyapunov-based results (perhaps most) appear to have a discrete-time equivalent. 2 PRELIMINARIES. 2.1 OPTIMIZATION ALGORITHMS AS DISCRETE-TIME SYSTEMS. Generalizing (1), (2), and (3), consider a continuous-time algorithm (dynamical system) modeled via an ordinary differential equation (ODE) ẋ = F(x) (4) for t ≥ 0, or, more generally, a differential inclusion ẋ(t) ∈ F(x(t)) (5) for a.e. t ≥ 0 (e.g. for the q = ∞ case), such that x(t) → x⋆ as t → t⋆. In the case of the q-RGF (2) and q-SGF (3) for f gradient dominated of order p ∈ (1, q), we have finite-time convergence, and thus t⋆ = t⋆(x(0)) < ∞. Most of the popular numerical optimization schemes can be written in a state-space form (i.e., recursively) as X_{k+1} = F_d(k, X_k) (6a), x_k = G(X_k) (6b), for k ∈ Z_+ := {0, 1, 2, ...} and a given X_0 ∈ R^m (typically m ≥ n), where F_d : Z_+ × R^m → R^m and G : R^m → R^n. Naturally, (6) can be seen as a discrete-time dynamical system constructed by discretizing (4) in time. In particular, we have x_k ≈ x(t_k), where {0 = t_0 < t_1 < t_2 < ...} denotes a time partition and x(·) a solution to (4) or (5), as appropriate. Therefore, we call X_k and x_k, respectively, the state and output at time step k. Example 1. The standard gradient descent (GD) algorithm x_{k+1} = x_k − η∇f(x_k) (7) with step size (learning rate) η > 0 can be readily written in the form (6) by taking m = n, F_d(x) := x − η∇f(x), and G(x) := x. • If the step sizes are adaptive, i.e. if we replace η by a sequence {η_k} with η_k > 0, then we only need to replace F_d(k, x) := x − η_k∇f(x), provided that {η_k} is not computed using feedback from {x_k} (e.g. through a line search method).
• If we do wish to use a time-varying step size, then we can set m = n + 1, G([x; η]) := x, and F_d([x; η]) := [F_d^{(1)}([x; η]); F_d^{(2)}([x; η])], where F_d^{(1)}([x; η]) := x − η∇f(x) and F_d^{(2)} is a user-defined function that dictates the updates in the step size. In particular, an open-loop adaptive step size {η_k} may be achieved under this scenario, provided that it is possible to write η_{k+1} = F_d^{(2)}(η_k). • If we wish to use individual step sizes for each of the n components of {x_k}, then it suffices to take η_k as an n-dimensional vector (thus m = 2n) and make appropriate changes in F_d and G. In each of these cases, GD can be seen as a forward-Euler discretization of the GF (1), i.e., x_{k+1} = x_k + Δt_k F_{GF}(x_k) (8), with F_{GF} = −∇f and adaptive time step Δt_k := t_{k+1} − t_k chosen as Δt_k = η_k. Example 2. The proximal point algorithm (PPA) x_{k+1} = argmin_{x∈R^n} { f(x) + (1/(2η_k))‖x − x_k‖_2^2 } (9) with step size η_k > 0 (open loop, for simplicity) can also be written in the form (6), by taking m = n, F_d(k, x) := argmin_{x′∈R^n} { f(x′) + (1/(2η_k))‖x′ − x‖_2^2 }, and G(x) := x. Naturally, we need to assume sufficient regularity for F_d(k, x) to exist, and we must design a consistent way to choose F_d(k, x) when multiple minimizers exist in its definition. Alternatively, these conditions must be satisfied, at the very least, at every (k, x) ∈ {(0, x_0), (1, x_1), (2, x_2), ...} for a particular chosen initial x_0 ∈ R^n. By assuming sufficient regularity, we have ∇_x { f(x) + (1/(2η_k))‖x − x_k‖_2^2 } |_{x=x_{k+1}} = 0, and thus ∇f(x_{k+1}) + (1/η_k)(x_{k+1} − x_k) = 0 ⟺ x_{k+1} = x_k + Δt_k F_{GF}(x_{k+1}) (10), with Δt_k = η_k, which is precisely the backward-Euler discretization of the GF (1). 2.2 CONTINUOUS-TIME CONVERGENCE OF q-RGF AND q-SGF. In this section, we review the conditions needed to ensure finite-time convergence of these flows.
Here the hyperparameter c > 0 in equation 2 and equation 3 will not be explicitly denoted in F_{q−RGF}, F_{q−SGF}. Next, borrowing terminology from Wilson et al. (2019), we define the notion of gradient dominance of order p as follows. Assumption 1 (Gradient Dominance of Order p). For a continuously differentiable function f, we assume that f is µ-gradient dominated of order p ∈ (1, ∞] (µ > 0), i.e., ((p − 1)/p)‖∇f(x)‖_2^{p/(p−1)} ≥ µ^{1/(p−1)}(f(x) − f(x⋆)), ∀x ∈ R^n, (11) where x⋆ ∈ argmin_{x∈R^n} f(x) is the minimizer; we also denote the optimal value f⋆ := f(x⋆). Remark 1. It can be proved that continuously differentiable strongly convex functions are gradient dominated of order p = 2. In fact, gradient dominance is usually defined exclusively for order p = 2, often referred to as the Polyak-Łojasiewicz (PL) inequality, which was introduced by Polyak (1963) to relax the (strong) convexity assumption commonly used to show convergence of the GD algorithm (7). The PL inequality can also be used to relax convexity assumptions of similar gradient and proximal-gradient methods (Karimi et al., 2016; Attouch & Bolte, 2009). Furthermore, if f is gradient dominated (of any order) w.r.t. x⋆, then x⋆ is an isolated stationary point of f. Our adopted generalized notion of gradient dominance is strongly tied to the Łojasiewicz gradient inequality from real analytic geometry, established by Łojasiewicz (1963; 1965)¹ independently of and simultaneously with Polyak (1963), and generalizing the PL inequality. More precisely, this inequality is typically written as: for some C > 0 and θ ∈ (1/2, 1], ‖∇f(x)‖_2 ≥ C · |f(x) − f⋆|^θ (12) holds for every x ∈ R^n in a small enough open neighborhood of the stationary point x = x⋆. This inequality is guaranteed for analytic functions (Łojasiewicz, 1965). More precisely, when x⋆
is a local minimizer of f, the aforementioned relationship is explicitly given by C = (p/(p − 1))^{(p−1)/p} µ^{1/p}, θ = (p − 1)/p. (13) Therefore, analytic functions are always gradient dominated. However, while analytic functions are always smooth, smoothness is not required to attain gradient dominance. It has also recently been shown that, in reinforcement learning, value functions under softmax parameterization satisfy the above condition (Mei et al., 2020; 2021), which further motivates our setting. The following result from Romero & Benosman (2020) summarizes the finite-time convergence of q-RGF (equation 2) and q-SGF (equation 3) in the continuous-time sense, and motivates the main topic of this paper. Theorem 1 (Romero & Benosman (2020)). Suppose that f : R^n → R is continuously differentiable and µ-gradient dominated of order p ∈ (1, ∞) near a strict local minimizer x⋆ ∈ R^n. Let c > 0 and q ∈ (p, ∞]. Then, any maximal solution x(·), in the sense of Filippov, to the q-RGF (2) or q-SGF (3) will converge in finite time to x⋆, provided that ‖x(0) − x⋆‖_2 > 0 is sufficiently small. More precisely, lim_{t→t⋆} x(t) = x⋆, where the convergence time t⋆ < ∞ may depend on which flow is used, but in both cases is upper bounded by t⋆ ≤ ‖∇f(x_0)‖_2^{1/θ − 1/θ′} / (c C^{1/θ} (1 − θ/θ′)), (14) where x_0 = x(0), C = (p/(p−1))^{(p−1)/p} µ^{1/p}, θ = (p−1)/p, and θ′ = (q−1)/q. In particular, given any compact and positively invariant subset S ⊂ D, both flows converge in finite time with the aforementioned convergence-time upper bound (which can be tightened by replacing D with S) for any x_0 ∈ S. Furthermore, if D = R^n, then we have global finite-time convergence, i.e. finite-time convergence of any maximal solution (in the sense of Filippov) x(·) with arbitrary x_0 ∈ R^n.
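The gradient-dominance condition (11) with p = 2 (the PL inequality) can be checked numerically for a strongly convex quadratic, where the PL constant is the smallest eigenvalue of the Hessian. The matrix A below is an illustrative choice, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.5], [0.5, 1.0]])  # symmetric positive-definite, illustrative
mu = np.linalg.eigvalsh(A).min()        # PL constant for this quadratic

f = lambda x: 0.5 * x @ A @ x           # minimizer x* = 0, so f* = 0
grad_f = lambda x: A @ x

p = 2  # order p = 2 in (11) recovers the PL inequality
holds = True
for _ in range(1000):
    x = rng.normal(size=2)
    lhs = (p - 1) / p * np.linalg.norm(grad_f(x)) ** (p / (p - 1))
    rhs = mu ** (1 / (p - 1)) * (f(x) - 0.0)
    holds = holds and bool(lhs >= rhs - 1e-9)
print(holds)  # True: (11) holds at every sampled point
```

For this f the inequality reduces to xᵀA²x ≥ λ_min(A)·xᵀAx, which is immediate in the eigenbasis of A, so the numerical check simply confirms Remark 1 on a concrete instance.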
In essence, the analysis introduced in Romero & Benosman (2020) consists of leveraging gradient dominance to show that the energy function E(t) := f(x(t)) − f⋆ satisfies the Lyapunov-like differential inequality Ė(t) = O(E(t)^α) for some α < 1. The detailed proof is recalled in Appendix C for completeness. ¹For more modern treatments in English, see Łojasiewicz & Zurro (1999); Bolte et al. (2007). | This paper considers the analysis of two discrete-time schemes derived from gradient flows, named q-RGF and q-SGF. The topic fits more generally into continuous-time perspectives for optimization and the relations between ODE theory and optimization. This is an interesting direction with many interesting promises. | SP:62d9ff0ab002c3a5d47b3cfcf2a645336d3aad2d
Learning a subspace of policies for online adaptation in Reinforcement Learning | 1 INTRODUCTION. In recent years, Deep Reinforcement Learning (RL) has succeeded at solving complex tasks, from defeating humans in board games (Silver et al., 2017) to complex control problems (Peng et al., 2017; Schulman et al., 2017). It relies on different learning algorithms (e.g., A2C (Mnih et al., 2016), PPO (Schulman et al., 2017)). These methods aim at discovering a policy that maximizes the expected (discounted) cumulative reward received by an agent in a particular environment. While existing techniques work quite well in the classical setting, assuming that the environment at train time and the environment at test time are similar is unrealistic in many practical applications. As an example, when learning to drive a car, a student learns to drive using a particular car, and under specific weather conditions. But at test time, we expect the driver to be able to generalize to any new car, new roads, and new weather conditions. It is thus critical to consider the generalization issue, where one of the challenges is to learn a policy that generalizes and adapts itself to unseen environments. Different techniques have been proposed in the literature (Section 6) to automatically adapt the learned policy to the test environment. In the very large majority of works, the model has access to multiple training environments (the meta-RL setting). The training algorithm can therefore identify which variations (or invariants) may occur at test time and how to adapt quickly to similar variations. But this setting may still be unrealistic for concrete applications: for instance, it supposes that the student will learn to drive on multiple cars before getting their driving license.
In this paper, we address a simpler, yet harder to tackle, generalization setting in which the learning algorithm is trained on one single environment and has to perform well on test environments, preventing us from using meta-RL approaches. A natural way to attack this setting is to start by learning a single policy using any RL algorithm, and to fine-tune this training policy at test time over the test environment (see red/blue points in Figure 1a), but this process may be costly in terms of environment interactions. Very recently, the idea of learning a set of diverse yet effective policies (Kumar et al., 2020b; Osa et al., 2021) has emerged as a way to deal with this adaptation setting. The intuition is that, if instead of learning one single policy one learns a set of 'diverse' policies, then there is a chance that at least one of these policies will perform well under new dynamics. The adaptation in that case just consists of selecting the best policy in that set by evaluating each policy over a few episodes (K-shot adaptation). But the way this set of policies is built, and the notion of diversity proposed in these methods, have a few drawbacks: these models increase diversity by using an additional intrinsic reward which encourages the different policies to generate different distributions of states. This objective potentially favors the learning of policies that are sub-optimal at train time. Moreover, these approaches make use of an additional component in the policy architecture (e.g., a discriminator) that may be difficult to tune, particularly considering that, at train time, we do not have access to any test environment and thus cannot rely on validation techniques to tune this extra architecture. Inspired by recent research on mode connectivity (Benton et al., 2021; Kuditipudi et al., 2019) and by (Wortsman et al.
, 2021), which aims to learn a subspace of models in the supervised learning setting, we propose to learn a subspace of policies in the parameter space as a solution to online adaptation in the RL setting (see Figure 1a). Each particular point in this subspace corresponds to specific parameter values, and thus to a particular policy. This subspace is learned by adapting a classical RL algorithm (PPO and A2C in our case, see Section 3.3) such that an infinite continuum of policies is learned, each policy having different parameters. The policies thus capture and process information differently, and react differently to variations of the training environment (see Figure 1b). We validate our approach (Section 5) over a large set of reinforcement learning environments and compare it with other existing approaches. These experiments show that our method is competitive, achieves good results, and, contrary to the baselines, does not require any additional component or hyper-parameter tuning. 2 SETTING. Reinforcement Learning: Let us define a state space S and an action space A. In the RL setting, one has access to a training Markov Decision Process (MDP), denoted M, defined by a transition distribution P(s′|s, a) : S × A × S → R_+, an initial state distribution P^{(i)}(s) : S → R_+ and a reward function r(s, a) : S × A → R. A policy is defined as π_θ(a|s) : S × A → R_+, where θ denotes the parameters of the policy. A trajectory sampled by a policy π_θ in the MDP M is denoted τ ∼ π_θ(M). The objective of an RL algorithm is to find a policy that maximizes the expected cumulative (discounted) reward: θ⋆ = argmax_θ E_{τ∼π_θ(M)}[R(τ)], (1) where R(τ) is the discounted cumulative reward over trajectory τ.
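The discounted cumulative reward R(τ) inside objective (1) can be sketched as follows; the discount factor γ and the reward sequence are illustrative assumptions, since the excerpt leaves them unspecified.

```python
# Discounted cumulative reward R(τ) for a trajectory's reward sequence.
def discounted_return(rewards, gamma=0.99):
    R = 0.0
    for r in reversed(rewards):  # backward accumulation: R_t = r_t + gamma * R_{t+1}
        R = r + gamma * R
    return R

print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1 + 0.5 + 0.25 = 1.75
```

An RL algorithm then estimates the expectation in (1) by averaging this quantity over sampled trajectories.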
Online adaptation: We consider the setting where the policy trained over M will be used over another MDP (denoted M̄) that shares the same state and action spaces as M, but with different dynamics and/or initial state distribution and/or reward function¹. Importantly, M̄ is unknown at train time and cannot be used for model selection, making the tuning of hyper-parameters difficult. Given a trained model, we consider the K-shot adaptation setting where the test phase is decomposed into two stages: a first phase in which the model adapts itself to the new test environment over K episodes, and a second phase in which the adapted model is used to collect the reward. We thus expect the first phase to be as short as possible (a few episodes), corresponding to a fast adaptation to the new environment. Considering that a model π_θ generates a sequence of trajectories τ̄_1, τ̄_2, ..., τ̄_{+∞} over M̄, the performance of such a model is defined as: Perf(π_θ, M̄, K) = lim_{T→∞} (1/T) Σ_{t=1}^{T} R(τ̄_{K+t}), (2) which corresponds to the average performance of the policy π_θ over M̄ after the K episodes used for adapting the policy. Note that we are interested in methods that adapt quickly to a new test environment, and we will consider small values of K in our experiments. In the following, for the sake of simplicity, K will refer to the number of policies evaluated during adaptation, since each policy may be evaluated over more than a single episode when facing stochastic environments. 3 LEARNING SUBSPACES OF POLICIES. Motivation and Idea: To illustrate our idea, let us consider a toy example where the train environment contains states with correlated and redundant features, in such a way that multiple subsets of state features can be used to compute good actions to execute. Traditional RL algorithms will discover one policy π_{θ⋆} that is optimal w.r.t. the environment.
This policy will typically use the state features in a particular way to decide the optimal action at each step. If some features become noisy (at test time) while, unluckily, π_{θ⋆} particularly relies on these noisy features, the performance of the policy will drastically drop. Now, let us consider that, instead of learning just one optimal policy, we also learn a second optimal policy π_{θ⋆′}, but enforcing θ⋆′ to be different from θ⋆. This second policy may tend to make use of various features to compute actions. We thus obtain two policies instead of one, and we have a better chance that at least one of these policies is efficient at test time. Identifying which of these two policies is the best for the test environment (i.e., adaptation) can simply be done by evaluating each policy over a few episodes, keeping the best one. Our model is built on top of this intuition, extending this example to an infinite set of policies and to variable environment dynamics. Inspired by Wortsman et al. (2021), who propose to learn a subspace of models for supervised learning, we study the approach of learning a subspace of policies in the parameter space, and the use of such a model for online adaptation in reinforcement learning. Studying the structure of the parameter space has seen a recent surge of interest through the mode connectivity concept (Benton et al., 2021; Kuditipudi et al., 2019; Wortsman et al., 2021), with good results in generalization, but this direction has not previously been explored in the RL setting. As motivated in the previous paragraph, we expect that, given a variation of the training environment, having access to a subspace of policies that process information differently, instead of a single policy, will facilitate adaptation. As a result, our method is very simple, does not need any extra hyper-parameter tuning, and achieves good performance. 3.1 SUBSPACES OF POLICIES.
Given Θ the space of all possible parameters, a subspace of policies is a subset Θ̄ ⊂ Θ that defines a set of corresponding policies Π̄ = {π_θ}_{θ∈Θ̄}. ¹In the experimental study, one training environment is associated with multiple test environments to analyze the ability to adapt to different variations. Since our objective is to learn such a subspace, we have to rely on a parametric definition of it, and we consider Θ̄ to be a simplex in Θ. Let us define N anchor parameter values θ̄_1, ..., θ̄_N ∈ Θ. We define the Z-space as the set of possible convex weights over the anchor parameters: Z = {z = (z_1, ..., z_N) ∈ [0, 1]^N | Σ_i z_i = 1}. The subspace we aim to learn is defined by: Θ̄ = { Σ_{k=1}^{N} z_k θ̄_k, ∀z ∈ Z }. (3) In other words, we aim to learn a convex hull of N vertices in Θ. Note that policies in this subspace can be obtained by sampling z ∼ p(z) uniformly over Z. The advantages of this approach are: a) the number of parameters of the model can be controlled by choosing the number N of anchor parameters; b) since the policies share parameters (instead of being learned as a set of independent policies), we can expect the learning to be sample efficient. Such a subspace is illustrated in Figure 1a through the "pentagon" (i.e., N = 5), in which the vertices correspond to the anchor parameters and the surface corresponds to all the policies in the built subspace. K-shot adaptation: Given a subspace of policies Θ̄, different methods can be used to find the best policy over the test environment. For instance, it could be done by optimizing the distribution p(z) at test time. In this article, we use the same, yet effective, K-shot adaptation technique as Kumar et al. (2020b) and Osa et al. (2021): we sample K episodes using different policies defined by different values of z that are uniformly spread over Z.
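Sampling a policy from the subspace (3) amounts to drawing z uniformly on the simplex Z and forming the convex combination of the anchor parameters. This is a minimal sketch: the anchor count N, parameter dimension d, and random anchors are illustrative assumptions (real anchors would be trained policy weights).

```python
import numpy as np

rng = np.random.default_rng(0)

N, d = 3, 5                        # N anchors, d policy parameters (illustrative)
anchors = rng.normal(size=(N, d))  # stand-ins for anchor parameters θ̄_1, ..., θ̄_N

def sample_policy_params(anchors, rng):
    """Draw z uniformly over the simplex Z and return θ = Σ_k z_k θ̄_k (eq. 3)."""
    z = rng.dirichlet(np.ones(len(anchors)))  # Dirichlet(1,...,1) is uniform on Z
    return z, z @ anchors

z, theta = sample_policy_params(anchors, rng)
print(z.sum())  # 1.0: z lies on the simplex, so theta is in the convex hull
```

Because every θ is a convex combination of shared anchors, storage grows only linearly in N while the subspace contains a continuum of distinct policies.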
In our example, this means that we evaluate policies uniformly distributed within the pentagon to identify a good test policy (blue star). Note that, when the environment is deterministic, only one episode per value of z needs to be executed to find the best policy, which leads to a very fast adaptation. | The paper studies how to discover a subset of policies that will become useful for fast adaptation to future tasks. The subspace is defined as the convex hull of some anchor policies which are discovered, and are encouraged to perform well with respect to the extrinsic reward while being as different as possible from each other in the parameter space. The quality of the set is then evaluated by its ability to quickly adapt to new tasks. | SP:de2ca9e0d296137c5ddf68db89b5d0e6b1342031
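The K-shot selection described above can be sketched for N = 2 anchors, where Z reduces to a line segment. The `evaluate` function is a hypothetical stand-in for "run the policy defined by z for one episode on the test environment and return its episodic reward"; its peaked shape is purely illustrative.

```python
import numpy as np

def evaluate(z):
    # Toy deterministic score, maximal at z = (0.3, 0.7); in practice this
    # would be an episodic return measured on the test environment.
    return -abs(z[0] - 0.3)

def k_shot_adapt(K):
    """Evaluate K policies with z uniformly spread over the simplex (N = 2)."""
    zs = [np.array([a, 1.0 - a]) for a in np.linspace(0.0, 1.0, K)]
    scores = [evaluate(z) for z in zs]
    return zs[int(np.argmax(scores))]  # keep the best-scoring policy

best_z = k_shot_adapt(K=11)
print(best_z)  # the grid point closest to the toy optimum (0.3, 0.7)
```

For stochastic test environments each candidate z would be averaged over a few episodes before taking the argmax, as the text notes.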
Learning a subspace of policies for online adaptation in Reinforcement Learning | 1 INTRODUCTION. In recent years, Deep Reinforcement Learning (RL) has succeeded at solving complex tasks, from defeating humans in board games (Silver et al., 2017) to complex control problems (Peng et al., 2017; Schulman et al., 2017). It relies on different learning algorithms (e.g., A2C (Mnih et al., 2016), PPO (Schulman et al., 2017)). These methods aim at discovering a policy that maximizes the expected (discounted) cumulative reward received by an agent in a particular environment. While existing techniques work quite well in the classical setting, assuming that the environment at train time and the environment at test time are similar is unrealistic in many practical applications. As an example, when learning to drive a car, a student learns to drive using a particular car, and under specific weather conditions. But at test time, we expect the driver to be able to generalize to any new car, new roads, and new weather conditions. It is thus critical to consider the generalization issue, where one of the challenges is to learn a policy that generalizes and adapts itself to unseen environments. Different techniques have been proposed in the literature (Section 6) to automatically adapt the learned policy to the test environment. In the very large majority of works, the model has access to multiple training environments (the meta-RL setting). The training algorithm can therefore identify which variations (or invariants) may occur at test time and how to adapt quickly to similar variations. But this setting may still be unrealistic for concrete applications: for instance, it supposes that the student will learn to drive on multiple cars before getting their driving license.
In this paper , we address a simpler yet harder-to-tackle generalization setting in which the learning algorithm is trained on a single environment and has to perform well on test environments , preventing us from using meta-RL approaches . A natural way to attack this setting is to start by learning a single policy using any RL algorithm , and to fine-tune this trained policy at test time on the test environment ( see red/blue points in Figure 1a ) , but this process may be costly in terms of environment interactions . Very recently , the idea of learning a set of diverse yet effective policies ( Kumar et al. , 2020b ; Osa et al. , 2021 ) has emerged as a way to deal with this adaptation setting . The intuition is that , if instead of learning one single policy , one learns a set of 'diverse' policies , then there is a chance that at least one of these policies will perform well under a new dynamics . The adaptation in that case simply consists of selecting the best policy in that set by evaluating each policy over a few episodes ( K-shot adaptation ) . But the way this set of policies is built and the notion of diversity proposed in these methods have a few drawbacks : these models increase diversity by using an additional intrinsic reward which encourages the different policies to generate different distributions of states . This objective potentially favors the learning of policies that are sub-optimal at train time . Moreover , these approaches make use of an additional component in the policy architecture ( e.g. , a discriminator ) that may be difficult to tune , particularly considering that , at train time , we do not have access to any test environment and thus cannot rely on validation techniques to tune the extra architecture . Inspired by recent research on mode connectivity ( Benton et al. , 2021 ; Kuditipudi et al. , 2019 ) and by ( Wortsman et al.
, 2021 ) , which aims to learn a subspace of models in the supervised learning setting , we propose to learn a subspace of policies in the parameter space as a solution to online adaptation in the RL setting ( see Figure 1a ) . Each particular point in this subspace corresponds to specific parameter values , and thus to a particular policy . This subspace is learned by adapting a classical RL algorithm ( PPO and A2C in our case , see Section 3.3 ) such that an infinite continuum of policies is learned , each policy having different parameters . The policies thus capture and process information differently , and react differently to variations of the training environment ( see Figure 1b ) . We validate our approach ( Section 5 ) over a large set of reinforcement learning environments and compare it with other existing approaches . These experiments show that our method is competitive , achieves good results , and does not require any additional components or hyper-parameter tuning , contrary to the baselines . 2 SETTING . Reinforcement Learning : Let us define a state space S and an action space A . In the RL setting , one has access to a training Markov Decision Process ( MDP ) denoted M , defined by a transition distribution P ( s′|s , a ) : S × A × S → R+ , an initial state distribution P^(i) ( s ) : S → R+ and a reward function r ( s , a ) : S × A → R . A policy is defined as πθ ( a|s ) : S × A → R+ , where θ denotes the parameters of the policy . A trajectory sampled by a policy πθ given an MDP M is denoted τ ∼ πθ ( M ) . The objective of an RL algorithm is to find a policy that maximizes the expected cumulative ( discounted ) reward : θ∗ = argmax_θ E_{τ∼πθ(M)} [ R ( τ ) ] ( 1 ) where R ( τ ) is the discounted cumulative reward over trajectory τ . 
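The objective in Eq. ( 1 ) can be made concrete with a short sketch ( function names are ours , not from the paper ) : R ( τ ) is the discounted sum of rewards along one trajectory , and the expectation over τ ∼ πθ ( M ) is typically estimated by Monte-Carlo sampling :

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    # R(tau): discounted cumulative reward of a single trajectory.
    return sum(gamma ** t * r for t, r in enumerate(rewards))

def estimate_objective(sample_trajectory, n_episodes=100, gamma=0.99):
    # Monte-Carlo estimate of E_{tau ~ pi_theta(M)}[R(tau)] from Eq. (1):
    # sample trajectories with the current policy and average their returns.
    returns = [discounted_return(sample_trajectory(), gamma)
               for _ in range(n_episodes)]
    return float(np.mean(returns))
```

An RL algorithm then adjusts θ to increase this estimate , e.g. , via policy gradients as in A2C or PPO .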
Online adaptation : We consider the setting where the policy trained over M will be used over another MDP ( denoted M̄ ) that shares the same state and action space as M , but with different dynamics and/or initial state distribution and/or reward function [ 1 ] . Importantly , M̄ is unknown at train time and cannot be used for model selection , making the tuning of hyper-parameters difficult . Given a trained model , we consider the K-shot adaptation setting where the test phase is decomposed into two stages : a first phase in which the model adapts itself to the new test environment over K episodes , and a second phase in which the adapted model is used to collect the reward . We thus expect the first phase to be as short as possible ( a few episodes ) , corresponding to a fast adaptation to the new environment . Let us consider a model πθ that generates a sequence of trajectories τ̄1 , τ̄2 , ... over M̄ ; the performance of such a model is defined as : Perf ( πθ , M̄ , K ) = lim_{T→∞} ( 1/T ) Σ_{t=1}^{T} R ( τ̄_{K+t} ) ( 2 ) which corresponds to the average performance of the policy πθ over M̄ after the K episodes used for adapting the policy . Note that we are interested in methods that adapt quickly to a new test environment , and we will consider small values of K in our experiments . In the following , for the sake of simplicity , K will refer to the number of policies evaluated during adaptation , since each policy may be evaluated over more than a single episode when facing stochastic environments . 3 LEARNING SUBSPACES OF POLICIES . Motivation and Idea : To illustrate our idea , let us consider a toy example where the train environment contains states with correlated and redundant features , in such a way that multiple subsets of state features can be used to compute good actions . Traditional RL algorithms will discover one policy πθ∗ that is optimal w.r.t . the environment . 
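A finite-horizon version of the performance measure in Eq. ( 2 ) simply averages the returns collected after the K adaptation episodes ( a minimal sketch ; the function name is ours ) :

```python
def kshot_performance(episode_returns, K):
    # Perf(pi_theta, M_bar, K): average return over the episodes that
    # follow the K adaptation episodes (finite-T estimate of Eq. (2)).
    post_adaptation = episode_returns[K:]
    return sum(post_adaptation) / len(post_adaptation)
```

For example , with returns [0, 0, 10, 10] and K = 2 , the measure ignores the two adaptation episodes and reports the average of the remaining returns .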
This policy will typically use the state features in a particular way to decide the optimal action at each step . If some features become noisy at test time while , unluckily , πθ∗ particularly relies on these noisy features , the performance of the policy will drastically drop . Now , let us consider that , instead of learning just one optimal policy , we also learn a second optimal policy πθ∗′ , while enforcing θ∗′ to be different from θ∗ . This second policy may tend to make use of different features to compute actions . We thus obtain two policies instead of one , and we have a better chance that at least one of these policies is efficient at test time . Identifying which of these two policies is the best for the test environment ( i.e. , adaptation ) can simply be done by evaluating each policy over a few episodes and keeping the best one . Our model is built on top of this intuition , extending this example to an infinite set of policies and to variable environment dynamics . Inspired by Wortsman et al . ( 2021 ) , who propose to learn a subspace of models for supervised learning , we study the approach of learning a subspace of policies in the parameter space , and the use of such a model for online adaptation in reinforcement learning . Studying the structure of the parameter space has seen a recent surge of interest through the mode connectivity concept ( Benton et al. , 2021 ; Kuditipudi et al. , 2019 ; Wortsman et al. , 2021 ) and has obtained good results in generalization , but it has never been applied in the RL setting . As motivated in the previous paragraph , we expect that , given a variation of the training environment , having access to a subspace of policies that process information differently , instead of a single policy , will facilitate the adaptation . As a result , our method is very simple , does not need any extra hyper-parameter tuning , and achieves good performance . 3.1 SUBSPACES OF POLICIES . 
Given Θ the space of all possible parameters , a subspace of policies is a subset Θ̄ ⊂ Θ that defines a set of corresponding policies Π̄ = { πθ }_{θ∈Θ̄} . [ 1 ] In the experimental study , one training environment is associated with multiple test environments to analyze the ability to adapt to different variations . Since our objective is to learn such a subspace , we have to rely on a parametric definition of the subspace and consider Θ̄ as a simplex in Θ . Let us define N anchor parameter values θ̄1 , ... , θ̄N ∈ Θ . We define the Z-space as the set of possible convex weights of the anchor parameters : Z = { z = ( z1 , ... , zN ) ∈ [ 0 , 1 ]^N | Σ_i z_i = 1 } . The subspace we aim to learn is defined by : Θ̄ = { Σ_{k=1}^{N} z_k θ̄_k , ∀z ∈ Z } ( 3 ) In other words , we aim to learn a convex hull of N vertices in Θ . Note that policies in this subspace can be obtained by sampling z ∼ p ( z ) uniformly over Z . The advantages of this approach are : a ) the number of parameters of the model can be controlled by choosing the number N of anchor parameters ; b ) since the policies share parameters ( instead of being a set of independent policies ) , we can expect the learning to be sample efficient . Such a subspace is illustrated in Figure 1a through the `` pentagon '' ( i.e. , N = 5 ) , in which the angles correspond to the anchor parameters and the surface corresponds to all the policies in the built subspace . K-shot adaptation : Given a subspace of policies Θ̄ , different methods can be used to find the best policy for the test environment . For instance , it could be done by optimizing the distribution p ( z ) at test time . In this article , we use the same simple yet effective K-shot adaptation technique as Kumar et al . ( 2020b ) and Osa et al . ( 2021 ) : we sample K episodes using different policies defined by different values of z that are uniformly spread over Z . 
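The construction of Eq. ( 3 ) and the K-shot procedure can be sketched in a few lines : a policy is obtained as a convex combination of the N anchor parameter vectors , and adaptation evaluates K values of z over the simplex Z and keeps the best . ( Illustrative sketch with our own names ; here z is drawn via Dirichlet(1 , ... , 1) , which samples uniformly over Z , whereas the paper spreads the K values uniformly over Z . )

```python
import numpy as np

def combine_anchors(anchors, z):
    # theta = sum_k z_k * theta_bar_k (Eq. (3)); anchors has shape (N, d).
    return z @ anchors

def kshot_adapt(anchors, evaluate, K, seed=0):
    # Draw K weight vectors z over the simplex Z, evaluate the policy
    # defined by each convex combination, and keep the best weights.
    rng = np.random.default_rng(seed)
    candidates = rng.dirichlet(np.ones(len(anchors)), size=K)
    return max(candidates,
               key=lambda z: evaluate(combine_anchors(anchors, z)))
```

Here `evaluate` would run one or a few episodes of the policy with parameters θ on the test environment and return the observed return .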
In our example , it means that we evaluate policies uniformly distributed within the pentagon to identify a good test policy ( blue star ) . Note that , when the environment is deterministic , only one episode per value of z needs to be executed to find the best policy , which leads to a very fast adaptation . | The authors propose a method for rapid adaptation from a single training task to an unseen test-task in reinforcement learning. The method optimizes to find a convex subspace, specifically a line, of parameters that minimize the objective in expectation over a uniform distribution. For adaptation, the authors propose to find a convex combination that performs well on the test task. | SP:de2ca9e0d296137c5ddf68db89b5d0e6b1342031 |
AutoOED: Automated Optimal Experimental Design Platform with Data- and Time-Efficient Multi-Objective Optimization | 1 INTRODUCTION . Optimal Experimental Design ( OED ) problems in science and engineering often require satisfying several conflicting objectives simultaneously . These problems amount to solving a multi-objective optimization problem and discovering a set of optimal solutions , called Pareto optimal . Furthermore , the objectives are typically black-box functions whose evaluations are time-consuming and costly ( e.g. , measuring real experiments or running expensive numerical simulations ) . Thus , the budget that determines the number of experiments can be heavily constrained . Hence , an efficient strategy for guiding the experimental design towards Pareto optimal solutions is necessary . Recent advances in machine learning have facilitated the optimization of various design problems , including chemical design ( Griffiths & Hernández-Lobato , 2017 ) , material design ( Zhang et al. , 2020 ) , resource allocation ( Wu et al. , 2013 ) , environmental monitoring ( Marchant & Ramos , 2012 ) , recommender systems ( Chapelle & Li , 2011 ) and robotics ( Martinez-Cantin et al. , 2009 ) . A machine learning concept that enables automatic guidance of the design process is Bayesian optimization ( Shahriari et al. , 2016 ) . This concept is extensively studied in the machine learning community from a theoretical perspective and in the single-objective case . However , its practical applications to multi-objective problems are still not widely explored due to the lack of easy-to-use and open-source software . In this paper , we present AutoOED [ 1 ] , an open-source platform for efficiently optimizing multi-objective problems with a restricted budget of experiments . The key features of AutoOED include : • Data-efficient experimentation : AutoOED employs state-of-the-art MOBO strategies that rapidly advance the Pareto front with a small set of evaluated experiments . 
• Time-efficient experimentation : AutoOED supports both synchronous and asynchronous batch optimization to accelerate the optimization . We propose a novel and robust asynchronous optimization strategy named Believer-Penalizer ( BP ) , which is instrumental when multiple workers run experiments in parallel but their evaluations vary drastically in time . [ 1 ] Code , screenshots , detailed documentation and tutorials can be found at https://sites.google.com/view/autooed . • Intuitive GUI : An easy-to-use graphical user interface ( GUI ) is provided to directly visualize and guide the optimization progress and to facilitate operation for users with little or no experience with coding , optimization , or machine learning . • Modular structure : A highly modular Python codebase enables easy extensions and replacements of MOBO algorithm components . AutoOED can serve as a testbed for machine learning researchers to easily develop and evaluate their own MOBO algorithms . • Automation of experimental design : The platform is designed for straightforward integration into a fully automated experimental design pipeline , as long as the experiment evaluations ( either simulated or physical ) can be controlled via computer programs . 2 RELATED WORK . Bayesian optimal experimental design Optimal experimental design ( OED ) is the process of designing a sequence of experiments to maximize specific objectives in a data- or time-efficient manner . Therefore , Bayesian optimization ( BO ) ( Shahriari et al. , 2016 ) is usually applied to find optimal solutions with a minimal number of evaluations . Essentially , BO relies on surrogate models such as the Gaussian process ( GP ) to accurately model the experimental process and proposes new experimental designs based on acquisition functions that trade off exploration and exploitation . 
Popular choices of acquisition functions include Expected Improvement ( EI ) ( Močkus , 1975 ) , Upper Confidence Bound ( UCB ) ( Srinivas et al. , 2010 ) , and Thompson Sampling ( TS ) ( Thompson , 1933 ) . Bayesian OED has found success in a wide range of applications ( Greenhill et al. , 2020 ) and is the main methodology of AutoOED . To further speed up optimization when evaluations can be carried out in parallel , asynchronous BO approaches have been developed ( Ginsbourger et al. , 2010 ; Kandasamy et al. , 2018 ; Alvi et al. , 2019 ) . However , all of the previous literature focuses on single-objective BO rather than the multi-objective scenario . In this paper , we extend several single-objective asynchronous BO methods to multi-objective versions and propose a novel asynchronous method named Believer-Penalizer ( BP ) with the most stable performance on multi-objective benchmark problems . Multi-objective Bayesian optimization MOBO is developed to optimize for a set of Pareto optimal solutions while minimizing the number of experimental evaluations . Early approaches solve multi-objective problems by scalarizing them into single-objective ones using random weights ( Knowles , 2006 ) . Instead of scalarization , some acquisition functions are proposed to compute a single objective , e.g. , entropy-based or hypervolume-based ( Russo & Van Roy , 2014 ; Belakaria et al. , 2019 ; Emmerich & Klinkenberg , 2008 ; Daulton et al. , 2020a ) . Alternatively , MOBO can be solved by defining a separate acquisition function per objective , optimizing using cheap multi-objective solvers ( usually evolutionary algorithms such as NSGA-II ( Deb et al. , 2002 ) ) , and finally selecting one or a batch of designs to evaluate next ( Bradford et al. , 2018 ; Belakaria & Deshwal , 2020 ; Konakovic Lukovic et al. , 2020 ) . AutoOED implements many of these methods in a modular way and allows easily changing modules in a unified MOBO framework ( see Section 3.2 ) . 
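As context for the asynchronous setting : the Kriging Believer heuristic of Ginsbourger et al . ( 2010 ) , one of the two ingredients that the proposed BP strategy later chooses between , handles still-running ( pending ) evaluations by pretending they returned the surrogate 's posterior-mean prediction , so that the next acquisition step avoids re-proposing the same region . A hedged sketch with generic function names ( not the AutoOED API ) :

```python
def fantasize_pending(fit_model, dataset, pending_xs):
    # Kriging Believer: augment the dataset with "believed" outcomes for
    # pending points, each set to the current model's mean prediction.
    # fit_model(dataset) returns a callable predicting the posterior mean.
    fantasized = list(dataset)
    for x in pending_xs:
        predict_mean = fit_model(fantasized)
        fantasized.append((x, predict_mean(x)))
    return fantasized
```

The acquisition function is then optimized against the fantasized dataset ; Local Penalization instead down-weights the acquisition near pending points .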
Open-source Bayesian optimization platform There are many existing Python libraries for Bayesian optimization , including Spearmint ( Snoek et al. , 2012 ) , HyperOpt ( Bergstra et al. , 2013 ) , GPyOpt ( authors , 2016 ) , GPflowOpt ( Knudde et al. , 2017 ) , Dragonfly ( Kandasamy et al. , 2020 ) , AX ( Bakshy et al. , 2018 ) , Optuna ( Akiba et al. , 2019 ) , HyperMapper ( Nardi et al. , 2019 ) , BoTorch ( Balandat et al. , 2020a ) , SMAC3 ( Lindauer et al. , 2021 ) and OpenBox ( Li et al. , 2021 ) . These Python libraries are designed for general applications and support different algorithmic features . The feature comparison between AutoOED and these libraries is shown in Table 1 and is further discussed in Section 5.2 . However , they all target users who are experts in coding and lack an intuitive GUI . In contrast , there are also software platforms that provide an intuitive user interface and visualization to specific domain experts , but these platforms cannot be used for other general applications ; for example , Auto-QChem ( Shields et al. , 2021 ) for chemical synthesis and GeoBO ( Haan , 2021 ) for geoscience . Combining powerful Bayesian optimization algorithms and an intuitive GUI , AutoOED is designed to be a general optimization platform that can be easily used by anyone for applications in any field . [ 2 ] The comparison is based on AutoOED 's core features ; `` ∼ '' means the package only supports a single multi-objective algorithm rather than a modular multi-objective framework with several state-of-the-art algorithms . 3 DATA-EFFICIENT MULTI-OBJECTIVE OPTIMIZATION . 3.1 PROBLEM FORMULATION . [ Figure : the design space X ⊂ Rd is mapped by f to the performance space Rm ; the Pareto set in the design space corresponds to the Pareto front in the performance space ]
Optimal experimental design problems involving multiple conflicting objectives can be formulated as a multi-objective optimization over design parameters that should be solved in as data- and time-efficient a manner as possible . More formally , we consider an optimization problem over a set of design variables X ⊂ R^d , called the design space . The goal is to simultaneously minimize m ≥ 2 objective functions f1 , ... , fm : X → R . Representing the vector of all objectives as f ( x ) = ( f1 ( x ) , ... , fm ( x ) ) , the performance space is then the m-dimensional image f ( X ) ⊂ R^m . Conflicting objectives result in a set of optimal solutions rather than a single best solution . These optimal solutions are referred to as the Pareto set Ps ⊆ X in the design space , and the corresponding images in the performance space form the Pareto front Pf = f ( Ps ) ⊂ R^m . To measure the quality of an approximated Pareto front , hypervolume ( Zitzler & Thiele , 1999 ) is the most commonly used metric in multi-objective optimization ( Riquelme et al. , 2015 ) . Let Pf be a Pareto front approximation in an m-dimensional performance space and r ∈ R^m a given reference point ; the hypervolume H ( Pf ) is defined as H ( Pf ) = ∫_{R^m} 1_{H(Pf)} ( z ) dz , where H ( Pf ) = { z ∈ R^m | ∃ 1 ≤ i ≤ |Pf| : Pf ( i ) ⪯ z ⪯ r } . Here ⪯ is the objective-dominance relation , and 1_{H(Pf)} is the indicator function that equals 1 if z ∈ H ( Pf ) and 0 otherwise . The higher the hypervolume , the better Pf approximates the true Pareto front . 3.2 MODULAR ALGORITHM FRAMEWORK . 
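For m = 2 ( minimization ) , the hypervolume defined above reduces to the area between the front and the reference point , which can be computed with a simple sweep ( a minimal sketch for intuition ; production MOBO codes use more general algorithms for m > 2 ) :

```python
def hypervolume_2d(front, ref):
    # Area dominated by a 2-D Pareto front (minimization), bounded by `ref`.
    # Keep points that dominate the reference point, sweep by increasing f1.
    pts = sorted(p for p in front if p[0] <= ref[0] and p[1] <= ref[1])
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:  # point contributes a new non-dominated slice
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For instance , the front { ( 1 , 3 ) , ( 2 , 2 ) , ( 3 , 1 ) } with reference point ( 4 , 4 ) dominates an area of 6 .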
Multi-objective Bayesian optimization ( MOBO ) is a data-driven approach that attempts to learn the black-box objective functions f ( x ) from available data and find Pareto optimal solutions in an iterative and data-efficient manner . MOBO typically consists of four core modules : ( i ) an inexpensive surrogate model for the black-box objective functions ; ( ii ) an acquisition function that defines sampling from the surrogate model and trades off exploration and exploitation of the design space ; ( iii ) a cheap multi-objective solver to approximate the Pareto set and front ; ( iv ) a selection strategy that proposes a single experiment or a batch of experiments to evaluate next . These four modules ( see Figure 1 ) are implemented as core and independent building blocks of AutoOED , making it highly modular and easy to develop new algorithms and modules . The whole pipeline starts from a given small dataset or a set of randomly evaluated samples , then works iteratively by proposing new design samples and evaluating them until the stopping criterion is met . For each module in this framework , AutoOED supports the following choices : • Surrogate model : Gaussian process , neural network ( multi-layer perceptron ) , Bayesian neural network ( DNGO ( Snoek et al.
, 2015 ) ) • Acquisition function : Expected Improvement , Probability of Improvement , Upper Confidence Bound , Thompson Sampling , identity function • Multi-objective solver : NSGA-II , MOEA/D , ParetoFrontDiscovery ( Schulz et al. , 2018 ) • Selection : Hypervolume improvement , uncertainty , random , etc . • Stopping criterion : Time , number of evaluations , hypervolume convergence

class TSEMO(MOBO):
    '''[Bradford et al. 2018]'''
    spec = {'surrogate': 'gp', 'acquisition': 'ts', 'solver': 'nsga2', 'selection': 'hvi'}

class USEMO_EI(MOBO):
    '''[Belakaria and Deshwal 2020]'''
    spec = {'surrogate': 'gp', 'acquisition': 'ei', 'solver': 'nsga2', 'selection': 'uncertainty'}

class DGEMO(MOBO):
    '''[Lukovic et al. 2020]'''
    spec = {'surrogate': 'gp', 'acquisition': 'identity', 'solver': 'discovery', 'selection': 'direct'}

Code Example 1 : Creating algorithms in AutoOED by simply specifying module combinations . Based on this framework , we implement several popular and state-of-the-art MOBO methods , including ParEGO ( Knowles , 2006 ) , MOEA/D-EGO ( Zhang et al. , 2009 ) , TSEMO ( Bradford et al. , 2018 ) , USeMO ( Belakaria & Deshwal , 2020 ) , and DGEMO ( Konakovic Lukovic et al. , 2020 ) . To the best of our knowledge , DGEMO exhibits state-of-the-art performance for data-efficient multi-objective problems with batch evaluations . With the necessary modules of the MOBO framework implemented , algorithms can easily be composed by specifying the choice of each module and inheriting the base class MOBO ( see Code Example 1 ) . Supported choices for each module can be found in our documentation . Users can select an algorithm from this library that best fits the characteristics of their physical system or optimization goals , or they can easily create new algorithms by specifying novel combinations of existing modules in just a few lines of code . 
| The paper presents a package for black-box optimization that is specifically designed for the optimization of experimental designs . To this end , the authors build upon multi-objective Bayesian optimization , which allows obtaining good points within a few function evaluations and also yields a Pareto front of non-dominated points . To achieve efficient asynchronous parallelization , the authors propose Believer-Penalizer for choosing the next point while others are still being evaluated . The main idea is to choose either Kriging Believer ( KB ) or Local Penalization ( LP ) based on a threshold on the uncertainty in the posterior distribution of the probabilistic surrogate model . In the experiments , the authors show that their package often achieves fairly good performance . | SP:3f482ef803d5de09018a4b1f8e120320ef1622d0 |
AutoOED: Automated Optimal Experimental Design Platform with Data- and Time-Efficient Multi-Objective Optimization | 1 INTRODUCTION . Optimal Experimental Design ( OED ) problems in science and engineering often require satisfying several conflicting objectives simultaneously . These problems aim to solve a multi-objective optimization system and discover a set of optimal solutions , called Pareto optimal . Furthermore , the objectives are typically black-box functions whose evaluations are time-consuming and costly ( e.g. , measuring real experiments or running expensive numerical simulations ) . Thus , the budget that determines the number of experiments can be heavily constrained . Hence , an efficient strategy for guiding the experimental design towards Pareto optimal solutions is necessary . Recent advances in machine learning have facilitated optimization of various design problems , including chemical design ( Griffiths & Hernández-Lobato , 2017 ) , material design ( Zhang et al. , 2020 ) , resource allocation ( Wu et al. , 2013 ) , environmental monitoring ( Marchant & Ramos , 2012 ) , recommender systems ( Chapelle & Li , 2011 ) and robotics ( Martinez-Cantin et al. , 2009 ) . A machine learning concept that enables automatic guidance of the design process is Bayesian optimization ( Shahriari et al. , 2016 ) . This concept is extensively studied in the machine learning community from a theoretical aspect and in the single-objective case . However , its practical applications in multi-objective problems are still not widely explored due to the lack of easy-to-use and open-source software . In this paper , we present AutoOED1 , an open-source platform for efficiently optimizing multiobjective problems with a restricted budget of experiments . The key features of AutoOED include : • Data-efficient experimentation : AutoOED employs state-of-the-art MOBO strategies that rapidly advances the Pareto front with a small set of evaluated experiments . 
• Time-efficient experimentation : AutoOED supports both synchronous and asynchronous batch optimization to accelerate the optimization . We propose a novel and robust asynchronous optimization strategy named Believer-Penalizer ( BP ) , which is instrumental when multiple workers run experiments in parallel , but their evaluations drastically vary in time . 1Code , screenshots , detailed documentation and tutorials can be found at https : //sites.google . com/view/autooed . • Intuitive GUI : An easy-to-use graphical user interface ( GUI ) is provided to directly visualize and guide the optimization progress and facilitate the operation for users with little or no experience with coding , optimization , or machine learning . • Modular structure : A highly modular Python codebase enables easy extensions and replacements of MOBO algorithm components . AutoOED can serve as a testbed for machine learning researchers to easily develop and evaluate their own MOBO algorithms . • Automation of experimental design : The platform is designed for straightforward integration into a fully automated experimental design pipeline as long as the experiment evaluations ( either in simulation or physical ) can be controlled via computer programs . 2 RELATED WORK . Bayesian optimal experimental design Optimal experimental design ( OED ) is the process of designing the sequence of experiments to maximize specific objectives in a data- or time-efficient manner . Therefore , Bayesian optimization ( BO ) ( Shahriari et al. , 2016 ) is usually applied to find optimal solutions with a minimal number of evaluations . Essentially , BO relies on surrogate models like the Gaussian process ( GP ) to accurately model the experimental process and proposes new experimental designs based on defined acquisition functions that trade-off between exploration and exploitation . 
Popular choices of the acquisition functions include Expected Improvement ( EI ) ( Močkus , 1975 ) , Upper Confidence Bound ( UCB ) ( Srinivas et al. , 2010 ) , Thompson Sampling ( TS ) ( Thompson , 1933 ) . Bayesian OED has found success in a wide range of applications ( Greenhill et al. , 2020 ) and is the main methodology of AutoOED . To further speed up when evaluations can be carried out in parallel , asynchronous BO approaches have been developed ( Ginsbourger et al. , 2010 ; Kandasamy et al. , 2018 ; Alvi et al. , 2019 ) . However , all of the previous literature focuses on single-objective BO rather than the multi-objective scenario . In this paper , we extend several single-objective asynchronous BO methods to multi-objective versions and propose a novel asynchronous method named Believer-Penalizer ( BP ) with the stablest performance on multi-objective benchmark problems . Multi-objective Bayesian optimization MOBO is developed to optimize for a set of Pareto optimal solutions while minimizing the number of experimental evaluations . Early approaches solve multi-objective problems by scalarizing them into single-objective ones using random weights ( Knowles , 2006 ) . Instead of scalarization , some acquisition functions are proposed to compute a single objective , e.g. , entropy-based or hypervolume-based ( Russo & Van Roy , 2014 ; Belakaria et al. , 2019 ; Emmerich & Klinkenberg , 2008 ; Daulton et al. , 2020a ) . Alternatively , MOBO can be solved by defining a separate acquisition function per objective , optimizing using cheap multi-objective solvers ( usually evolutionary algorithms like NSGA-II ( Deb et al. , 2002 ) ) and finally selecting one or a batch of designs to evaluate next ( Bradford et al. , 2018 ; Belakaria & Deshwal , 2020 ; Konakovic Lukovic et al. , 2020 ) . AutoOED implements many of them in a modular way and allows easily changing modules in an unified MOBO framework ( see Section 3.2 ) . 
Open-source Bayesian optimization platform There are many existing Python libraries for Bayesian optimization including Spearmint ( Snoek et al. , 2012 ) , HyperOpt ( Bergstra et al. , 2013 ) , GPyOpt ( authors , 2016 ) , GPflowOpt ( Knudde et al. , 2017 ) , Dragonfly ( Kandasamy et al. , 2020 ) , AX ( Bakshy et al. , 2018 ) , Optuna ( Akiba et al. , 2019 ) , HyperMapper ( Nardi et al. , 2019 ) , BoTorch ( Balandat et al. , 2020a ) , SMAC3 ( Lindauer et al. , 2021 ) and OpenBox ( Li et al. , 2021 ) . These Python libraries are designed for general applications and have different algorithmic features supported . The feature comparison between AutoOED and these libraries is shown in Table 1 and is further discussed in Section 5.2 . However , they are all targeted for experts in coding without an intuitive GUI . In contrast , there are also software platforms that provide intuitive user interface and visualization to specific domain experts but the platforms can not be used for other general applications , for example , Auto-QChem ( Shields et al. , 2021 ) for chemical synthesis and GeoBO ( Haan , 2021 ) for geoscience . Combining powerful Bayesian optimization algorithms and an intuitive GUI , AutoOED is designed to be a general optimization platform that can be easily used by anyone for applications in any field . 2The comparison is based on AutoOED ’ s core features . ” ∼ ” means the package only supports a single multiobjective algorithm rather than a modular multi-objective framework with several state-of-the-art algorithms . 3 DATA-EFFICIENT MULTI-OBJECTIVE OPTIMIZATION 3.1 PROBLEM FORMULATION ℝ ! ℝ '' ! ! ( ! ) Design Space Performance Space Pareto FrontPareto Set ℝ ! ℝ '' ! ! ( ! ) Design Space Performance Space ! ℝ ! ℝ '' ! ! ( ! 
) Design Space Performance Space Pareto FrontPareto Set i e aret ro t esi pac Performance pace Optimal experiment design problems involving multiple conflicting objectives can be formulated as a multi-objective optimization on design parameters as data- and time-efficient as possible . More formally , we consider a optimization problem over a set of design variables X ⊂ Rd , called design space . The goal is to simultaneously minimize m ≥ 2 objective functions f1 , ... , fm : X → R. Representing the vector of all objectives as f ( x ) = ( f1 ( x ) , ... , fm ( x ) ) , the performance space is then an m-dimensional image f ( X ) ⊂ Rm . Conflicting objectives result in a set of optimal solutions rather than a single best solution . These optimal solutions are referred to as Pareto set Ps ⊆ X in the design space , and the corresponding images in performance space are Pareto front Pf = f ( Ps ) ⊂ Rm . To measure the quality of an approximated Pareto front , hypervolume ( Zitzler & Thiele , 1999 ) is the most commonly used metric in multi-objective optimization ( Riquelme et al. , 2015 ) . Let Pf be a Pareto front approximation in an m-dimensional performance space and given a reference point r ∈ Rm , the hypervolumeH ( Pf ) is defined asH ( Pf ) = ∫ Rm 1H ( Pf ) ( z ) dz , where H ( Pf ) = { z ∈ Z | ∃1 ≤ i ≤ |Pf | : r z Pf ( i ) } . is the relation operator of objective dominance and 1H ( Pf ) is a Dirac delta function that equals 1 if z ∈ H ( Pf ) and 0 otherwise . The higher the hypervolume , the better Pf approximates the true Pareto front . 3.2 MODULAR ALGORITHM FRAMEWORK . 
Multi-objective Bayesian optimization ( MOBO ) is a data-driven approach that attempts to learn the black-box objective functions f ( x ) from available data and find Pareto optimal solutions in an iterative and data-efficient manner . MOBO typically consists of four core modules : ( i ) an inexpensive surrogate model for the black-box objective functions ; ( ii ) an acquisition function that defines sampling from the surrogate model and the trade-off between exploration and exploitation of the design space ; ( iii ) a cheap multi-objective solver to approximate the Pareto set and front ; ( iv ) a selection strategy that proposes a single experiment or a batch of experiments to evaluate next . These four modules ( see Figure 1 ) are implemented as core and independent building blocks of AutoOED , making it highly modular and easy to extend with new algorithms and modules . The whole pipeline starts from a given small dataset or a set of randomly evaluated samples , then works iteratively by proposing new design samples and evaluating them until the stopping criterion is met . For each module in this framework , AutoOED supports the following choices : • Surrogate model : Gaussian process , neural network ( multi-layer perceptron ) , Bayesian neural network ( DNGO ( Snoek et al.
, 2015 ) ) • Acquisition function : Expected Improvement , Probability of Improvement , Upper Confidence Bound , Thompson Sampling , identity function • Multi-objective solver : NSGA-II , MOEA/D , ParetoFrontDiscovery ( Schulz et al. , 2018 ) • Selection : Hypervolume improvement , uncertainty , random , etc . • Stopping criterion : Time , number of evaluations , hypervolume convergence

class TSEMO(MOBO):
    '''[Bradford et al. 2018]'''
    spec = {'surrogate': 'gp', 'acquisition': 'ts', 'solver': 'nsga2', 'selection': 'hvi'}

class USEMO_EI(MOBO):
    '''[Belakaria and Deshwal 2020]'''
    spec = {'surrogate': 'gp', 'acquisition': 'ei', 'solver': 'nsga2', 'selection': 'uncertainty'}

class DGEMO(MOBO):
    '''[Lukovic et al. 2020]'''
    spec = {'surrogate': 'gp', 'acquisition': 'identity', 'solver': 'discovery', 'selection': 'direct'}

Code Example 1 : Creating algorithms in AutoOED by simply specifying module combinations . Based on this framework , we implement several popular and state-of-the-art MOBO methods , including ParEGO ( Knowles , 2006 ) , MOEA/D-EGO ( Zhang et al. , 2009 ) , TSEMO ( Bradford et al. , 2018 ) , USeMO ( Belakaria & Deshwal , 2020 ) , and DGEMO ( Konakovic Lukovic et al. , 2020 ) . To the best of our knowledge , DGEMO exhibits state-of-the-art performance for data-efficient multi-objective problems with batch evaluations . With the necessary modules of the MOBO framework implemented , algorithms can be easily composed by specifying the choice of each module and inheriting the base class MOBO ; see Code Example 1 . Supported choices for each module can be found in our documentation . Users can select an algorithm from this library that best fits the characteristics of their physical system or optimization goals , or they can easily create new algorithms by specifying novel combinations of existing modules in just a few lines of code .
| In this paper, the authors present AutoOED, an open-source platform for efficiently optimizing multi-objective (MO) problems with a restricted budget of experiments. The platform automatically guides the design of experiments to be evaluated. AutoOED is built upon multi-objective Bayesian optimization (MOBO). To accelerate the optimization in a time-efficient manner, the authors propose a strategy called Believer-Penalizer (BP) that allows batch experiments to be accelerated asynchronously without affecting performance. The authors also provide a graphical user interface (GUI) for users to visualize and guide the experiment design intuitively. Finally, the authors demonstrate that AutoOED can control and guide real-world hardware experiments in a fully automated way without human intervention. | SP:3f482ef803d5de09018a4b1f8e120320ef1622d0 |
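One plausible way a base class like MOBO could resolve the string-valued spec entries from Code Example 1 into concrete modules is sketched below (a hypothetical illustration — the registry dictionaries, class names, and factory logic are our assumptions, not AutoOED's actual code):

```python
# Hypothetical module registries; the real AutoOED package defines its own.
SURROGATES = {'gp': 'GaussianProcess', 'nn': 'MLPSurrogate'}
ACQUISITIONS = {'ts': 'ThompsonSampling', 'ei': 'ExpectedImprovement', 'identity': 'Identity'}
SOLVERS = {'nsga2': 'NSGA2', 'discovery': 'ParetoFrontDiscovery'}
SELECTIONS = {'hvi': 'HypervolumeImprovement', 'uncertainty': 'Uncertainty', 'direct': 'Direct'}

class MOBO:
    spec = {}  # subclasses override this with their module combination

    def __init__(self):
        # Resolve each string key in `spec` to a concrete module implementation.
        self.surrogate = SURROGATES[self.spec['surrogate']]
        self.acquisition = ACQUISITIONS[self.spec['acquisition']]
        self.solver = SOLVERS[self.spec['solver']]
        self.selection = SELECTIONS[self.spec['selection']]

class TSEMO(MOBO):
    spec = {'surrogate': 'gp', 'acquisition': 'ts', 'solver': 'nsga2', 'selection': 'hvi'}

algo = TSEMO()
print(algo.surrogate, algo.acquisition)  # GaussianProcess ThompsonSampling
```

The design choice is that a new algorithm is just a class attribute, so composing a novel method requires no new control flow, only a new `spec` dictionary.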
Policy improvement by planning with Gumbel | 1 INTRODUCTION . In 2018 , AlphaZero ( Silver et al. , 2018 ) demonstrated a single algorithm achieving state-of-the-art results on Go , chess , and Shogi . The community reacted quickly . Leela Chess Zero ( Linscott et al. , 2018 ) was created to reproduce AlphaZero results on chess , winning the Top Chess Engine Championship in 2019 . Soon , all top-rated classical chess engines replaced traditional evaluation functions with Efficiently Updatable Neural Networks ( Nasu , 2018 ) . AlphaZero was itself generalized by MuZero ( Schrittwieser et al. , 2020 ) . While AlphaZero requires a black-box model of the environment , MuZero learns an abstract model of the environment . Essentially , MuZero learns the rules of Go , chess , and Shogi from interactions with the environment . This allows MuZero to excel also at Atari and continuous control from pixels ( Hubert et al. , 2021 ) . In this work , we redesign and improve AlphaZero . In particular , we consider the mechanisms by which AlphaZero selects and uses actions , which are based upon a variety of heuristic ideas that have proven especially effective in Go , chess , and Atari ( Silver et al. , 2018 ; Schrittwieser et al. , 2020 ) . However , when using a small number of simulations , some of AlphaZero 's mechanisms perform poorly . We use the principle of policy improvement to suggest new mechanisms with a better theoretical foundation . More specifically , we consider each mechanism in turn , alongside our proposed modifications : • Selecting actions to search at the root node . To explore different actions during training , AlphaZero selects actions by adding Dirichlet noise to its policy network , and then performs a search using the perturbed policy . However , this does not ensure a policy improvement .
We instead propose to sample actions without replacement by using the Gumbel-Top-k trick ( Section 2 ) and to perform a search using the same Gumbel values to influence the selection of the best action ( Section 3.3 ) , and we show that this guarantees a policy improvement when action-values are correctly evaluated . • Selecting actions at the root node . AlphaZero uses a variant of the PUCB algorithm ( Rosin , 2011 ) to select actions at the root node . This algorithm was designed to optimize cumulative regret in a bandit-with-predictor setting ( i.e. , given prior recommendations from the policy network ) . However , no ancestors are dependent upon the evaluation of the root node , and the performance of the Monte-Carlo tree search therefore only depends upon the final recommended action at the root node , and not upon the intermediate actions selected during search ( Bubeck et al. , 2011 ) . Consequently , we propose to use the Sequential Halving algorithm ( Karnin et al. , 2013 ) at the root node to optimize simple regret in a stochastic bandit with a predictor ( Section 3.4 ) . • Selecting actions in the environment . Once search is complete , AlphaZero selects an action by sampling from an ( annealed ) softmax distribution based upon the visit counts of root actions resulting from the search procedure . We instead propose to select the single action resulting from the Sequential Halving search procedure . • Policy network update . AlphaZero updates its policy network towards a softmax distribution based upon the visit counts of root actions . However , even if the considered actions are correctly evaluated , this does not guarantee a policy improvement , especially when using small numbers of simulations ( Grill et al. , 2020 ) . We instead propose a policy improvement based upon the root action values computed during search , and update the policy network towards that policy improvement ( Section 4 ) . • Selecting actions at non-root nodes .
AlphaZero uses the PUCT algorithm to select actions at non-root nodes . We instead propose to select actions according to a policy improvement ( similar to the proposal of Grill et al . ( 2020 ) ) based upon a completion of the action values . Furthermore , rather than sampling directly from this policy improvement , we propose a deterministic action selection procedure that matches the empirical visit counts to the desired policy improvement ( Section 5 ) . The proposed modifications are applicable also to MuZero or any agent with a policy network and an expensive Q-network . The modifications are most helpful when using a small number of simulations , relative to the number of actions . When using a large number of simulations , AlphaZero works well . We tried to ensure that the new search is principled , better with a smaller number of simulations , and never worse . We succeeded on all tested domains : Go , chess , and Atari . 2 BACKGROUND . Before explaining the improved search , we will explain the Gumbel-Max trick and the Gumbel-Top-k trick . The Gumbel-Max trick was popularized by Gumbel-Softmax for a gradient approximation . In this paper , we are not interested in approximate gradients . Instead , we use the Gumbel-Top-k trick to sample without replacement . Gumbel-Max trick . ( Gumbel , 1954 ; Luce , 1959 ; Maddison et al. , 2017 ; Jang et al. , 2017 ) Let π be a categorical distribution with logits ∈ Rk , such that logits ( a ) is the logit of the action a . We can obtain a sample A from the distribution π by first generating a vector of k Gumbel variables and then taking argmax : ( g ∈ Rk ) ∼ Gumbel ( 0 ) ( 1 ) A = argmax a ( g ( a ) + logits ( a ) ) . ( 2 ) Gumbel-Top-k trick . ( Yellott , 1977 ; Vieira , 2014 ; Kool et al. , 2019 ) The Gumbel-Max trick can be generalized to sampling n actions without replacement , by taking n top actions : ( g ∈ Rk ) ∼ Gumbel ( 0 ) ( 3 ) A1 = argmax a ( g ( a ) + logits ( a ) ) ( 4 ) ... An = argmax a ∉ { A1 , ...
, An−1 } ( g ( a ) + logits ( a ) ) . ( 5 ) We will denote the set of n top actions by argtop ( g + logits , n ) = { A1 , A2 , . . . , An } . 3 PLANNING AT THE ROOT . We are interested in improving AlphaZero Monte-Carlo Tree Search ( MCTS ) . In this section we will focus on the action selection at the root of the search tree . 3.1 PROBLEM SETTING . Both AlphaZero and MuZero have access to a policy network . At the root of the search tree , they can explore n simulations , before selecting an action for the real environment . We will formalize the problem as a deterministic bandit with a predictor and we will later extend it to a stochastic bandit and MCTS . Bandit . A k-armed deterministic bandit is a vector of Q-values q ∈ Rk , such that q ( a ) is the Q-value of the action a . The agent interacts with the bandit in n simulations ( aka rounds ) . In each simulation t ∈ { 1 , . . . , n } , the agent selects an action At ∈ { 0 , . . . , k − 1 } and visits the action to observe the Q-value q ( At ) . The objective is to maximize the Q-value from a special last action An+1 . That means we want to maximize E [ q ( An+1 ) ] . This objective is equivalent to minimization of simple regret . The simple regret differs from the cumulative regret from all n simulations . Bubeck et al . ( 2011 ) , Hay & Russell ( 2011 ) , and Tolpin & Shimony ( 2012 ) already argued that at the root of the search tree we care about the simple regret . The problem becomes interesting when the number of possible actions is larger than the number of simulations , i.e. , when k > n. For example , 19x19 Go has 362 possible actions and we will do experiments with as few as n = 2 simulations . Fortunately , the policy network can help . Predictor . In the bandit-with-predictor setting ( Rosin , 2011 ) , the agent is equipped with a predictor : the policy network . Before any interaction with the bandit , the policy network predicts the best action by producing a probability distribution π . 
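The Gumbel-Top-k trick described in Section 2 above has a direct NumPy implementation; a minimal sketch (the helper name is ours):

```python
import numpy as np

def gumbel_top_k(logits, n, rng):
    """Sample n distinct actions from softmax(logits) via the Gumbel-Top-k trick."""
    g = rng.gumbel(loc=0.0, size=len(logits))  # (g in R^k) ~ Gumbel(0)
    order = np.argsort(g + logits)[::-1]       # descending perturbed logits
    return order[:n], g                        # argtop(g + logits, n), plus g for reuse

rng = np.random.default_rng(0)
logits = np.log(np.array([0.5, 0.3, 0.2]))     # the pi from Example 1
top_actions, g = gumbel_top_k(logits, n=2, rng=rng)
print(sorted(top_actions.tolist()))            # two distinct sampled actions
```

Returning the Gumbel vector `g` alongside the sampled actions matters for what follows: the same `g` is reused when picking the final action, which is how the double-counting bias is avoided.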
The agent can use the policy network predictions to make more informed decisions . Policy improvement . Naturally , we would like to have an agent that acts better than , or as well as , the policy network . We would like to obtain a policy improvement . If the agent ’ s action selection produces a policy improvement , then E [ q ( An+1 ) ] ≥ ∑ a π ( a ) q ( a ) , ( 6 ) where the probability π ( a ) is the policy network prediction for the action a.1 The policy network can then keep improving by modeling an improved policy . 3.2 MOTIVATING COUNTEREXAMPLE . We will show that the commonly used heuristics fail to produce a policy improvement . Example 1 . Acting with the best action from the top-n most probable actions fails to produce a policy improvement . Let ’ s demonstrate that . Let q = ( 0 , 0 , 1 ) be the Q-values and let π = ( 0.5 , 0.3 , 0.2 ) be the probabilities produced by the policy network . The value of the policy network is ∑ a π ( a ) q ( a ) = 0.2 . For n = 2 simulations , the set of the most probable actions is { 0 , 1 } . With that , the heuristic would select An+1 = argmaxa∈ { 0,1 } q ( a ) . The expected value of such action is E [ q ( An+1 ) ] = 0 , which is worse than the value of the policy network . You can find other counterexamples by generating random q and π vectors and testing the policy improvement ( Inequality 6 ) . The AlphaZero action selection is explained in Appendix A . 3.3 PLANNING WITH GUMBEL . We will design a policy improvement algorithm for the deterministic bandit with a predictor π . After n simulations , the algorithm should propose an action An+1 with E [ q ( An+1 ) ] ≥ ∑ a π ( a ) q ( a ) . One possibility is to sample n actions from π , and then to select from the sampled actions the action with the highest q ( a ) . Instead of sampling with replacement , we can reduce the variance by sampling without replacement . Still , the sampled actions contain a limited amount of information about π . 
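Example 1's failure of the top-n heuristic can be verified numerically; a small check (our script, not from the paper):

```python
import numpy as np

q  = np.array([0.0, 0.0, 1.0])    # Q-values from Example 1
pi = np.array([0.5, 0.3, 0.2])    # policy network probabilities

policy_value = float(pi @ q)              # sum_a pi(a) q(a) = 0.2

# Top-n heuristic with n = 2: inspect only the two most probable actions,
# so the best action (index 2, q = 1) is never even evaluated.
top_n = np.argsort(pi)[::-1][:2]          # actions {0, 1}
heuristic_value = float(q[top_n].max())   # = 0.0 < 0.2, so no policy improvement

print(policy_value, heuristic_value)
```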
We should exploit the knowledge of π and its logits when selecting An+1 . The main idea is to sample n actions without replacement by using the Gumbel-Top-k trick , and then to use the same Gumbel g to select the action with the highest g ( a ) + logits ( a ) + σ ( q ( a ) ) . The σ can be any monotonically increasing transformation . The pseudocode for the algorithm is in Algorithm 1 . Footnote 1 : Inequality 6 can be strict , if we assume that an action has a positive advantage and its π ( a ) > 0 .
Algorithm 1 Policy Improvement by Planning with Gumbel
Require : k : number of actions .
Require : n ≤ k : number of simulations .
Require : logits ∈ Rk : predictor logits from a policy network π .
Sample k Gumbel variables : ( g ∈ Rk ) ∼ Gumbel ( 0 ) .
Find the n actions with the highest g ( a ) + logits ( a ) : Atopn = argtop ( g + logits , n ) .
Get q ( a ) for each a ∈ Atopn by visiting the actions .
From the Atopn actions , find the action with the highest g ( a ) + logits ( a ) + σ ( q ( a ) ) : An+1 = argmax_{a ∈ Atopn} ( g ( a ) + logits ( a ) + σ ( q ( a ) ) ) .
return An+1
The algorithm produces a policy improvement , because q ( argmax_{a ∈ Atopn} ( g ( a ) + logits ( a ) + σ ( q ( a ) ) ) ) ≥ q ( argmax_{a ∈ Atopn} ( g ( a ) + logits ( a ) ) ) . ( 7 ) This holds for any Gumbel g , so it holds also for expectations : E [ q ( An+1 ) ] ≥ EA∼π [ q ( A ) ] . The argmax_{a ∈ Atopn} ( g ( a ) + logits ( a ) ) is equivalent to sampling from the policy network π ( see the Gumbel-Max trick ) . By using the same Gumbel vector g in the argtop and argmax , we avoid a double-counting bias . The prior knowledge contained in the logits can help on partially observable environments , or when working with approximate or stochastic Q-values . | The paper proposes a number of principled algorithmic modifications to state-of-the-art planning algorithms (AlphaZero, MuZero) for improving performance in settings with many actions and a relatively small computation and / or sample budget.
The main contributions are algorithmic and empirical. The key ideas include the use of the Gumbel-max and top-k tricks along with the use of sequential halving to improve online planning. The paper also proposes a planning-learning loop wherein a policy using the estimated (completed) Q values is learned as well as a different selection policy at non-root nodes. The experiments show that the Gumbel variants of AlphaZero and MuZero perform well in low search budget settings in the domains of Go, Chess and Atari. | SP:10eb3473230595eec1d5056bdc904d1852f791a0 |
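Algorithm 1 from the excerpt above translates almost line-for-line into NumPy; this is a minimal sketch for the deterministic-bandit setting, with σ taken as the identity (a choice we assume for illustration):

```python
import numpy as np

def plan_with_gumbel(logits, q, n, rng, sigma=lambda x: x):
    """Algorithm 1: policy improvement by planning with Gumbel (deterministic bandit)."""
    g = rng.gumbel(size=len(logits))                        # k Gumbel(0) variables
    a_topn = np.argsort(g + logits)[::-1][:n]               # argtop(g + logits, n)
    scores = g[a_topn] + logits[a_topn] + sigma(q[a_topn])  # visit actions, add sigma(q)
    return a_topn[int(np.argmax(scores))]                   # A_{n+1}

# Monte-Carlo check of the policy-improvement guarantee on Example 1's bandit.
rng = np.random.default_rng(0)
q = np.array([0.0, 0.0, 1.0])
pi = np.array([0.5, 0.3, 0.2])
values = [q[plan_with_gumbel(np.log(pi), q, n=2, rng=rng)] for _ in range(10000)]
print(np.mean(values), '>= policy value', float(pi @ q))
```

On this bandit the empirical mean of q(A_{n+1}) should exceed the policy value of 0.2, consistent with Inequality 6, whereas the top-n heuristic from Example 1 achieves 0.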
Policy improvement by planning with Gumbel | 1 INTRODUCTION . In 2018 , AlphaZero ( Silver et al. , 2018 ) demonstrated a single algorithm achieving state-of-the-art results on Go , chess , and Shogi . The community reacted quickly . Leela Chess Zero ( Linscott et al. , 2018 ) was created to reproduce AlphaZero results on chess , winning the Top Chess Engine Championship in 2019 . Soon , all top-rated classical chess engines replaced traditional evaluation functions with Efficiently Updatable Neural Networks ( Nasu , 2018 ) . AlphaZero was itself generalized by MuZero ( Schrittwieser et al. , 2020 ) . While AlphaZero requires a black-box model of the environment , MuZero learns an abstract model of the environment . Essentially , MuZero learns the rules of Go , chess , and Shogi from interactions with the environment . This allows MuZero to excel also at Atari and continuous control from pixels ( Hubert et al. , 2021 ) . In this work , we redesign and improve AlphaZero . In particular , we consider the mechanisms by which AlphaZero selects and uses actions , which are based upon a variety of heuristic ideas that have proven especially effective in Go , chess , and Atari ( Silver et al. , 2018 ; Schrittwieser et al. , 2020 ) . However , when using a small number of simulations , some of AlphaZero 's mechanisms perform poorly . We use the principle of policy improvement to suggest new mechanisms with a better theoretical foundation . More specifically , we consider each mechanism in turn , alongside our proposed modifications : • Selecting actions to search at the root node . To explore different actions during training , AlphaZero selects actions by adding Dirichlet noise to its policy network , and then performs a search using the perturbed policy . However , this does not ensure a policy improvement .
We instead propose to sample actions without replacement by using the Gumbel-Top-k trick ( Section 2 ) and to perform a search using the same Gumbel values to influence the selection of the best action ( Section 3.3 ) , and we show that this guarantees a policy improvement when action-values are correctly evaluated . • Selecting actions at the root node . AlphaZero uses a variant of the PUCB algorithm ( Rosin , 2011 ) to select actions at the root node . This algorithm was designed to optimize cumulative regret in a bandit-with-predictor setting ( i.e. , given prior recommendations from the policy network ) . However , no ancestors are dependent upon the evaluation of the root node , and the performance of the Monte-Carlo tree search therefore only depends upon the final recommended action at the root node , and not upon the intermediate actions selected during search ( Bubeck et al. , 2011 ) . Consequently , we propose to use the Sequential Halving algorithm ( Karnin et al. , 2013 ) at the root node to optimize simple regret in a stochastic bandit with a predictor ( Section 3.4 ) . • Selecting actions in the environment . Once search is complete , AlphaZero selects an action by sampling from an ( annealed ) softmax distribution based upon the visit counts of root actions resulting from the search procedure . We instead propose to select the single action resulting from the Sequential Halving search procedure . • Policy network update . AlphaZero updates its policy network towards a softmax distribution based upon the visit counts of root actions . However , even if the considered actions are correctly evaluated , this does not guarantee a policy improvement , especially when using small numbers of simulations ( Grill et al. , 2020 ) . We instead propose a policy improvement based upon the root action values computed during search , and update the policy network towards that policy improvement ( Section 4 ) . • Selecting actions at non-root nodes .
AlphaZero uses the PUCT algorithm to select actions at non-root nodes . We instead propose to select actions according to a policy improvement ( similar to the proposal of Grill et al . ( 2020 ) ) based upon a completion of the action values . Furthermore , rather than sampling directly from this policy improvement , we propose a deterministic action selection procedure that matches the empirical visit counts to the desired policy improvement ( Section 5 ) . The proposed modifications are applicable also to MuZero or any agent with a policy network and an expensive Q-network . The modifications are most helpful when using a small number of simulations , relative to the number of actions . When using a large number of simulations , AlphaZero works well . We tried to ensure that the new search is principled , better with a smaller number of simulations , and never worse . We succeeded on all tested domains : Go , chess , and Atari . 2 BACKGROUND . Before explaining the improved search , we will explain the Gumbel-Max trick and the Gumbel-Top-k trick . The Gumbel-Max trick was popularized by Gumbel-Softmax for a gradient approximation . In this paper , we are not interested in approximate gradients . Instead , we use the Gumbel-Top-k trick to sample without replacement . Gumbel-Max trick . ( Gumbel , 1954 ; Luce , 1959 ; Maddison et al. , 2017 ; Jang et al. , 2017 ) Let π be a categorical distribution with logits ∈ Rk , such that logits ( a ) is the logit of the action a . We can obtain a sample A from the distribution π by first generating a vector of k Gumbel variables and then taking argmax : ( g ∈ Rk ) ∼ Gumbel ( 0 ) ( 1 ) A = argmax a ( g ( a ) + logits ( a ) ) . ( 2 ) Gumbel-Top-k trick . ( Yellott , 1977 ; Vieira , 2014 ; Kool et al. , 2019 ) The Gumbel-Max trick can be generalized to sampling n actions without replacement , by taking n top actions : ( g ∈ Rk ) ∼ Gumbel ( 0 ) ( 3 ) A1 = argmax a ( g ( a ) + logits ( a ) ) ( 4 ) ... An = argmax a ∉ { A1 , ...
, An−1 } ( g ( a ) + logits ( a ) ) . ( 5 ) We will denote the set of n top actions by argtop ( g + logits , n ) = { A1 , A2 , . . . , An } . 3 PLANNING AT THE ROOT . We are interested in improving AlphaZero Monte-Carlo Tree Search ( MCTS ) . In this section we will focus on the action selection at the root of the search tree . 3.1 PROBLEM SETTING . Both AlphaZero and MuZero have access to a policy network . At the root of the search tree , they can explore n simulations , before selecting an action for the real environment . We will formalize the problem as a deterministic bandit with a predictor and we will later extend it to a stochastic bandit and MCTS . Bandit . A k-armed deterministic bandit is a vector of Q-values q ∈ Rk , such that q ( a ) is the Q-value of the action a . The agent interacts with the bandit in n simulations ( aka rounds ) . In each simulation t ∈ { 1 , . . . , n } , the agent selects an action At ∈ { 0 , . . . , k − 1 } and visits the action to observe the Q-value q ( At ) . The objective is to maximize the Q-value from a special last action An+1 . That means we want to maximize E [ q ( An+1 ) ] . This objective is equivalent to minimization of simple regret . The simple regret differs from the cumulative regret from all n simulations . Bubeck et al . ( 2011 ) , Hay & Russell ( 2011 ) , and Tolpin & Shimony ( 2012 ) already argued that at the root of the search tree we care about the simple regret . The problem becomes interesting when the number of possible actions is larger than the number of simulations , i.e. , when k > n. For example , 19x19 Go has 362 possible actions and we will do experiments with as few as n = 2 simulations . Fortunately , the policy network can help . Predictor . In the bandit-with-predictor setting ( Rosin , 2011 ) , the agent is equipped with a predictor : the policy network . Before any interaction with the bandit , the policy network predicts the best action by producing a probability distribution π . 
The agent can use the policy network predictions to make more informed decisions . Policy improvement . Naturally , we would like to have an agent that acts better than , or as well as , the policy network . We would like to obtain a policy improvement . If the agent ’ s action selection produces a policy improvement , then E [ q ( An+1 ) ] ≥ ∑ a π ( a ) q ( a ) , ( 6 ) where the probability π ( a ) is the policy network prediction for the action a.1 The policy network can then keep improving by modeling an improved policy . 3.2 MOTIVATING COUNTEREXAMPLE . We will show that the commonly used heuristics fail to produce a policy improvement . Example 1 . Acting with the best action from the top-n most probable actions fails to produce a policy improvement . Let ’ s demonstrate that . Let q = ( 0 , 0 , 1 ) be the Q-values and let π = ( 0.5 , 0.3 , 0.2 ) be the probabilities produced by the policy network . The value of the policy network is ∑ a π ( a ) q ( a ) = 0.2 . For n = 2 simulations , the set of the most probable actions is { 0 , 1 } . With that , the heuristic would select An+1 = argmaxa∈ { 0,1 } q ( a ) . The expected value of such action is E [ q ( An+1 ) ] = 0 , which is worse than the value of the policy network . You can find other counterexamples by generating random q and π vectors and testing the policy improvement ( Inequality 6 ) . The AlphaZero action selection is explained in Appendix A . 3.3 PLANNING WITH GUMBEL . We will design a policy improvement algorithm for the deterministic bandit with a predictor π . After n simulations , the algorithm should propose an action An+1 with E [ q ( An+1 ) ] ≥ ∑ a π ( a ) q ( a ) . One possibility is to sample n actions from π , and then to select from the sampled actions the action with the highest q ( a ) . Instead of sampling with replacement , we can reduce the variance by sampling without replacement . Still , the sampled actions contain a limited amount of information about π . 
We should exploit the knowledge of π and its logits when selecting An+1 . The main idea is to sample n actions without replacement by using the Gumbel-Top-k trick , and then to use the same Gumbel g to select the action with the highest g ( a ) + logits ( a ) + σ ( q ( a ) ) . The σ can be any monotonically increasing transformation . The pseudocode for the algorithm is in Algorithm 1 . Footnote 1 : Inequality 6 can be strict , if we assume that an action has a positive advantage and its π ( a ) > 0 .
Algorithm 1 Policy Improvement by Planning with Gumbel
Require : k : number of actions .
Require : n ≤ k : number of simulations .
Require : logits ∈ Rk : predictor logits from a policy network π .
Sample k Gumbel variables : ( g ∈ Rk ) ∼ Gumbel ( 0 ) .
Find the n actions with the highest g ( a ) + logits ( a ) : Atopn = argtop ( g + logits , n ) .
Get q ( a ) for each a ∈ Atopn by visiting the actions .
From the Atopn actions , find the action with the highest g ( a ) + logits ( a ) + σ ( q ( a ) ) : An+1 = argmax_{a ∈ Atopn} ( g ( a ) + logits ( a ) + σ ( q ( a ) ) ) .
return An+1
The algorithm produces a policy improvement , because q ( argmax_{a ∈ Atopn} ( g ( a ) + logits ( a ) + σ ( q ( a ) ) ) ) ≥ q ( argmax_{a ∈ Atopn} ( g ( a ) + logits ( a ) ) ) . ( 7 ) This holds for any Gumbel g , so it holds also for expectations : E [ q ( An+1 ) ] ≥ EA∼π [ q ( A ) ] . The argmax_{a ∈ Atopn} ( g ( a ) + logits ( a ) ) is equivalent to sampling from the policy network π ( see the Gumbel-Max trick ) . By using the same Gumbel vector g in the argtop and argmax , we avoid a double-counting bias . The prior knowledge contained in the logits can help on partially observable environments , or when working with approximate or stochastic Q-values . | This paper considers MCTS with learned search guidance, as in AlphaZero, MuZero, etc.
The work proposes several adjustments to the prior works, particularly regarding the way in which actions are selected at the root and non-root nodes, at training and during evaluation; and also the way in which the policy is updated after search. In a simplified bandits setting, the authors point out that the previous method of performing policy updates is not guaranteed to result in a policy improvement, and they propose using a Gumbel reparameterization trick to overcome this limitation. Experiments in Go, Chess, and Atari show that the adjustments are beneficial in the regime of low simulation count. | SP:10eb3473230595eec1d5056bdc904d1852f791a0 |
FoveaTer: Foveated Transformer for Image Classification | 1 INTRODUCTION . Many mammals , including humans , have evolved a locus ( the fovea ) in the visual sensory array with increased spatial fidelity and use head and eye movements ( Land , 2012 ; Marshall et al. , 2014 ) to orient this locus to regions and objects of interest . The system design allows visual-sensing organisms to accomplish two objectives : fast target detection , which is crucial for survival , and savings in computational cost . Computational savings are accomplished by limiting the number of units with high computational costs ( i.e. , higher spatial resolution processing ) to the fovea 's small spatial region . Fast target detection is achieved by distributing the remaining computational power across a much larger area in the periphery , with spatial resolution decreasing with distance from the fovea . Critical to this design is an efficient algorithm that uses the low-resolution periphery to guide , through eye movements , the high-resolution fovea to regions of interest ( Hayhoe & Ballard , 2005 ; Strasburger et al. , 2011 ; Ludwig et al. , 2014 ) , allowing target detection and scene classification to be optimized . Various computational models have been proposed to model search with a foveated visual system ( Yamamoto et al. , 1996 ; Prince et al. , 2005 ) . Computer vision has evolved from using hand-crafted features to data-driven features in modern CNNs . Due to their computational limitations , the objectives of computer vision systems align well with those of the human visual system : to optimize visual detection and recognition with an efficient computational and metabolic footprint . Approaches towards saving computational power can be seen , for example , in how computer vision systems evolved from sliding windows to RCNN 's ( Girshick et al. , 2014 ) use of selective search and Faster-RCNN 's ( Ren et al. , 2015 ) use of a Region Proposal Network ( RPN ) .
A system that mimics human vision by processing the scene with a foveated system and rational eye movements has also been proposed . This approach to exploring the scene can be seen in models like RAM ( Mnih et al. , 2014 ) for recognizing handwritten digits , or in object detection ( Akbas & Eckstein , 2017 ) , where the model sequentially processes the image and decides what to process next using peripheral information . These foveated models approach the performance of full-resolution models while using a fraction of the computations . Foveated systems have also been shown to be more robust ( Luo et al. , 2015 ; Deza et al. , 2019 ; Deza & Konkle , 2020 ; Kiritani & Ono , 2020 ; Vuyyuru et al. , 2020 ) against adversarial attacks . A recent innovation in computer vision is the use of Transformers ( Touvron et al. , 2020 ; Dosovitskiy et al. , 2020 ) for object classification tasks , departing from the traditional reliance on convolutions . Even after replacing convolutions with attention modules and multi-layer perceptrons , visual Transformers ( Dosovitskiy et al. , 2020 ; Touvron et al. , 2020 ) achieve close to state-of-the-art performance on the ImageNet dataset and provide better robustness against adversarial attacks ( Shao et al. , 2021 ) . Due to the flattened architecture of Transformers , it is easier for multi-resolution features to share the same feature channels . Transformers ( Vaswani et al. , 2017 ) have the added benefit of self-attention , which facilitates the interaction of various parts of the image irrespective of distance . No prior work has evaluated the potential additional gains of incorporating a foveated architecture into vision Transformers for the task of ImageNet classification . Here , we evaluate the effect of a foveated architecture and sequential eye movements on a state-of-the-art Transformer model sitting on a convolutional backbone .
We compare the foveated Transformer to the full-resolution model in terms of classification accuracy and robustness to adversarial attacks. Furthermore, we extend previous work by disentangling the contributions to robustness of foveated processing (Luo et al., 2015; Deza et al., 2019; Deza & Konkle, 2020; Kiritani & Ono, 2020; Vuyyuru et al., 2020) from those of fixating images at different locations. To this end, we perform an object classification task using multiple fixations, moving foveal attention across different parts of the image and using only a limited portion of the image information at each fixation, thereby reducing the input to the Transformer many-fold. The model decides on subsequent fixation locations using the learned self-attention weights and the fixation locations of all fixations up to the current step. Finally, the model integrates information across fixations using average pooling to make the final classification decision. 2 RELATED WORK. Transformers have achieved great success in natural language processing since their introduction by Vaswani et al. (2017) for machine translation. Recently, the application of Transformer models in computer vision has seen tremendous success. The Vision Transformer (ViT) model introduced by Dosovitskiy et al. (2020) achieved remarkable performance on ImageNet (Deng et al., 2009) by using additional data from the private JFT-300M dataset (Sun et al., 2017). Subsequently, the DeiT model (Touvron et al., 2020) introduced knowledge-distillation concepts in Transformers to leverage learning from existing models. Using augmentation and knowledge distillation, the DeiT model achieved close to state-of-the-art performance using training data from the ImageNet dataset alone. Sequential processing provides three main advantages in computer vision.
Larochelle & Hinton (2010) proposed a model based on the Boltzmann machine that uses foveal glimpses and can make eye movements. First, sequential processing can limit the amount of information to be processed at a given instant to be constant, i.e., it keeps computations constant irrespective of the input image size. Second, sequential models can help model human eye-movement strategies and transfer that information to build better computer vision systems. RAM (Mnih et al., 2014) introduced a sequential model capable of making a sequence of movements across the image to integrate information before classification. In addition, a hard-attention mechanism, implemented using reinforcement learning, was used to predict the sequence of fixation locations. Ba and Mnih (Ba et al., 2015) extended these ideas to recognize multiple objects in images on a dataset constructed from MNIST. Third, sequential processing requires fewer parameters than a model using full-resolution image input. Other work (Xu et al., 2015) has proposed image-captioning models based on both hard attention and soft attention. Additionally, the spatial bias introduced into CNNs by padding (Alsallakh et al., 2021) can be overcome using sequential models (Tsotsos, 2011). Computational models of categorization and eye movements have been proposed for rapid categorization in terms of low-level properties such as spatial envelopes (Oliva & Torralba, 2001) and texture summary statistics (Rosenholtz et al., 2012). Saliency-based models (Koch & Ullman, 1987; Itti et al., 1998; Itti & Koch, 2000) traditionally tried to model eye movements by identifying bottom-up properties of the image that capture attention. Torralba et al. (2006) showed how saliency can be combined with contextual information to guide eye movements.
Low-resolution peripheral and high-resolution central fields have been integrated with saliency to predict human-like eye movements (Wloka et al., 2018). Akbas & Eckstein (2017) implemented a biologically-inspired foveated architecture (Freeman & Simoncelli, 2011) with a deformable-parts model to build a foveated object detector whose accuracy was close to that of a full-resolution model while using a fraction of the computations. Spatial transformer networks (Jaderberg et al., 2015) were used with foveation to improve object localization using foveated convolutions (Harris et al., 2019) and to achieve better eccentricity performance (Dabane et al., 2021). FoveaTer combines an approximation to a biologically-inspired foveated architecture with a vision Transformer network. We apply our model to real-world images from the ImageNet dataset for the task of image classification. We pool the input to the Transformer, based on the fixation location, using the pooling architecture described in the following sections, which reduces the number of inputs to the Transformer. The subsequent fixation location is given by a confidence map constructed from the attention weights of the last Transformer block. Feature vectors corresponding to the class token from different fixations are averaged by an average-pooling layer followed by a fully connected layer, yielding the final classification decision. A novel aspect of the proposed work is that the model learns that not all images are equally difficult to classify, adapting its exploratory eye movements to different image classes and thus varying the computational resources used to classify different images successfully. The model implements this idea using a confidence threshold that restricts scene exploration to the fixations necessary to classify the image.
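The fixate-classify-refixate loop described above (pool around a fixation, classify from the running average of class tokens, stop once a confidence threshold is reached, otherwise refixate at the highest-attention unvisited location) can be sketched in plain Python. This is an illustrative simplification, not the authors' implementation: `pool`, `transformer`, and `classify` are hypothetical stand-ins, and attention indices are treated directly as candidate fixation locations.

```python
def classify_with_fixations(image, pool, transformer, classify,
                            start_fixation, conf_threshold=0.9, max_fixations=5):
    """Sequential foveated classification with a confidence-based stopping rule."""
    fixation = start_fixation
    visited = [fixation]
    class_tokens = []
    probs = None
    for step in range(max_fixations):
        tokens = pool(image, fixation)        # foveated pooling around the fixation
        cls_vec, attn = transformer(tokens)   # class-token vector + its attention weights
        class_tokens.append(cls_vec)
        # average-pool the class tokens collected so far, then classify
        avg = [sum(vals) / len(class_tokens) for vals in zip(*class_tokens)]
        probs = classify(avg)
        if max(probs) >= conf_threshold or step == max_fixations - 1:
            break                             # confident enough (or out of budget): stop
        # next fixation: the highest-attention location not yet visited
        order = sorted(range(len(attn)), key=lambda i: attn[i], reverse=True)
        fixation = next(i for i in order if i not in visited)
        visited.append(fixation)
    return probs, visited
```

With a high threshold the loop spends its full fixation budget; with a low threshold it stops after the first glimpse, which is the mechanism that lets easy images consume fewer computations.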
Also novel is an evaluation of the adversarial robustness of our model to understand the separate contributions of the foveated architecture and of sequential fixations towards defense against adversarial attacks. As adversarial attacks, we use the fast gradient sign method (FGSM; Goodfellow et al., 2014), which computes the adversarial image by backpropagating the gradients once, and the projected gradient descent method (Kurakin et al., 2017; Madry et al., 2018), which computes the adversarial image iteratively. 3 MODEL. DeiT-Tiny (Touvron et al., 2020): The DeiT-Tiny architecture begins with a convolutional embedding layer that transforms the [3, 224, 224] input image into a [192, 14, 14] representation, which is fed into a series of twelve Transformer blocks, each sized for a 192-dimensional embedding. Full-resolution model: Our full-resolution model is a hybrid adaptation of the DeiT-Tiny architecture (a Transformer network with a convolutional backbone) using the same number of parameters. This hybrid model replaces the single convolutional layer of the DeiT-Tiny model with a convolutional backbone. The backbone is composed of the first two stages of the ResNet18 (He et al., 2016) architecture, which transform the input from [3, 224, 224] to [128, 28, 28], followed by three convolutional layers to achieve a final feature map of size [192, 14, 14]. The first convolutional layer transforms the feature dimensionality from 128 to 192 using kernels of size 3 and stride 1, resulting in [192, 28, 28]. The second convolutional layer downscales the features using a kernel of size 3 and a stride of 2, resulting in features of size [192, 14, 14]. The third convolutional layer uses a kernel size of 1 and stride of 1, retaining the size of the feature map.
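The layer shapes above follow the standard convolution output-size formula, out = floor((in + 2·padding − kernel) / stride) + 1. A quick sanity check is shown below; note the padding values are assumptions chosen to be consistent with the stated shapes, since the text does not specify them.

```python
# Sanity check of the backbone layer shapes described above using the standard
# convolution output-size formula. Padding values here are assumptions (the
# text does not give them), chosen to reproduce the stated feature-map sizes.

def conv_out(size, kernel, stride, padding):
    return (size + 2 * padding - kernel) // stride + 1

# Layer 1: 28 -> 28 with kernel 3, stride 1 (requires padding 1)
assert conv_out(28, kernel=3, stride=1, padding=1) == 28
# Layer 2: 28 -> 14 with kernel 3, stride 2 (padding 1)
assert conv_out(28, kernel=3, stride=2, padding=1) == 14
# Layer 3: 14 -> 14 with kernel 1, stride 1 (no padding needed)
assert conv_out(14, kernel=1, stride=1, padding=0) == 14
```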
The number of Transformer blocks is reduced from 12 to 9 to match the number of parameters of the DeiT-Tiny model. Foveated model: Extending the hybrid architecture described above, our foveated model analyzes a pooled version of the features from the convolutional backbone at each time step and arrives at the final category decision after making the number of time steps required to satisfy the decision criterion. We show the architecture of our model in Figure 1. The input image is first passed through the convolutional backbone described above, resulting in features of size [192, 14, 14]. Then, we perform fixation-dependent average pooling on the features from the convolutional backbone using the "Foveation module" (see next section), resulting in features of size [192, 29], where the fixation refers to one of the 196 possible spatial locations in the input feature map. Under this non-uniform average pooling, locations closer to the fixation use smaller pooling neighborhoods than locations far from it. The pooled features of size [192, 29], along with the class token, are passed through the nine Transformer blocks, where the class token is a learnable vector of 192 values. Since the Transformer blocks retain the dimensionality of the input, their output has the same size of [192, 30]. We use the self-attention weights corresponding to the class-token feature vector from the last Transformer block in the "Accumulator module" (see next section) to predict the following fixation location. The feature vector corresponding to the class token is given as input to an average-pooling layer, which averages such vectors received from all fixations. Finally, the classification layer transforms the average-pooled feature vector into a logits vector.
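The fixation-dependent pooling idea above can be illustrated with a minimal sketch: cells near the fixation are kept at full resolution, while distant cells are averaged over progressively larger blocks, so a 14x14 grid of 196 locations collapses to far fewer tokens. The ring radii and block sizes below are illustrative assumptions, not the paper's exact Foveation-module layout (which yields 29 tokens).

```python
import numpy as np

def foveated_pool(fmap, fy, fx, r_fovea=1, r_mid=3):
    """Pool a [C, H, W] feature map with eccentricity-dependent block sizes.

    Illustrative only: block sizes 1/2/4 and radii are assumptions, not the
    paper's exact partition. Returns an [n_tokens, C] array.
    """
    C, H, W = fmap.shape
    groups = {}
    for y in range(H):
        for x in range(W):
            d = max(abs(y - fy), abs(x - fx))               # Chebyshev eccentricity
            b = 1 if d <= r_fovea else 2 if d <= r_mid else 4  # block grows with distance
            groups.setdefault((b, y // b, x // b), []).append(fmap[:, y, x])
    # one token per group: the average of its member cells
    return np.stack([np.mean(cells, axis=0) for cells in groups.values()])
```

For a 14x14 map this produces many fewer tokens than the 196 a full-resolution model would feed to the Transformer, while leaving the cells inside the fovea unpooled.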
| This paper proposes a method called FoveaTer, which uses a foveation module to extract information from the feature map at different levels of detail and different locations. The proposed architecture makes sense for the image classification task. However, I found it difficult to identify significant technical novelty in this paper. | SP:4913d3bf3911917a1fd6752a3321beffc2804c7c
| The paper shows a method of a foveated Transformer motivated by human foveal vision, where spatial resolution varies depending on the point of focus. Namely, the method mainly consists of a spatial attention model that controls the gaze point and a spatially-varying convolutional filter (dense pooling in the center and sparse pooling in the periphery). The former simulates human gaze control and the latter mimics the human retina. Experimental validation is performed on an object recognition task (ImageNet) and adversarial attack tasks, where the method performs better than the DeiT-Tiny model, which uses a similar number of parameters. | SP:4913d3bf3911917a1fd6752a3321beffc2804c7c
CareGraph: A Graph-based Recommender System for Diabetes Self-Care | 1 INTRODUCTION. The recent global pandemic has brought with it a permanent shift away from traditional in-person health consultations towards large digital telehealth platforms that support remote consults. This research is performed in the context of one such platform, which provides a mobile application to help people manage chronic diseases such as diabetes, hypertension, and obesity. As solution designers, some of the key problems we face are getting our users' attention, fostering awareness, and encouraging users to take actions that help manage their chronic health conditions. Sending notifications that nudge users is one efficient way to encourage engagement. However, user preferences for engaging with these nudges can vary greatly; different users require different persuasion techniques. Even within a homogeneous messaging context, different tones in messages can appeal to different users. Recommendation systems can be applied to model preference patterns and predict effective personalized nudge notifications. Collaborative recommender systems are well-established tools for predicting user preference and have been shown to perform very well provided there is sufficient information available to model user preferences, as highlighted by seminal papers in the field such as Resnick et al. (1994). For new or esoteric items, or users, there is frequently a lack of sufficient information to make a good prediction. These conditions, known in the literature as 'cold start', are the focus of our experiments in this paper, specifically in the context of recommending health nudges, which are created by experts who inherently follow guidelines set by associations such as the American Diabetes Association (2021).
Our primary research question asks whether a knowledge graph can be applied to mitigate cold-start problems in the specific task of recommending a finite set of highly structured health-nudge messages. 1.1 KNOWLEDGE GRAPH-BASED RECOMMENDATION. Online content and services have undergone huge growth in recent years. Accordingly, recommender systems are increasingly relied upon to help people get the right information at the right time and in the right way [Ricci et al. (2011)]. They help customers shorten the time spent exploring products or services in applications such as news portals [Wang et al. (2018b)], e-commerce [Zhang & Jiao (2007); Hwangbo et al. (2018)], accommodations [Haldar et al. (2019)], or music recommendation [Van Den Oord et al. (2013)]. However, our problem differs from these recommendation applications. First, in applications like e-commerce, users can see a list of products and select the items they are interested in; they can visit websites to search for items of interest anytime they like. With our nudge notifications, users see only one nudge at a time, and we cannot flood them with nudges, as this would disturb users and could negatively impact their engagement. Hence, recommending nudges that match user preferences, while also ensuring diversity in the recommended actions and achieving good performance in as few notifications as possible, is a crucial requirement in this setting. Also, we frequently need to support the creation of new nudges. In this setting, embeddings from supervised learning methodologies may not solve our problems: learning efficient embeddings with those approaches requires a lot of data for each item.
Organizing and aggregating nudge attributes such as themes, tones, or objectives can help with the cold-start problem, since we can transfer knowledge of user preferences over nudge attributes to new nudges, helping us understand and optimize for user behavior quickly. For example, if a user responds positively to encouragement-tone nudges, we can send more encouragement-tone messages to this user. Knowledge graphs (KGs) have been applied to various tasks such as search engines [Yang et al. (2017)], text classification [Hu et al. (2021)], and word embedding [Celikyilmaz et al. (2015)]. They have also been introduced to recommendation systems to improve the precision of recommended items and their explainability [Wang et al. (2019)]. A knowledge graph is a directed heterogeneous graph representing real-world objects and the relationships among them. Nodes in a KG represent entities, which can be items or the attributes that describe an item; edges represent relations between entities. Knowledge graphs give recommendation systems the benefit of enriching the semantic relationships among items [Wang et al. (2019)]. When we know users' interests in a KG, we can extend the diversity of recommended items through the node connections present in the graph. Our problem domain is selecting nudge notifications for our mobile application that are personalized to individual users' preferences. We have different types of nudge notifications, such as alerting users about their health condition, reminding users to follow their health routine, providing useful education related to their health, or introducing new services and events. We want to personalize nudges to each user to increase engagement and avoid over-sending nudges that might disturb users.
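The attribute-transfer idea above can be sketched in a few lines: a brand-new nudge has no interaction history, but its knowledge-graph links to attributes (tone, theme) let it inherit the user's learned preferences over those attributes. The triples, attribute names, and preference weights below are hypothetical illustrations, not the paper's data or method.

```python
# Illustrative cold-start scoring via KG attribute links (hypothetical data).
kg = [  # (head, relation, tail) triples linking nudges to their attributes
    ("nudge_new", "has_tone", "encouragement"),
    ("nudge_new", "has_theme", "exercise"),
    ("nudge_old", "has_tone", "alert"),
    ("nudge_old", "has_theme", "diet"),
]

# User preference weights over attributes, learned from past nudge responses
user_prefs = {"encouragement": 0.8, "exercise": 0.6, "alert": 0.1, "diet": 0.3}

def cold_start_score(nudge, kg, prefs):
    """Score a nudge with no history by averaging preferences over its attributes."""
    attrs = [t for h, _, t in kg if h == nudge]
    return sum(prefs.get(a, 0.0) for a in attrs) / max(len(attrs), 1)
```

Here "nudge_new" has never been shown to the user, yet it scores (0.8 + 0.6) / 2 = 0.7, above "nudge_old", purely through its graph connections.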
The contributions of this work are summarized as follows: • We propose a new method using a neural network that combines user attributes with a knowledge graph to overcome the cold-start problem. • We conduct experiments demonstrating the effectiveness of our new approach, particularly under cold-start conditions. 1.2 DIGITAL HEALTH SYSTEM FOR DIABETES SELF-CARE. This work is based on a system that leverages a blood-glucose (BG) monitoring device and a mobile app used to provide useful information and suggest actions that are relevant and helpful for managing diabetes. As part of this system, users are provided with short text messages, about 180 characters long. The text is designed to nudge a user towards specific actions relevant to managing the condition, such as healthy recipe ideas, articles that explain and motivate users to add exercise to their routine, reminders to check and monitor BG levels, or talking to expert coaches. The short text is accompanied by two buttons, one of which guides the target user to perform a specific action (e.g., read content, schedule a call with a coach), while the other can be used to dismiss the recommendation. In this work, we call these recommendations on the mobile app 'Mobile Nudges' (or 'Nudges' for short). 2 RELATED WORK. 2.1 RECOMMENDER SYSTEMS IN DIGITAL HEALTH. Over the past decade, a rapidly increasing volume of digital information to support clinical decision making has become available to healthcare recommender systems. Automated health recommender systems have been deployed in various domains. Some applications aim to make lifestyles healthier through diet [Elsweiler et al. (2017); Achananuparp & Weber (2016)] or physical-activity recommendations [Dharia et al. (2016)]. One such system, known as Pro-fit [Dharia et al.
(2016)] applies a hybrid approach to personalize workout sessions based on a user's contextual data and calendar events. Achananuparp & Weber (2016) proposed a healthier-food-substitute suggestion system by introducing a food-context matrix and applying singular value decomposition to obtain a low-dimensional representation of each food item; the similarity between two food items is measured by cosine similarity. Narducci et al. (2015) introduced HealthNet, which recommends doctors and hospitals to users given the user profile and the health data shared by the community. The iDoctor system of Zhang et al. (2017b) provides doctor recommendations by applying sentiment analysis to ratings and reviews and Latent Dirichlet Allocation to user preferences and doctor features; hybrid matrix factorization is applied to predict the doctor rating. Recommendation systems have also been used for optimizing treatment plans, such as drug recommendation [Zhang et al. (2017a); Stark et al. (2017)]. Zhang et al. (2017a) applied a neighborhood-based method to drug-drug interaction prediction to help reduce unexpected effects from using multiple drugs. A drug recommendation system for migraine was proposed by Stark et al. (2017) using a graph database and a collaborative filtering approach. 2.2 KNOWLEDGE GRAPH EMBEDDING. Knowledge graph embedding (KGE) is the process of transforming a knowledge graph into a low-dimensional continuous vector space that preserves the network structure. A knowledge graph can be represented as a set of triples, each of which contains two nodes (items or attributes) and the relationship between them. From these triples, KGE projects all entities and relations into a low-dimensional vector space whose representations preserve the graph structure. There are several approaches for building these representations in the KGE setting.
Translation distance models such as TransE [Bordes et al. (2013)], TransH [Wang et al. (2014)] and TransR [Lin et al. (2015)], and semantic matching models such as DistMult [Yang et al. (2014)], are some of the popular methods. 2.3 KNOWLEDGE GRAPH EMBEDDING IN RECOMMENDATION SYSTEMS. Knowledge graphs have been applied to many recommendation applications. Wang et al. (2018a) proposed RippleNet, an end-to-end recommendation framework that incorporates a knowledge graph into the recommendation system. RippleNet extends user interest in items through links in the knowledge graph, which helps increase diversity. Wang et al. (2018b) introduced the Deep Knowledge-aware Network (DKN) for news recommendation. The key component of DKN is a knowledge-aware convolutional neural network (KCNN) for incorporating word-level and knowledge-level representations. Zhang et al. (2018) extend a collaborative filtering framework to learn over the knowledge graph embedding. A user-item graph is used, where each connection between nodes depicts how a user interacted with an item (for example: buy, also bought, also viewed). Another method for using a knowledge graph in a recommendation system is the path-based approach, which uses the connection patterns in the knowledge graph to generate recommendations. Hete-CF [Luo et al. (2014)] is a collaborative filtering recommendation method for heterogeneous social networks that combines different types of meta-paths (user-user, user-item and item-item) to calculate similarity. Yu et al. (2014) introduced HeteRec, which represents the connectivity between users and items with meta-path-based latent features. HeteRec recommendation models are further defined at two levels: a global level (HeteRec-g) and a personalized level (HeteRec-p). In HeteRec-p, users are clustered into subgroups based on their interests and preferences, and a recommendation model is then learned for each user subgroup.
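To make the translation-distance idea behind models like TransE concrete, here is a minimal illustrative sketch with hand-picked toy vectors (not learned embeddings): a triple (head, relation, tail) scores as plausible when translating the head embedding by the relation embedding lands near the tail embedding.

```python
def transe_score(h, r, t):
    # TransE-style score: negative Euclidean distance between (h + r) and t,
    # so higher scores mean more plausible triples.
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Hand-picked toy embeddings (illustrative only, not learned).
h = [0.1, 0.3]        # head entity, e.g. a nudge
r = [0.2, -0.1]       # relation, e.g. a hypothetical "has_tone"
t_good = [0.3, 0.2]   # tail consistent with h + r
t_bad = [-0.9, 0.8]   # unrelated tail

assert transe_score(h, r, t_good) > transe_score(h, r, t_bad)
```

Training would adjust the embeddings so that observed triples score higher than corrupted ones (e.g. via a margin loss); semantic matching models such as DistMult instead score a triple with a bilinear product.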
3 PROBLEM FORMULATION. Our nudge recommendation can be seen as a binary classification problem where we try to predict the probability that a user will have a positive engagement with a nudge (i.e., accepted the action or suggestion provided by the nudge). Let $U = \{u_1, u_2, \ldots\}$ and $V = \{v_1, v_2, \ldots\}$ denote the sets of users and nudges respectively. A user-item interaction matrix is defined as $Y = \{y_{u,v} \mid u \in U, v \in V\}$, where
$$y_{u,v} = \begin{cases} 1 & \text{if user } u \text{ positively engaged with nudge } v, \\ 0 & \text{otherwise.} \end{cases} \quad (1)$$
We want our model to learn to predict the probability that $u_i$ will click on the nudge $v_j$, that is, $\hat{y}_{u_i,v_j} = f(u_i, v_j)$. | This paper introduces a knowledge-graph enhanced recommendation method for healthcare platforms. The authors use user profile information to select entity neighbors for user modeling. Experiments on a nudge CTR prediction dataset show some improvements brought by the proposed method. | SP:140a3d3a8f884cfb4ef8e6542812947017a0888c |
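As a purely illustrative instantiation of the binary engagement formulation above (not the model proposed in the paper), f could be any scoring function over user and nudge representations. Below, a logistic score over toy embedding vectors stands in for f, and the sparse binary engagement matrix Y is stored as a dictionary:

```python
import math

def predict_engagement(u_vec, v_vec):
    # ŷ_{u,v} = f(u, v): here a logistic score over a user/nudge dot
    # product, standing in for the paper's neural model.
    logit = sum(ui * vi for ui, vi in zip(u_vec, v_vec))
    return 1.0 / (1.0 + math.exp(-logit))

# Sparse engagement matrix Y: (user, nudge) -> y_{u,v} as in equation (1).
Y = {("u1", "v1"): 1, ("u1", "v2"): 0, ("u2", "v1"): 1}

# Toy user/nudge vectors; logit = 0.5*1.0 + 1.0*0.5 = 1.0.
p = predict_engagement([0.5, 1.0], [1.0, 0.5])
assert 0.5 < p < 1.0
```

Fitting such a model would mean choosing the representations (and any network on top of them) to maximize the likelihood of the observed entries of Y.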
CareGraph: A Graph-based Recommender System for Diabetes Self-Care | 1 INTRODUCTION. The recent global pandemic has brought with it a permanent shift away from traditional in-person health consultations, towards large digital telehealth platforms that support remote consults. This research is performed in the context of one such platform, which provides a mobile application to help people manage chronic diseases such as diabetes, hypertension and obesity. As solution designers, some of the key problems we face are getting our users' attention, fostering awareness, and encouraging them to take actions that help manage their chronic health conditions. Sending notifications that nudge users is one efficient way to encourage engagement. However, user preferences for engaging with these nudges can vary greatly: different users require different persuasion techniques. Even within a homogeneous messaging context, different tones in messages can appeal to different users. Recommendation systems can be applied to model preference patterns and predict effective personalized nudge notifications. Collaborative recommender systems are well-established tools for predicting user preference and have been shown to perform very well, provided there is sufficient information available to model the users' preferences, as highlighted by seminal papers in the field such as Resnick et al. (1994). For new or esoteric items, or users, there is frequently a lack of sufficient information to make a good prediction. These conditions, known in the literature as 'cold start', are the focus of our experiments in this paper, specifically in the context of recommending health nudges, which are created by experts who inherently follow guidelines set by associations such as the American Diabetes Association (2021).
Our primary research question asks whether a knowledge graph can be applied to mitigate cold-start problems in the specific task of recommending a finite set of highly structured health nudge messages. 1.1 KNOWLEDGE GRAPH-BASED RECOMMENDATION. Online content and services have undergone a huge volume of growth in recent years. Accordingly, recommender systems are increasingly relied upon to help people get to the right information at the right time, and also in the right way [Ricci et al. (2011)]. They help customers shorten the time spent exploring products or services in various applications such as news portals [Wang et al. (2018b)], e-commerce [Zhang & Jiao (2007), Hwangbo et al. (2018)], accommodations [Haldar et al. (2019)], or music recommendation [Van Den Oord et al. (2013)]. However, our problem differs from these recommendation applications. First, in other applications like e-commerce, users can see a list of products and select the items they are interested in. They can visit websites to search for items of interest anytime they like. With our nudge notifications, users can see only one nudge at a time, and hence we cannot push a lot of nudges at once, as doing so would disturb our users and could negatively impact their level of engagement. Hence, recommending nudges that match user preferences, while also ensuring that there is diversity in the actions we are recommending, and achieving good performance in as few notifications as possible, is a crucial requirement in this setting. Also, we frequently need to support the creation of new nudges. In this setting, using embeddings from supervised learning methodologies may not solve our problems: to learn efficient embeddings with those approaches, we require a lot of data for each item.
Organizing and aggregating nudge attributes such as themes, tones or objectives can help with the cold-start problem, as we can transfer knowledge of user preferences on nudge attributes to new nudges, helping us understand and optimize for user behaviors quickly. For example, if a user responds positively to encouragement-tone nudges, we can send more encouragement-tone messages to this user. Knowledge graphs (KG) have been applied in various tasks such as search engines [Yang et al. (2017)], text classification [Hu et al. (2021)] and word embedding [Celikyilmaz et al. (2015)]. They have also previously been introduced to recommendation systems to improve the precision of recommended items and their explainability [Wang et al. (2019)]. A knowledge graph is a directed heterogeneous graph representing real-world objects and the relationships among these objects. Nodes in a KG represent entities, which can be items or the attributes that describe an item. Edges represent relations between entities. Knowledge graphs give recommendation systems the benefit of enriching the semantic relationships among items [Wang et al. (2019)]. When we know users' interests in a KG, we can extend the diversity of recommended items through the node connections present in the graph. Our problem domain is selecting nudge notifications for our mobile application that are personalized to individual users' preferences. We have different types of nudge notifications, such as alerting users with regard to their health condition, reminding users to follow their health routine, providing useful education related to their health, or introducing new services and events to users. We want to personalize nudges to each user to increase their engagement and reduce over-sending nudges that might disturb them.
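The knowledge-graph view described above can be sketched as a set of (head, relation, tail) triples. The nudge and attribute names below are hypothetical illustrations of the themes/tones idea, not the paper's actual schema:

```python
# Minimal knowledge-graph-as-triples sketch; entity and relation names
# are hypothetical examples, not the paper's schema.
triples = [
    ("nudge_42", "has_theme", "exercise"),
    ("nudge_42", "has_tone", "encouragement"),
    ("nudge_17", "has_theme", "diet"),
    ("nudge_17", "has_tone", "encouragement"),
]

def neighbors(entity):
    # Entities reachable from `entity` in one hop, in either direction.
    out = {t for h, _, t in triples if h == entity}
    inc = {h for h, _, t in triples if t == entity}
    return out | inc

# Both nudges share the "encouragement" tone node, so interest in one
# can be extended to the other through the graph.
assert "encouragement" in neighbors("nudge_42") & neighbors("nudge_17")
```

Because both nudges share the "encouragement" node, evidence about a user's interest in one nudge can propagate to the other through the graph, which is what helps new (cold-start) nudges.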
The contributions of this work are summarized as follows: • We propose a new method using a neural network that combines user attributes with a knowledge graph to overcome the cold-start problem. • We conduct experiments demonstrating the effectiveness of our new approach, particularly under cold-start conditions. 1.2 DIGITAL HEALTH SYSTEM FOR DIABETES SELF-CARE. This work is based on a system that leverages a Blood-Glucose (BG) monitoring device and a Mobile App that is used to provide useful information and suggest actions that are relevant and helpful for managing diabetes. As part of this system, users are provided with short text messages, about 180 characters long. The text is designed to nudge a user toward specific actions that are relevant to managing the condition, such as healthy recipe ideas, articles that explain and motivate users to add exercise to their routine, reminders to check and monitor BG levels, or talking to expert coaches. The short text is accompanied by two buttons, one of which is used to guide the target user to perform a specific action (e.g., read content, schedule a call with a coach), and the other can be used to dismiss the recommendation. In this work we call these recommendations on the Mobile App, Mobile Nudges (or Nudges for short). 2 RELATED WORK. 2.1 RECOMMENDER SYSTEMS IN DIGITAL HEALTH. Over the past decade, a rapidly increasing volume of digital information to support clinical decision making has become available to be leveraged by healthcare recommender systems. Automated health recommender systems have been deployed in various domains. Some applications aim to promote a healthier lifestyle through diet [Elsweiler et al. (2017), Achananuparp & Weber (2016)] or physical activity recommendations [Dharia et al. (2016)]. One such system, known as Pro-fit [Dharia et al.
(2016)] applies a hybrid approach to personalize workout sessions based on a user's contextual data and calendar events. Achananuparp & Weber (2016) proposed a healthier food substitute suggestion system by introducing a food-context matrix and applying Singular Value Decomposition to get a low-dimensional representation of each food item. The similarity between two food items is measured by cosine similarity. Narducci et al. (2015) introduced HealthNet, a system that recommends doctors and hospitals to users, given the user profile and the health data shared by the community. The iDoctor system in Zhang et al. (2017b) provides doctor recommendations by applying sentiment analysis to ratings and reviews, and Latent Dirichlet Allocation to model user preferences and doctor features. Hybrid matrix factorization is then applied to predict the doctor rating. Recommendation systems have also been used for optimizing treatment plans, such as drug recommendation [Zhang et al. (2017a), Stark et al. (2017)]. Zhang et al. (2017a) applied the concept of neighborhood-based methods to drug-drug interaction prediction to help reduce unexpected effects from using multiple drugs. Stark et al. (2017) proposed a drug recommendation system for migraine using a graph database and a collaborative filtering approach. 2.2 KNOWLEDGE GRAPH EMBEDDING. Knowledge graph embedding (KGE) is a process that transforms a knowledge graph into a low-dimensional continuous vector space while preserving the network structure information. The knowledge graph can be represented as a set of triplets, each of which contains two nodes (items or attributes) and the relationship between these nodes. Given these triples, KGE projects all entities and relations into a low-dimensional vector space that preserves the graph structure in the vector representations. There are several approaches for building these representations in the KGE setting.
Translation distance models such as TransE [Bordes et al. (2013)], TransH [Wang et al. (2014)] and TransR [Lin et al. (2015)], and semantic matching models such as DistMult [Yang et al. (2014)], are some of the popular methods. 2.3 KNOWLEDGE GRAPH EMBEDDING IN RECOMMENDATION SYSTEMS. Knowledge graphs have been applied to many recommendation applications. Wang et al. (2018a) proposed RippleNet, an end-to-end recommendation framework that incorporates a knowledge graph into the recommendation system. RippleNet extends user interest in items through links in the knowledge graph, which helps increase diversity. Wang et al. (2018b) introduced the Deep Knowledge-aware Network (DKN) for news recommendation. The key component of DKN is a knowledge-aware convolutional neural network (KCNN) for incorporating word-level and knowledge-level representations. Zhang et al. (2018) extend a collaborative filtering framework to learn over the knowledge graph embedding. A user-item graph is used, where each connection between nodes depicts how a user interacted with an item (for example: buy, also bought, also viewed). Another method for using a knowledge graph in a recommendation system is the path-based approach, which uses the connection patterns in the knowledge graph to generate recommendations. Hete-CF [Luo et al. (2014)] is a collaborative filtering recommendation method for heterogeneous social networks that combines different types of meta-paths (user-user, user-item and item-item) to calculate similarity. Yu et al. (2014) introduced HeteRec, which represents the connectivity between users and items with meta-path-based latent features. HeteRec recommendation models are further defined at two levels: a global level (HeteRec-g) and a personalized level (HeteRec-p). In HeteRec-p, users are clustered into subgroups based on their interests and preferences, and a recommendation model is then learned for each user subgroup.
3 PROBLEM FORMULATION. Our nudge recommendation can be seen as a binary classification problem where we try to predict the probability that a user will have a positive engagement with a nudge (i.e., accepted the action or suggestion provided by the nudge). Let $U = \{u_1, u_2, \ldots\}$ and $V = \{v_1, v_2, \ldots\}$ denote the sets of users and nudges respectively. A user-item interaction matrix is defined as $Y = \{y_{u,v} \mid u \in U, v \in V\}$, where
$$y_{u,v} = \begin{cases} 1 & \text{if user } u \text{ positively engaged with nudge } v, \\ 0 & \text{otherwise.} \end{cases} \quad (1)$$
We want our model to learn to predict the probability that $u_i$ will click on the nudge $v_j$, that is, $\hat{y}_{u_i,v_j} = f(u_i, v_j)$. | This paper proposes a graph-based recommender system for diabetes self-care management. The proposed method is based on knowledge graph embedding techniques. The proposed method shows better performance compared with two baselines in terms of AUC. | SP:140a3d3a8f884cfb4ef8e6542812947017a0888c |
How to deal with missing data in supervised deep learning? | 1 INTRODUCTION. Missing data affect data analysis across a wide range of domains, and the sources of missing values span an equally wide range. Recently, deep latent variable models (DLVMs, Kingma & Welling, 2014; Rezende et al., 2014) have been applied to missing data problems in an unsupervised setting (e.g. Rezende et al., 2014; Nazabal et al., 2020; Ma et al., 2018; 2019; Ivanov et al., 2019; Mattei & Frellsen, 2018; 2019; Yoon et al., 2018; Li et al., 2019; Ipsen et al., 2021; Ghalebikesabi et al., 2021), while the supervised setting has not seen the same attention. The progress in the unsupervised setting is focused on inference and imputation and can therefore be useful as an imputation step before learning a discriminative model. Traditionally the focus has also been on inference and imputation, either as a goal in itself or before supervised learning on the (possibly multiple) imputations and observed data (Rubin, 1976; 1996; Little & Rubin, 2019). Supervised learning in the presence of missing values has different goals and poses different challenges than inference and imputation (Josse et al., 2019; Le Morvan et al., 2020a; 2021). Here the overall aim is to minimize an expected prediction error. However, optimal single imputation does not necessarily lead to optimal prediction in terms of minimizing a prediction error (Josse et al., 2019). One challenge is that predictions based on inputs with missing values can be ambiguous; that is, the conditional distribution of the missing values given the observed may be multimodal, and the optimal prediction may change with the mode. With single imputation, the conditional distribution over the missing data is discarded, and the optimal single imputation can no longer reflect this ambiguity; see figure 2.
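The ambiguity just described can be made concrete with a tiny numeric example (illustrative only: the two-point distribution stands in for a multimodal conditional p(x_miss | x_obs), and f is an arbitrary nonlinear class-probability function):

```python
import math

# Toy stand-in for a bimodal conditional p(x_miss | x_obs): two equally
# likely modes. The predictor f is an arbitrary nonlinear class probability
# that is high near either mode and low in between.
def f(x_miss):
    return 1.0 / (1.0 + math.exp(-10.0 * (abs(x_miss) - 1.0)))

modes = [-2.0, 2.0]

# Marginalizing over the conditional keeps the ambiguity...
marginal = sum(f(m) for m in modes) / len(modes)   # E[f(x_miss)], close to 1

# ...while the optimal single (mean) imputation collapses it to a point
# between the modes, where the prediction is very different.
plug_in = f(sum(modes) / len(modes))               # f(E[x_miss]) = f(0), close to 0

assert marginal > 0.9 and plug_in < 0.1
```

With the single (mean) imputation, the predictor only ever sees x_miss = 0, so its output cannot reflect either of the two plausible completions; marginalizing averages the prediction over both modes.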
Even the optimal single imputation leads to biased parameter estimates compared to the fully observed case (Bertsimas et al., 2021), and may in some cases lie outside the distribution of the data. Another challenge is that the number of missing value patterns grows combinatorially with the number of features $p$, requiring $2^p$ submodels to fit the Bayes predictor in the linear case (Le Morvan et al., 2020a). In multiple imputation (Rubin, 1996), several imputations are drawn from the posterior predictive distribution of the missing values, reflecting the full conditional distribution of the missing values and thus the uncertainty about what is missing. This allows for uncertainty estimates in downstream tasks such as prediction. This in turn requires fitting as many discriminative models as the number of imputations. 1.1 CONTRIBUTIONS. In order to address supervised deep learning with missing values, we develop the supervised missing data importance-weighted autoencoder (supMIWAE) bound, based on the approximate maximum likelihood techniques used by Burda et al. (2016) and Mattei & Frellsen (2019). This is a scalable approach to marginalizing over missing values in a joint model of covariates and outcomes
$$p_{\phi,\theta}(y, x^{\text{obs}}, x^{\text{miss}}) = p_\phi(y \mid x^{\text{obs}}, x^{\text{miss}})\, p_\theta(x^{\text{obs}}, x^{\text{miss}}).$$
(1) The covariate model $p_\theta(x^{\text{obs}}, x^{\text{miss}}) = \int p(x^{\text{obs}}, x^{\text{miss}} \mid z)\, p(z)\, dz$ is a DLVM in this work, but can be any probabilistic model imposing a joint density over the covariates. The outcome model $p_\phi(y \mid x^{\text{obs}}, x^{\text{miss}})$ is any neural discriminative architecture that would have been used in the complete-data case, parameterizing a density over the outcomes. The graphical model is shown in figure 1 along with its computational structure. Once the joint model has been trained using the supMIWAE bound, the conditional distribution of the outcome given the observed parts of the input can be found as
$$p_\phi(y \mid x^{\text{obs}}) = \int p_\phi(y \mid x^{\text{obs}}, x^{\text{miss}})\, p_\theta(x^{\text{miss}} \mid x^{\text{obs}})\, dx^{\text{miss}}.$$
(2) This is approximated using importance sampling techniques, averaging predictions over multiple importance samples (akin to multiple imputations; see appendix C for a deeper discussion) from the generative model. The model can be trained end-to-end, or a pretrained generative model can be coupled with a discriminative model. Having a joint model allows for supervised imputations, where available labels can help guide the imputations from the generative model by adjusting the importance weights accordingly. Joint models over covariates and labels have previously been used to marginalize over missing values in the covariates, using Gaussian mixture models either as the model over the covariates or directly as the joint model over covariates and outcomes (Ghahramani & Jordan, 1995; Ghahramani & Hinton, 1996; Ahmad & Tresp, 1993; Tresp et al., 1994; 1995; Śmieja et al., 2018). Our approach shows how to use more flexible generative models, such as DLVMs, provides an efficient optimization procedure, and allows for keeping any deep discriminative architecture unchanged. 2 BACKGROUND AND NOTATION. We define the random variable $x = (x_1, \ldots, x_p) \in \mathcal{X}$, which takes values in a $p$-dimensional feature space $\mathcal{X} = \mathcal{X}_1 \times \cdots \times \mathcal{X}_p$. There is a corresponding (possibly vector-valued) response variable $y \in \mathcal{Y}$. A missing process obscures parts of $x$, resulting in the mask random variable $s \in \{0, 1\}^p$, where
$$s_j = \begin{cases} 1 & \text{if } x_j \text{ is observed,} \\ 0 & \text{if } x_j \text{ is missing.} \end{cases}$$
(3) We let $\text{obs}(s)$ denote the indices of the non-zero entries of $s$ and $\text{miss}(s)$ denote the indices of the zero entries of $s$, such that $x^{\text{obs}(s)}$ is the set of observed elements of $x$ and $x^{\text{miss}(s)}$ is the set of missing elements of $x$. For simplicity we will omit the $s$ and write $x^{\text{obs}}$, $x^{\text{miss}}$ respectively, whenever the context is clear. Following (Yoon et al., 2018; Le Morvan et al., 2020b; Ghalebikesabi et al., 2021), we define the incomplete random variable x̃ = (x̃_1, ...
, x̃_p) ∈ X̃, which takes values in $\tilde{\mathcal{X}} = (\mathcal{X}_1 \cup \{\text{na}\}) \times \cdots \times (\mathcal{X}_p \cup \{\text{na}\})$, where missing values are represented by the symbol na, such that $\text{na} \cdot x_j = \text{na}$ and $\text{na} \cdot 0 = 0$. We typically only have access to $\tilde{x}$, where
$$\tilde{x} = x \odot s + \text{na} \odot (1 - s)$$
(4) and $\odot$ is the Hadamard product. Finally, we are given $n$ i.i.d. copies of the random variables $\tilde{x}$ and $y$, which we collect in a dataset $\mathcal{D} = \{\tilde{x}_i, y_i\}_{i=1}^n$, or alternatively $\mathcal{D} = \{x_i^{\text{obs}}, y_i\}_{i=1}^n$. We make the additional assumption that the data are missing at random (MAR, see e.g. Seaman et al., 2013; Little & Rubin, 2019). Specifically, this means that we assume that $s$ and $x^{\text{miss}}$ are independent given $x^{\text{obs}}$. This assumption allows us to avoid explicitly modeling the missing mechanism. Our approach could be extended beyond MAR by using a generative model fit for this purpose, like the not-MIWAE of Ipsen et al. (2021) or the deep pattern-set mixture of Ghalebikesabi et al. (2021). 2.1 CHALLENGES WHEN TRAINING DISCRIMINATIVE MODELS WITH MISSING DATA. Our predictive model will be defined through a mapping $f_\phi : \mathcal{X} \to \mathcal{H}$ used to parameterize a conditional distribution
$$p_\phi(y \mid x) = \Psi(y \mid f_\phi(x)).$$
(5) Here $(\Psi(\cdot \mid \eta))_{\eta \in \mathcal{H}}$ is a parametric family of distributions over the outcome space $\mathcal{Y}$, such as a Gaussian distribution in regression or a categorical distribution in classification. Under the maximum-likelihood framework (or equivalently the logarithmic scoring rule), an optimal mapping $f_\phi^*$ within a class $\mathcal{F} = (f_\phi)_{\phi \in \Phi}$ parameterized by $\phi \in \Phi$ is defined as
$$f_\phi^* \in \arg\max_{f_\phi \in \mathcal{F}} \mathbb{E}_{p^*(x, y)}[\log \Psi(y \mid f_\phi(x))],$$
(6) where $p^*(x, y)$ is the true data generating distribution. When the data are complete, this is typically approximated by maximizing the log-likelihood of the parameters $\phi$ given the data $\mathcal{D}_{\text{train}}$,
$$\ell(\phi) = \sum_{(x, y) \in \mathcal{D}_{\text{train}}} \log p_\phi(y \mid x).$$
(7) When the covariates have missing values, the log-likelihood cannot be maximized directly, as the likelihood is not defined: $p_\phi(y \mid x)$ depends on the full input vector. The simplest approach is instead to learn separate mappings for each missing pattern. This reduces the amount of data available for the training of each mapping and does not scale well with the dimensionality of the input, as in general $2^p$ networks need to be trained. An often used approach is instead to impute the missing values using a single imputation (for instance, using $\mathbb{E}_{p_\theta(x^{\text{miss}} \mid x^{\text{obs}})}[x^{\text{miss}}]$ or an approximation of it), and then map the concatenation of observed and imputed values to approximate the conditional distribution
$$p_\phi(y \mid x^{\text{obs}}) \approx p_\phi\big(y \mid x = (x^{\text{obs}}, \mathbb{E}_{p_\theta(x^{\text{miss}} \mid x^{\text{obs}})}[x^{\text{miss}}])\big) = \Psi\big(y \mid f_\phi(x^{\text{obs}}, \mathbb{E}_{p_\theta(x^{\text{miss}} \mid x^{\text{obs}})}[x^{\text{miss}}])\big).$$
(8) Instead of $\mathbb{E}_{p_\theta(x^{\text{miss}} \mid x^{\text{obs}})}[x^{\text{miss}}]$, which is the optimal imputation under the mean squared error, one could use any kind of imputation, for instance a constant, or the mean. This general approach is aptly called impute-then-regress by Bertsimas et al. (2021) and Le Morvan et al. (2021). While it will be Bayes-consistent given a powerful enough classifier (Le Morvan et al., 2021), it leads to biased parameter estimates compared to having complete data (Bertsimas et al., 2021). This is illustrated in figure 2, where the optimal (under the mean squared error) single imputations are placed entirely within one of the classes, requiring a biased decision surface to properly reflect the class label probabilities for the records with missing values. From a probabilistic perspective, computing $p_\phi(y \mid x^{\text{obs}})$ requires marginalizing over the missing features as in equation (1), that is,
$$p_\phi(y \mid x^{\text{obs}}) = \mathbb{E}_{p_\theta(x^{\text{miss}} \mid x^{\text{obs}})}[p_\phi(y \mid x^{\text{obs}}, x^{\text{miss}})].$$
(9) Notice the difference to single imputation since, in general,
$$\mathbb{E}_{p_\theta(x^{\text{miss}} \mid x^{\text{obs}})}[p_\phi(y \mid x^{\text{obs}}, x^{\text{miss}})] \neq p_\phi\big(y \mid x = (x^{\text{obs}}, \mathbb{E}_{p_\theta(x^{\text{miss}} \mid x^{\text{obs}})}[x^{\text{miss}}])\big),$$
(10) except in some pathological cases (e.g. when $p_\phi$ is linear or $p_\theta$ is a Dirac distribution). Therefore, if the discriminative model is nonlinear and the generative model is complex enough, then using single imputation will give a very different result than marginalising over the missing features. This is exemplified in figure 2, where a classifier trained using the optimal imputation finds a solution that is very different to what would be found by one trained without missing data. Our technique, marginalizing the missing values, allows us to recover the same decision surface as a classifier trained without missing data. | The paper handles the issue of missing values in supervised deep learning settings. Figure 1 describes their method aptly. Their method (supMIWAE) is a combination of a VAE with a neural network classifier. Given the task to predict p(Y|x_{obs}, x_{miss}), the authors view it as a joint model of covariates and outcomes. Their model takes in x_{obs}, first fits a distribution to get x_{miss}, and then models them as a joint distribution to predict the outcome Y|(x_{obs}, x_{miss}). They use an importance sampling technique to get multiple samples from the generative model. | SP:cd8eaee441e33312233c6d6d41142be9e6b59b9d |
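A minimal sketch of how the marginal predictive distribution in equations (2) and (9) can be approximated with self-normalized importance sampling. Everything here is a toy stand-in: the conditional p(x_miss | x_obs) is a unit Gaussian, the proposal is a wider Gaussian, and the outcome model is a fixed logistic score; in supMIWAE these roles are played by the learned generative model and the neural outcome model.

```python
import math
import random

random.seed(0)

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def predictive(outcome, target_pdf, proposal_sample, proposal_pdf, x_obs, K=2000):
    # Draw K candidate completions x_miss^k from the proposal, weight each by
    # p(x_miss^k | x_obs) / q(x_miss^k), and return the weighted average of
    # the outcome model's predictions (self-normalized importance sampling).
    samples = [proposal_sample() for _ in range(K)]
    weights = [target_pdf(xm) / proposal_pdf(xm) for xm in samples]
    norm = sum(weights)
    return sum(w * outcome(x_obs, xm) for w, xm in zip(weights, samples)) / norm

# Toy models: p(x_miss | x_obs) = N(0, 1), proposal q = N(0, 2),
# outcome p(y = 1 | x_obs, x_miss) = sigmoid(x_obs + x_miss).
p_hat = predictive(
    outcome=lambda xo, xm: 1.0 / (1.0 + math.exp(-(xo + xm))),
    target_pdf=lambda xm: normal_pdf(xm, 0.0, 1.0),
    proposal_sample=lambda: random.gauss(0.0, 2.0),
    proposal_pdf=lambda xm: normal_pdf(xm, 0.0, 2.0),
    x_obs=0.5,
)
assert 0.0 < p_hat < 1.0
```

Averaging weighted predictions in this way plays the role of multiple imputation without fitting one discriminative model per imputation.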
How to deal with missing data in supervised deep learning? | 1 INTRODUCTION Missing data affect data analysis across a wide range of domains and the sources of missing values span an equally wide range . Recently , deep latent variable models ( DLVMs , Kingma & Welling , 2014 ; Rezende et al. , 2014 ) have been applied to missing data problems in an unsupervised setting ( e.g . Rezende et al. , 2014 ; Nazabal et al. , 2020 ; Ma et al. , 2018 ; 2019 ; Ivanov et al. , 2019 ; Mattei & Frellsen , 2018 ; 2019 ; Yoon et al. , 2018 ; Li et al. , 2019 ; Ipsen et al. , 2021 ; Ghalebikesabi et al. , 2021 ) , while the supervised setting has not seen the same attention . The progress in the unsupervised setting is focused on inference and imputation and can therefore be useful as an imputation step before learning a discriminative model . Traditionally the focus has also been on inference and imputation either as a goal in itself or before supervised learning on the ( possibly multiple ) imputations and observed data ( Rubin , 1976 ; 1996 ; Little & Rubin , 2019 ) . Supervised learning in the presence of missing values has different goals and pose different challenges than inference and imputation ( Josse et al. , 2019 ; Le Morvan et al. , 2020a ; 2021 ) . Here the overall aim is to minimize an expected prediction error . However , optimal single imputation does not necessarily lead to optimal prediction in terms of minimizing a prediction error ( Josse et al. , 2019 ) . One challenge is that predictions based on inputs with missing values can be ambiguous , that is , the conditional distribution of the missing values given the observed may be multimodal and the optimal prediction may change with the mode . With single imputation , the conditional distribution over the missing data is discarded , and the optimal single imputation can no longer reflect this ambiguity , see figure 2 . 
Even the optimal single imputation leads to biased parameter estimates compared to the fully observed case ( Bertsimas et al. , 2021 ) , and may in some cases lie outside the distribution of the data . Another challenge is that the number of missing value patterns grows combinatorially with the number of features p , requiring 2p submodels to fit the Bayes predictor in the linear case ( Le Morvan et al. , 2020a ) . In multiple imputation ( Rubin , 1996 ) , several imputations are drawn from the posterior predictive distribution of the missing values , reflecting the full conditional distribution of the missing values and thus the uncertainty about what is missing . This allows for uncertainty estimates in downstream tasks such as prediction . This in turn requires fitting as many discriminative models as the number of imputations . 1.1 CONTRIBUTIONS . In order to address supervised deep learning with missing values we develop the supervised missing data importance-weighted autoencoder ( supMIWAE ) bound , based on the approximate maximum likehood techniques used by Burda et al . ( 2016 ) and Mattei & Frellsen ( 2019 ) . This is a scalable approach to marginalizing over missing values in a joint model of covariates and outcomes pφ , θ ( y , x obs , xmiss ) = pφ ( y|xobs , xmiss ) pθ ( xobs , xmiss ) . ( 1 ) The covariate model pθ ( xobs , xmiss ) = ∫ p ( xobs , xmiss|z ) p ( z ) dz is a DLVM in this work , but can be any probabilistic model imposing a joint density over covariates . The outcome model pφ ( y|xobs , xmiss ) is any neural discriminative architecture that would have been used in the complete-data case , parameterizing a density over the outcomes . The graphical model is shown in figure 1 along with its computational structure . Once the joint model has been trained using the supMIWAE bound , the conditional distribution of the outcome given the observed parts of the input can be found as pφ ( y|xobs ) = ∫ pφ ( y|xobs , xmiss ) pθ ( xmiss|xobs ) dxmiss . 
( 2 ) This is approximated using importance sampling techniques , averaging predictions over multiple importance samples ( akin to multiple imputations , see appendix C for a deeper discussion ) from the generative model . The model can be trained end-to-end or a pretrained generative model can be coupled with a discriminative model . Having a joint model allows for supervised imputations , where available labels can help guide the imputations from the generative model by adjusting the importance weights accordingly . Joint models over covariates and labels have previously been used to marginalize over missing values in the covariates , using Gaussian mixture models either as the model over the covariates or directly as the joint model over covariates and outcomes ( Ghahramani & Jordan , 1995 ; Ghahramani & Hinton , 1996 ; Ahmad & Tresp , 1993 ; Tresp et al. , 1994 ; 1995 ; Śmieja et al. , 2018 ) . Our approach shows how to use more flexible generative models , such as DLVMs , provides an efficient optimization procedure and allows for keeping any deep discriminative architecture unchanged . 2 BACKGROUND AND NOTATION . We define the random variable x = ( x1 , . . . , xp ) ∈ X which takes values in a p-dimensional feature space X = X1 × · · · × Xp . There is a corresponding ( possibly vector valued ) response variable y ∈ Y . A missing process obscures parts of x resulting in the mask random variable s ∈ { 0 , 1 } p where sj = { 1 if xj observed , 0 if xj missing . ( 3 ) We let obs ( s ) denote the indices of the non-zero entries of s and miss ( s ) denote the indices of the zero-entries of s , such that xobs ( s ) is the set of observed elements of x , and xmiss ( s ) is the set of missing elements of x . For simplicity we will omit the s and write xobs , xmiss respectively , whenever the context is clear . Following ( Yoon et al. , 2018 ; Le Morvan et al. , 2020b ; Ghalebikesabi et al. , 2021 ) we define the incomplete random variable x̃ = ( x̃1 , . . . 
, x̃p ) ∈ X̃ which takes values in X̃ = ( X1 ∪ { na } ) × · · · × ( Xp ∪ { na } ) where missing values are represented by the symbol na , such that na · xj = na and na · 0 = 0 . We typically only have access to x̃ , where x̃ = x ⊙ s + na ⊙ ( 1− s ) ( 4 ) and ⊙ is the Hadamard product . Finally , we are given n i.i.d . copies of the random variables x̃ and y which we collect in a dataset D = { x̃i , yi } ni=1 , or alternatively D = { xobsi , yi } ni=1 . We make the additional assumption that the data are missing at random ( MAR , see e.g . Seaman et al. , 2013 ; Little & Rubin , 2019 ) . Specifically , this means that we assume that s and xmiss are independent given xobs . This assumption allows us to avoid explicitly modeling the missing mechanism . Our approach could be extended beyond MAR by using a generative model fit for this purpose , like the not-MIWAE of Ipsen et al . ( 2021 ) , or the deep pattern-set mixture of Ghalebikesabi et al . ( 2021 ) . 2.1 CHALLENGES WHEN TRAINING DISCRIMINATIVE MODELS WITH MISSING DATA . Our predictive model will be defined through a mapping fφ : X → H used to parameterize a conditional distribution pφ ( y|x ) = Ψ ( y|fφ ( x ) ) . ( 5 ) Here ( Ψ ( ·|η ) ) η∈H is a parametric family of distributions over the outcome space Y , such as a Gaussian distribution in regression or a categorical distribution in classification . Under the maximum-likelihood framework ( or equivalently the logarithmic scoring rule ) , an optimal mapping f∗φ within a class F = ( fφ ) φ∈Φ parameterized by φ ∈ Φ is defined as f∗φ ∈ arg max fφ∈F Ep∗ ( x , y ) [ log Ψ ( y|fφ ( x ) ) ] , ( 6 ) where p∗ ( x , y ) is the true data generating distribution . When the data are complete , this is typically approximated by maximizing the log likelihood of the parameters φ given the data Dtrain , ℓ ( φ ) = ∑ ( x , y ) ∈Dtrain log pφ ( y|x ) . 
( 7 ) When the covariates have missing values the log-likelihood cannot be maximized directly , as the likelihood is not defined since pφ ( y|x ) depends on the full input vector . The simplest approach is instead to learn separate mappings for each missing pattern . This reduces the amount of data available for the training of each mapping and does not scale well with the dimensionality of the input as in general 2^p networks need to be trained . An often-used approach is instead to impute the missing values using a single imputation ( for instance , using Epθ ( xmiss|xobs ) [ xmiss ] or an approximation of it ) , and then map the concatenation of observed and imputed values to approximate the conditional distribution pφ ( y|xobs ) ≈ pφ ( y|x = ( xobs , Epθ ( xmiss|xobs ) [ xmiss ] ) ) = Ψ ( y|fφ ( xobs , Epθ ( xmiss|xobs ) [ xmiss ] ) ) . ( 8 ) Instead of Epθ ( xmiss|xobs ) [ xmiss ] , which is the optimal imputation under the mean squared error , one could use any kind of imputation , for instance using a constant , or the mean . This general approach is aptly called impute-then-regress by Bertsimas et al . ( 2021 ) and Le Morvan et al . ( 2021 ) . While it will be Bayes-consistent given a powerful enough classifier ( Le Morvan et al. , 2021 ) , it leads to biased parameter estimates compared to having complete data ( Bertsimas et al. , 2021 ) . This is illustrated in figure 2 where the optimal ( under the mean squared error ) single imputations are placed entirely within one of the classes , requiring a biased decision surface to properly reflect the class label probabilities for the records with missing values . From a probabilistic perspective , computing pφ ( y|xobs ) requires marginalizing over the missing features as in equation ( 2 ) , that is pφ ( y|xobs ) = Epθ ( xmiss|xobs ) [ pφ ( y|xobs , xmiss ) ] . 
( 9 ) Notice the difference from single imputation since , in general , Epθ ( xmiss|xobs ) [ pφ ( y|xobs , xmiss ) ] ≠ pφ ( y|x = ( xobs , Epθ ( xmiss|xobs ) [ xmiss ] ) ) , ( 10 ) except in some pathological cases ( e.g . when pφ is linear or pθ is a Dirac distribution ) . Therefore , if the discriminative model is nonlinear and the generative model is complex enough , then using single imputation will give a very different result than marginalizing over the missing features . This is exemplified in figure 2 , where a classifier trained using the optimal imputation finds a solution that is very different from what would be found by one trained without missing data . Our technique , marginalizing the missing values , allows us to recover the same decision surface as a classifier trained without missing data . | This paper approaches the problem of supervised learning with missing data. The authors propose a probabilistic approach by jointly modeling the observed data, missing data and outcomes. The main contribution of this model is that they rely on deep generative models, which seems to be an improvement over previous methods based on simpler generative models like mixtures of Gaussians, for example. With the proposed model, the authors derive an optimization problem that consists of optimizing the discriminative model (classifier) and the generative model simultaneously. For the training and testing phases, the proposed algorithm considers multiple imputations of missing data, which is demonstrated to be superior to single imputation methods. Experimental results on small and simple datasets (2D datasets, MNIST digits-Fashion and regression) are nicely presented and analyzed. | SP:cd8eaee441e33312233c6d6d41142be9e6b59b9d |
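The gap in equation ( 10 ) between marginalizing and single imputation is easy to demonstrate numerically. The following is a minimal, hypothetical sketch: the sigmoid outcome model and the Monte Carlo samples standing in for pθ ( xmiss|xobs ) are illustrative choices, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(1)

def p_y_given_x(x_miss):
    # Toy nonlinear outcome model p(y = 1 | xmiss) for a scalar missing feature.
    return 1.0 / (1.0 + np.exp(-4.0 * x_miss))

# Toy samples standing in for the conditional p(xmiss | xobs); the skew makes
# the conditional-mean imputation unrepresentative of the full distribution.
x_miss_samples = np.abs(rng.normal(0.0, 3.0, size=100_000)) - 1.0

# Equation (9): marginalize the prediction over the missing feature.
marginalized = p_y_given_x(x_miss_samples).mean()

# Equation (8): impute-then-regress with the optimal (conditional mean) imputation.
single_imputed = p_y_given_x(x_miss_samples.mean())

# Because the outcome model is nonlinear, the two predictions differ markedly,
# matching the inequality in equation (10).
```

With a linear outcome model the two quantities would coincide; the gap is entirely a consequence of the nonlinearity, as the text notes.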
Graph Barlow Twins: A self-supervised representation learning framework for graphs | 1 INTRODUCTION . Graph representation learning has been intensively studied for the last few years , with various architectures and layers proposed , like GCN ( Kipf & Welling , 2017 ) , GAT ( Veličković et al. , 2018 ) , GraphSAGE ( Hamilton et al. , 2017 ) etc . A substantial part of these methods was introduced in the semi-supervised learning paradigm , which requires the existence of expensive labeled data ( e.g . node labels or whole graph labels ) . To overcome this , the research community has been exploring unsupervised learning methods for graphs . This resulted in a variety of different approaches including : shallow ones ( DeepWalk ( Perozzi et al. , 2014 ) , Node2vec ( Grover & Leskovec , 2016 ) , LINE ( Tang et al. , 2015 ) ) that ignore the feature attribute richness , focusing only on the structural graph information ; and graph neural network methods ( DGI ( Veličković et al. , 2019 ) , GAE , VGAE ( Kipf & Welling , 2016 ) ) that build representations upon node or graph features , achieving state-of-the-art performance at the time . Recently , the self-supervised paradigm has become the fastest-emerging branch of unsupervised graph representation learning , gathering broad interest and strenuous research effort towards better results . The most prominent methods were developed around the contrastive learning approach , such as GCA ( Zhu et al. , 2020b ) , GraphCL ( You et al. , 2020 ) , GRACE ( Zhu et al. , 2020a ) or DGI ( Veličković et al. , 2019 ) . Although contrastive methods are popular in many machine learning areas , including computer vision and natural language processing , their fundamental limitation is the need for negative samples . Consequently , the sampling procedure for negative examples highly affects the overall quality of the embeddings . 
In terms of images or texts , the definition of negative samples might not seem that problematic , but in the case of graphs there is no clear intuition . For instance , what is the negative counterpart for a particular node in the graph : should it be a node that is not a direct neighbor , or a node that is in a different graph component ? There are multiple options available , but the right choice strictly depends on the downstream task . Researchers have already tackled the problem of building so-called negative-sample-free methods . In computer vision research , successful results were obtained with methods such as BYOL ( Grill et al. , 2020 ) , SimSiam ( Chen & He , 2020 ) or Barlow Twins ( Zbontar et al. , 2021 ) . These models utilize siamese network architectures with various techniques , like gradient stopping , asymmetry or batch and layer normalizations , to prevent collapsing to trivial solutions . Based on BYOL , Thakoor et al . ( 2021 ) proposed the Bootstrapped Representation Learning on Graphs ( BGRL ) framework . It utilizes two graph encoders : an online and a target one . The former one passes the embedding vectors to a predictor network , which tries to predict the embeddings from the target encoder . The loss is measured as the cosine similarity and the gradient is backpropagated only through the online network ( gradient stopping mechanism ) . The target encoder is updated using an exponential moving average of the online encoder weights . Such a setup has been shown to produce graph representation vectors that achieve state-of-the-art performance in node classification using various benchmark datasets . Notwithstanding , by assuming asymmetry between the network twins ( such as the predictor network , gradient stopping , and a moving average on the weight updates ) , the method is conceptually complex . 
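The exponential-moving-average target update described above can be sketched in a few lines. This is a hypothetical minimal illustration (the function name and the decay rate tau are ours), not BGRL's actual implementation:

```python
import numpy as np

def ema_update(target_weights, online_weights, tau=0.99):
    # BYOL/BGRL-style update: the target encoder weights track a slow
    # exponential moving average of the online encoder weights.
    return [tau * t + (1.0 - tau) * o
            for t, o in zip(target_weights, online_weights)]

# Toy "encoders", each represented by a list of weight arrays.
online = [np.ones((3, 3)), np.zeros(3)]
target = [np.zeros((3, 3)), np.ones(3)]

# One update step with tau = 0.9: the target moves 10% towards the online weights.
target = ema_update(target, online, tau=0.9)
```

Only the online encoder receives gradients; the target side is updated exclusively through this averaging, which is precisely the asymmetry that G-BT (below) avoids.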
Employing a symmetric network architecture would seem more intuitive and reasonable , hence in this paper , we propose Graph Barlow Twins ( G-BT ) , a self-supervised graph representation learning framework , which computes the embedding cross-correlation matrix of two distorted views of a single graph . The approach was first introduced in image representation learning as the Barlow Twins model ( Zbontar et al. , 2021 ) but could not handle graphs . The utilized network architecture is fully symmetric and does not need any special techniques to build non-trivial embedding vectors . The distorted graph views are passed through the same encoder which is trained using the backpropagated gradients ( in a symmetrical manner ) . Our main contributions can be summarized as follows : ( I ) We propose a self-supervised graph representation learning framework Graph Barlow Twins . It is built upon the recently proposed Barlow Twins loss , which utilizes the embedding cross-correlation matrix of two distorted views of a graph to optimize the representation vectors . Our framework neither requires negative samples ( as opposed to most other self-supervised approaches ) nor introduces any kind of asymmetry in the network architecture ( like state-of-the-art BGRL ) . Moreover , our architecture converges substantially faster than all other state-of-the-art methods . ( II ) We evaluate our framework in node classification tasks : ( 1 ) for 5 smaller benchmark datasets in a transductive setting , ( 2 ) using the ogb-arxiv dataset from the Open Graph Benchmark ( also in the transductive setting ) , ( 3 ) for multiple graphs in the inductive setting using the PPI ( Protein-Protein Interaction ) dataset , and finally ( 4 ) for the large-scale graph dataset ogb-products in the inductive setting . We use both GCN-based encoders as well as a GAT-based one . We observe that our method achieves results analogous to state-of-the-art methods . 
( III ) We ensure reproducibility by making the code of both our models and the experimental pipeline available ( currently attached in the supplementary materials ) . 2 RELATED WORKS . Self-supervised learning The idea of self-supervised learning ( SSL ) has a long history . Introduced in the early work of Schmidhuber ( 1990 ) , it now has more than 30 years of exploration and research behind it . Recently , self-supervised learning was rediscovered and found broad interest , especially in computer vision and natural language processing . One of the most prominent SSL methods for image representation learning , Bootstrap Your Own Latent , BYOL ( Grill et al. , 2020 ) , performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks . It relies on two neural networks that interact and learn from each other . From an augmented view of an image , it trains the first one to predict the target network representation of the same image under a different view . At the same time , it updates the second network with a slow-moving average of the first network . Another approach to image representation SSL implements simple siamese networks , namely SimSiam ( Chen & He , 2020 ) . It achieves competitive results while demanding neither negative samples , large batches , nor momentum encoders . The authors highlight collapsing solutions of the loss and architecture , but show how a stop-gradient operation plays an essential role in preventing them . A recent method , Barlow Twins ( Zbontar et al. , 2021 ) , advances the SSL field with a new objective function that naturally avoids collapses by measuring the cross-correlation matrix between the outputs of two identical twin networks fed with distorted versions of a sample , and makes it as close to the identity matrix as possible . Representations of distorted versions of samples are then expected to be similar , reducing the redundancy between them . 
What differentiates the method is that it does not require large batches or asymmetry between the network twins . It outperforms previous methods on ImageNet for semi-supervised classification . Graph representation learning Representation learning has also spread to other domains . The graph embedding problem has attracted much attention from the research community worldwide in recent years . Plenty of methods have been developed , each focused on a different aspect of network embeddings , such as proximity , structure , attributes , learning paradigm or scalability . There exist plenty of shallow methods , among others DeepWalk ( Perozzi et al. , 2014 ) , Node2vec ( Grover & Leskovec , 2016 ) or LINE ( Tang et al. , 2015 ) , that rely on a simple notion of graph encoding through random walks or on encoder-decoder objectives that optimize first- and second-order node similarity . More complex graph neural networks , such as GCN ( Kipf & Welling , 2017 ) or GraphSAGE ( Hamilton et al. , 2017 ) , implement the basic encoder algorithm with various neighborhood aggregation schemes . Extending these , the graph attention network GAT ( Veličković et al. , 2018 ) leverages masked self-attentional layers to address the shortcomings of graph convolutions and their troublesome approximations . Self-supervised graph representation learning Inspired by the success of contrastive methods in vision and NLP , the procedures were also adapted to graphs . The early DGI ( Veličković et al. , 2019 ) employs a GNN to learn node embeddings , obtains the graph embedding via a readout function , and maximizes the mutual information between node embeddings and the graph embedding by discriminating nodes in the original graph from nodes in a corrupted graph . GCA ( Zhu et al. , 2020b ) studied various augmentation procedures . GRACE ( Zhu et al. 
, 2020a ) creates two augmented versions of a graph , pulls together the representation of the same node in both graphs , and pushes apart representations of every other node . The recent GraphCL ( You et al. , 2020 ) method is another representative approach using contrastive learning . All the previous methods use negative sampling approaches for the embedding optimization , yet such a setting has high complexity . To overcome this , BGRL ( Thakoor et al. , 2021 ) proposed an approach that does not rely on negative samples . It uses two kinds of encoder networks ( online and target ) , introducing a nonintuitive asymmetric pipeline architecture , but provides state-of-the-art SSL results . Moreover , it relies on several techniques to prevent trivial solutions ( gradient stopping , momentum encoder ) . A concurrent approach to BGRL is DGB ( Che et al. , 2020 ) . 3 PROPOSED FRAMEWORK . Motivated by the emerging self-supervised learning paradigm and its recent applications in graph representation learning ( BGRL ( Thakoor et al. , 2021 ) ) , we propose Graph Barlow Twins – a framework that builds node embeddings using a symmetric network architecture and an empirical cross-correlation based loss function . The overall pipeline of our framework is shown in Figure 1 . The consecutive processing steps can be described as follows : Graph data We represent a graph G with nodes V and edges E as the tuple : ( X , A ) , where X ∈ R|V|×k is the node feature matrix and k is the feature dimensionality ; A ∈ { 0 , 1 } |V|×|V| is the adjacency matrix , such that Ai , j = 1 iff ( i , j ) ∈ E . In the general case , a graph could also have associated edge features or graph level features , but for simplicity we omit those here . Nevertheless , these could also be used in our framework , as long as the encoder can make use of such features . Generating graph views via augmentation Following other works ( Thakoor et al. , 2021 ; Zhu et al. , 2020b ; You et al. 
, 2020 ; Zhu et al. , 2020a ) , we select two kinds of augmentations – edge dropping and node feature masking – and generate two views of the input graph G ( 1 ) and G ( 2 ) . In the edge dropping case , we remove edges according to a generated mask of size |E| ( number of edges in the graph ) with elements sampled from the Bernoulli distribution B ( 1− pA ) . When it comes to masking node features , we employ a similar scheme and generate a mask of size k also sampled from the Bernoulli distribution B ( 1 − pX ) . Note that we mask node features at the scale of the whole graph , i.e . the same features are masked for each node . Other works apply different augmentation parameters pX , pA for each generated view , but as our framework is fully symmetrical , we postulate that it is enough to use the same parameters to generate both augmentations ( see Section 4.4 ) . Encoder network The main component of the proposed framework is the encoder network fθ : G → R|V|×d . It takes an augmented graph as the input and computes ( in our case ) a d-dimensional representation vector for each node in the graph . Note that we do not specify any particular encoder network and one may even use encoders that construct embeddings for edges or whole graphs . In our experiments , we will show the application of GCN ( Kipf & Welling , 2017 ) and GAT ( Veličković et al. , 2018 ) based encoder networks . Both augmented graph views G ( 1 ) , G ( 2 ) are passed through the same encoder , resulting in two embedding matrices Z ( 1 ) and Z ( 2 ) , respectively . The original Barlow Twins method also specified a projector network ( implemented as an MLP ) to reduce high embedding dimensionality ( of the ResNet encoder ) . Our approach eliminates that step as it uses GNNs with low dimensional embeddings . Loss function In our work , we propose to use a negative-sample-free loss function to train the encoder network . 
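The graph-view generation described above (Bernoulli edge dropping plus whole-graph feature masking) can be sketched with plain Bernoulli masks. This is a hypothetical minimal illustration; the function and variable names are ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(edge_index, features, p_a, p_x, rng):
    """One distorted view: drop each edge w.p. p_a, mask each feature column w.p. p_x."""
    num_edges = edge_index.shape[1]
    # Edge mask of size |E| sampled from Bernoulli(1 - p_a); kept where mask == 1.
    edge_mask = rng.binomial(1, 1.0 - p_a, size=num_edges).astype(bool)
    # Feature mask of size k, shared by every node (whole-graph masking).
    feat_mask = rng.binomial(1, 1.0 - p_x, size=features.shape[1]).astype(bool)
    return edge_index[:, edge_mask], features * feat_mask

# A toy graph: 4 nodes, 5 directed edges, 8-dimensional node features.
edge_index = np.array([[0, 1, 2, 3, 0], [1, 2, 3, 0, 2]])
features = rng.normal(size=(4, 8))

# G-BT uses the *same* (p_a, p_x) for both views, exploiting the symmetry.
view1 = augment(edge_index, features, p_a=0.3, p_x=0.3, rng=rng)
view2 = augment(edge_index, features, p_a=0.3, p_x=0.3, rng=rng)
```

Both views are then fed through the shared encoder fθ; no separate online/target pair is needed.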
We first normalize the embedding matrices Z ( 1 ) and Z ( 2 ) along the batch dimension ( a mean of zero and a standard deviation equal to one ) , and then we compute the empirical cross-correlation matrix C ∈ Rd×d : Cij = ( ∑b Z(1)b,i Z(2)b,j ) / ( √ ∑b ( Z(1)b,i )^2 √ ∑b ( Z(2)b,j )^2 ) , ( 1 ) where b indexes the batch and i , j index the embedding dimensions . Such a setting was originally proposed under the name Barlow Twins . Neuroscientist H. Barlow ’ s redundancy-reduction principle has motivated many methods in both supervised and unsupervised learning ( Deco & Parra , 1997 ; Schmidhuber et al. , 1996 ; Ballé et al. , 2017 ) . Recently , Zbontar et al . ( 2021 ) have employed this principle to build a self-supervised image representation learning algorithm ( we bring this idea to the domain of graph-structured data ) . The cross-correlation matrix C is optimized by the Barlow Twins loss function LBT ( see Equation 2 ) to be equal to the identity matrix . The loss is composed of two parts : ( I ) the invariance term and ( II ) the redundancy reduction term . The first one forces the on-diagonal elements Cii to be equal to one , hence making the embeddings invariant to the applied augmentations . The second term optimizes the off-diagonal elements Cij to be equal to zero – this results in decorrelated components of the embedding vectors . LBT = ∑i ( 1 − Cii )^2 + λ ∑i ∑j≠i Cij^2 ( 2 ) The λ > 0 parameter defines the trade-off between the invariance and redundancy-reduction terms when optimizing the overall loss function . In Tsai et al . ( 2021 ) , the authors proposed to use λ = 1/d , which we employ in our experimental setting . Otherwise , one can perform a simple grid search to find the best λ value in a particular experiment . Please note that in such a setting the gradient is symmetrically backpropagated through the encoder network . 
We do not rely on any special techniques , like momentum encoders , gradient stopping , or predictor networks . In preliminary experiments , we also investigated the Hilbert-Schmidt Independence Criterion ( due to its relation to the Barlow Twins objective ( Tsai et al. , 2021 ) ) , but we did not observe any performance gain . | The paper applies the recently proposed self-supervised learning method Barlow-Twins to graph structured data. For constructing the augmented version of a graph, previous methods such as edge-dropping or feature masking are used. The paper conducts experimental evaluation on datasets of various scales in both the transductive and inductive settings. | SP:54c599a6476212857ac5d5871c361e31a78b7100 |
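The cross-correlation matrix and loss of equations ( 1 ) and ( 2 ) can be sketched with NumPy. This is a minimal illustration of the objective only (in the actual framework the inputs are GNN embeddings of the two augmented views, and the computation runs in an autodiff framework):

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=None):
    # z1, z2: (batch, d) embedding matrices of the two augmented views.
    d = z1.shape[1]
    lam = 1.0 / d if lam is None else lam  # default lambda = 1/d, Tsai et al. (2021)
    # Normalize along the batch dimension: zero mean, unit standard deviation.
    z1 = (z1 - z1.mean(axis=0)) / z1.std(axis=0)
    z2 = (z2 - z2.mean(axis=0)) / z2.std(axis=0)
    # Empirical cross-correlation matrix C of equation (1).
    c = (z1.T @ z2) / np.sqrt(
        (z1**2).sum(axis=0)[:, None] * (z2**2).sum(axis=0)[None, :]
    )
    # Equation (2): invariance term + lambda-weighted redundancy-reduction term.
    invariance = ((1.0 - np.diag(c)) ** 2).sum()
    redundancy = (c**2).sum() - (np.diag(c) ** 2).sum()
    return invariance + lam * redundancy

rng = np.random.default_rng(0)
z = rng.normal(size=(1000, 4))

# Identical views: on-diagonal correlations are exactly 1, so only the small
# off-diagonal redundancy term remains and the loss is near zero.
loss_same = barlow_twins_loss(z, z)
```

Minimizing this objective pushes C towards the identity matrix, which simultaneously enforces augmentation invariance (diagonal) and decorrelated embedding components (off-diagonal), with no negative samples and no asymmetry between the two branches.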
Graph Barlow Twins: A self-supervised representation learning framework for graphs | 1 INTRODUCTION . Graph representation learning has been intensively studied for the last few years , having proposed various architectures and layers , like GCN ( Kipf & Welling , 2017 ) , GAT ( Veličković et al. , 2018 ) , GraphSAGE ( Hamilton et al. , 2017 ) etc . A substantial part of these methods was introduced in the semi-supervised learning paradigm , which requires the existence of expensive labeled data ( e.g . node labels or whole graph labels ) . To overcome this , the research community has been exploring unsupervised learning methods for graphs . This resulted in a variety of different approaches including : shallow ones ( DeepWalk ( Perozzi et al. , 2014 ) , Node2vec ( Grover & Leskovec , 2016 ) , LINE ( Tang et al. , 2015 ) ) that ignore the feature attribute richness , focusing only on the structural graph information ; and graph neural network methods ( DGI ( Veličković et al. , 2019 ) , GAE , VGAE ( Kipf & Welling , 2016 ) ) that build representations upon node or graph features , achieving state-ofthe-art performance in those days . Recently self-supervised paradigm is the most emerging branch of unsupervised graph representation learning and gathers current interest and strenuous research effort towards better results . The most prominent methods were developed around the contrastive learning approach , such as GCA ( Zhu et al. , 2020b ) , GraphCL ( You et al. , 2020 ) , GRACE ( Zhu et al. , 2020a ) or DGI ( Veličković et al. , 2019 ) . Although contrastive methods are popular in many machine learning areas , including computer vision and natural language processing , their fundamental limitation is the need for negative samples . Consequently , the sampling procedure for negative examples highly affects the overall quality of the embeddings . 
In terms of images or texts , the definition of negative samples might seem not that problematic , but in the case of graphs there is no clear intuition . For instance , what is the negative counterpart for a particular node in the graph , should it be a node that is not a direct neighbor , or a node that is in a different graph component ? There are multiple options available , but the right choice strictly dependent on the downstream task . Researchers have already tackled the problem of building so-called negative-sample-free methods . In research being conducted in computer vision they obtained successful results with methods such as BYOL ( Grill et al. , 2020 ) , SimSiam ( Chen & He , 2020 ) or Barlow Twins ( Zbontar et al. , 2021 ) . These models utilize siamese network architectures with various techniques , like gradient stopping , asymmetry or batch and layer normalizations , to prevent collapsing to trivial solutions . Based on BYOL , Thakoor et al . ( 2021 ) proposed the Bootstrapped Representation Learning on Graphs ( BGRL ) framework . It utilizes two graph encoders : an online and a target one . The former one passes the embedding vectors to a predictor network , which tries to predict the embeddings from the target encoder . The loss is measured as the cosine similarity and the gradient is backpropagated only through the online network ( gradient stopping mechanism ) . The target encoder is updated using an exponential moving average of the online encoder weights . Such setup has been shown to produce graph representation vectors that achieve state-of-the-art performance in node classification using various benchmark datasets . Notwithstanding , assuming asymmetry between the network twins ( such as the predictor network , gradient stopping , and a moving average on the weight updates ) the method is conceptually complex . 
Employing a symmetric network architecture would seem more intuitive and reasonable , hence in this paper , we propose Graph Barlow Twins ( G-BT ) , a self-supervised graph representation learning framework , which computes the embeddings cross-correlation matrix of two distorted views of a single graph . The approach was firstly introduced in image representation learning as the Barlow Twins model ( Zbontar et al. , 2021 ) but was not able to handle graphs . The utilized network architecture is fully symmetric and does not need any special techniques to build non trivial embedding vectors . The distorted graph views are passed through the same encoder which is trained using the backpropagated gradients ( in a symmetrical manner ) . Our main contributions can be summarized as follows : ( I ) We propose a self-supervised graph representation learning framework Graph Barlow Twins . It is built upon the recently proposed Barlow Twins loss , which utilizes the embedding cross-correlation matrix of two distorted views of a graph to optimize the representation vectors . Our framework neither requires using negative samples ( opposed to most other self-supervised approaches ) nor it introduces any kind of asymmetry in the network architecture ( like state-of-the-art BGRL ) . Moreover , our architecture is converges substantially faster than all other state-of-the-art methods . ( II ) We evaluate our framework in node classification tasks : ( 1 ) for 5 smaller benchmark datasets in a transductive setting , ( 2 ) using the ogb-arxiv dataset from the Open Graph Benchmark ( also in the transductive setting ) , ( 3 ) for multiple graphs in the inductive setting using the PPI ( Protein-Protein Interaction ) dataset , and finally ( 4 ) for the large-scale graph dataset ogb-products in the inductive setting . We use both GCN-based encoders as well as a GAT-based one . We observe that our method achieves analogous results compared to state-of-the-art methods . 
( III ) We ensure reproducibility by making the code of both our models as well as experimental pipeline available ( currently attached in the supplementary materials ) . 2 RELATED WORKS . Self-supervised learning The idea of self-supervised learning ( SSL ) has a long history . Introduced in the early work of Schmidhuber ( Schmidhuber , 1990 ) has more than 30 years of exploration and research now . Recently self-supervised learning was again rediscovered and found a broad interest , especially in computer vision and natural language processing . One of the most prominent SSL methods for image representation learning , Bootstrap Your Own Latent , BYOL ( Grill et al. , 2020 ) , performs on par or better than the current state of the art on both transfer and semi-supervised benchmarks . It relies on two neural networks that interact and learn from each other . From an augmented view of an image , it trains the first one to predict the target network representation of the same image under a different view . At the same time , it updates the second network with a slow-moving average of the first network . Another approach to image representation SSL implements simple siamese networks , namely SimSiam ( Chen & He , 2020 ) . It achieves comparative results while not demanding negative samples , large batches , nor momentum encoders . Authors emphasize collapsing solutions for the loss and structure but show how a stop-gradient operation plays an essential role in preventing it . Recent method , Barlow Twins ( Zbontar et al. , 2021 ) , advances the SSL field with a new objective function that naturally avoids collapses by measuring the cross-correlation matrix between the outputs of two twin , identical networks fed with distorted versions of a sample , and makes it as close to the identity matrix as possible . Representations of distorted versions of samples are then expected to be similar , reducing the redundancy between them . 
What differentiates the method is that it does not require large batches or asymmetry between the network twins . It outperforms previous methods on ImageNet for semi-supervised classification . Graph representation learning Learning the representation also spreads to other domains . The graph embedding problem has also attracted much attention from the research community worldwide in recent years . Plenty of methods have been developed , each focused on a different aspect of network embeddings , such as proximity , structure , attributes , learning paradigm or scalability . There exist plenty of shallow methods , among others DeepWalk ( Perozzi et al. , 2014 ) , Node2vec ( Grover & Leskovec , 2016 ) or LINE ( Tang et al. , 2015 ) , that use a simple notion of graph coding through random walks or on encoder-decoder objectives that optimize first and second-order node similarity . More complex graph neural networks , such as GCN ( Kipf & Welling , 2017 ) or GraphSAGE ( Hamilton et al. , 2017 ) implements the basic encoder algorithm with various neighborhood aggregation . Following the extension , graph attention network GAT ( Veličković et al. , 2018 ) leverages masked self-attentional layers to address the shortcomings of graph convolutions and their troublesome approximations . Self-supervised graph representation learning Inspired by the success of contrastive methods in vision and NLP , the procedures were also adapted to graphs . Early DGI ( Veličković et al. , 2019 ) employs GNN to learn node embeddings and obtains the graph embedding via a readout function and maximizes the mutual information between node embeddings and the graph embedding by discriminating nodes in the original graph from nodes in a corrupted graph . GCA ( Zhu et al. , 2020b ) studied various augmentation procedures . GRACE ( Zhu et al. 
, 2020a ) creates two augmented versions of a graph , pulls together the representation of the same node in both graphs , and pushes apart representations of every other node . Recent GraphCL ( You et al. , 2020 ) method is another example of representative approach using contrastive learning . All the previous methods use negative sampling approaches for the embedding optimization , yet such setting has a high complexity . To overcome this , BGRL ( Thakoor et al. , 2021 ) proposed to use an approach that does not rely on negative samples . It uses two kinds of encoder networks ( online and target ) , introducing a nonintuitive asymmetric pipeline architecture , but provides state-of-the-art SSL results . Moreover , it relies on several techniques to prevent trivial solutions ( gradient stopping , momentum encoder ) . A concurrent approach to BGRL is DGB ( Che et al. , 2020 ) . 3 PROPOSED FRAMEWORK . Motivated by the emerging self-supervised learning paradigm and its recent applications in graph representation learning ( BGRL ( Thakoor et al. , 2021 ) ) , we propose Graph Barlow Twins – a framework that builds node embeddings using a symmetric network architecture and an empirical cross-correlation based loss function . The overall pipeline of our framework is shown in Figure 1 . The consecutive processing steps can be described as follows : Graph data We represent a graph G with nodes V and edges E as the tuple : ( X , A ) , where X ∈ R|V|×k is the node feature matrix and k is the feature dimensionality ; A ∈ { 0 , 1 } |V|×|V| is the adjacency matrix , such that Ai , j = 1 iff ( i , j ) ∈ E . In the general case , a graph could also have associated edge features or graph level features , but for simplicity we omit those here . Nevertheless , these could also be used in our framework , as long as the encoder can make use of such features . Generating graph views via augmentation Following other works ( Thakoor et al. , 2021 ; Zhu et al. , 2020b ; You et al. 
, 2020 ; Zhu et al. , 2020a ) , we select two kinds of augmentations – edge dropping and node feature masking – and generate two views G^{(1)} and G^{(2)} of the input graph . In the edge dropping case , we remove edges according to a generated mask of size |E| ( the number of edges in the graph ) with elements sampled from the Bernoulli distribution B ( 1 − p_A ) . When it comes to masking node features , we employ a similar scheme and generate a mask of size k , also sampled from the Bernoulli distribution B ( 1 − p_X ) . Note that we mask node features at the scale of the whole graph , i.e. , the same features are masked for each node . Other works apply different augmentation parameters p_X , p_A for each generated view , but as our framework is fully symmetrical , we postulate that it is enough to use the same parameters to generate both augmentations ( see Section 4.4 ) . Encoder network The main component of the proposed framework is the encoder network f_θ : G → R^{|V|×d} . It takes an augmented graph as the input and computes ( in our case ) a d-dimensional representation vector for each node in the graph . Note that we do not specify any particular encoder network , and one may even use encoders that construct embeddings for edges or whole graphs . In our experiments , we will show the application of GCN ( Kipf & Welling , 2017 ) and GAT ( Veličković et al. , 2018 ) based encoder networks . Both augmented graph views G^{(1)} , G^{(2)} are passed through the same encoder , resulting in two embedding matrices Z^{(1)} and Z^{(2)} , respectively . The original Barlow Twins method also specified a projector network ( implemented as an MLP ) to reduce the high embedding dimensionality ( of the ResNet encoder ) . Our approach eliminates that step as it uses GNNs with low-dimensional embeddings . Loss function In our work , we propose to use a negative-sample-free loss function to train the encoder network .
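As a concrete illustration , the graph representation ( X , A ) and the two augmentation operators described above can be sketched in NumPy . The toy graph , its feature values and the masking probabilities below are hypothetical ; this is only a sketch of the described scheme , not the authors ' implementation .

```python
import numpy as np

# Toy graph G = (X, A): |V| = 4 nodes, k = 3 features (hypothetical values).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
X = np.random.default_rng(42).random((4, 3))   # X in R^{|V| x k}

def augment(X, edges, p_X=0.3, p_A=0.3, rng=None):
    """One augmented view: drop each edge with probability p_A and mask
    node features graph-wide with probability p_X (Bernoulli masks).
    The same feature columns are zeroed for every node."""
    if rng is None:
        rng = np.random.default_rng()
    edge_keep = rng.random(len(edges)) > p_A   # mask of size |E|, ~ B(1 - p_A)
    kept_edges = [e for e, keep in zip(edges, edge_keep) if keep]
    feat_mask = rng.random(X.shape[1]) > p_X   # mask of size k, ~ B(1 - p_X)
    return X * feat_mask, kept_edges           # mask broadcast over all nodes

# The framework is symmetric, so both views use the same (p_X, p_A):
X1, E1 = augment(X, edges, rng=np.random.default_rng(0))
X2, E2 = augment(X, edges, rng=np.random.default_rng(1))
```

Note that feature masking multiplies by a boolean mask of length k , so an entire feature column is either kept intact or zeroed for every node , matching the graph-wide masking described above .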
We first normalize the embedding matrices Z^{(1)} and Z^{(2)} along the batch dimension ( a mean of zero and a standard deviation equal to one ) , and then we compute the empirical cross-correlation matrix C ∈ R^{d×d} : C_{ij} = \frac{ \sum_b Z^{(1)}_{b,i} Z^{(2)}_{b,j} }{ \sqrt{ \sum_b ( Z^{(1)}_{b,i} )^2 } \sqrt{ \sum_b ( Z^{(2)}_{b,j} )^2 } } , ( 1 ) where b indexes the batch and i , j index the embedding components . Such a setting was originally proposed under the name Barlow Twins . Neuroscientist H. Barlow 's redundancy-reduction principle has motivated many methods both in supervised and unsupervised learning ( Deco & Parra , 1997 ; Schmidhuber et al. , 1996 ; Ballé et al. , 2017 ) . Recently , Zbontar et al . ( 2021 ) employed this principle to build a self-supervised image representation learning algorithm ( we bring this idea to the domain of graph-structured data ) . The cross-correlation matrix C is optimized by the Barlow Twins loss function L_{BT} ( see Equation 2 ) to be equal to the identity matrix . The loss is composed of two parts : ( I ) the invariance term and ( II ) the redundancy reduction term . The first one forces the on-diagonal elements C_{ii} to be equal to one , hence making the embeddings invariant to the applied augmentations . The second term optimizes the off-diagonal elements C_{ij} to be equal to zero – this results in decorrelated components of the embedding vectors . L_{BT} = \sum_i ( 1 − C_{ii} )^2 + λ \sum_i \sum_{j \neq i} C_{ij}^2 ( 2 ) The λ > 0 parameter defines the trade-off between the invariance and redundancy reduction terms when optimizing the overall loss function . In Tsai et al . ( 2021 ) , the authors proposed to use λ = 1/d , which we employ in our experimental setting . Otherwise , one can perform a simple grid search to find the best λ value in a particular experiment . Please note that in such a setting the gradient is symmetrically backpropagated through the encoder network .
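The empirical cross-correlation matrix of Equation 1 and the loss of Equation 2 can be sketched directly in NumPy . The random embedding matrices below are placeholders , and the default λ = 1/d follows the setting quoted above ; this is a sketch , not the authors ' code .

```python
import numpy as np

def barlow_twins_loss(Z1, Z2, lam=None):
    """Cross-correlation matrix C (Eq. 1) on batch-normalized embeddings,
    then the invariance + redundancy-reduction loss (Eq. 2)."""
    d = Z1.shape[1]
    lam = 1.0 / d if lam is None else lam          # lambda = 1/d by default
    # Normalize along the batch dimension: zero mean, unit std.
    Z1n = (Z1 - Z1.mean(axis=0)) / Z1.std(axis=0)
    Z2n = (Z2 - Z2.mean(axis=0)) / Z2.std(axis=0)
    # C_ij = sum_b Z1_bi Z2_bj / (sqrt(sum_b Z1_bi^2) sqrt(sum_b Z2_bj^2))
    denom = (np.sqrt((Z1n ** 2).sum(axis=0))[:, None]
             * np.sqrt((Z2n ** 2).sum(axis=0))[None, :])
    C = (Z1n.T @ Z2n) / denom
    invariance = np.sum((1.0 - np.diag(C)) ** 2)           # (I) on-diagonal
    redundancy = np.sum(C ** 2) - np.sum(np.diag(C) ** 2)  # (II) off-diagonal
    return invariance + lam * redundancy

# Identical views make every on-diagonal element exactly one,
# so the invariance term vanishes:
Z = np.random.default_rng(0).standard_normal((128, 8))
print(barlow_twins_loss(Z, Z, lam=0.0))   # ≈ 0
```

With identical views the off-diagonal ( redundancy ) term is generally nonzero — it measures residual correlation between embedding components — which is exactly what the λ-weighted term penalizes during training .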
We do not rely on any special techniques , like momentum encoders , gradient stopping , or predictor networks . In preliminary experiments , we also investigated the Hilbert-Schmidt Independence Criterion ( due to its relation to the Barlow Twins objective ( Tsai et al. , 2021 ) ) , but we did not observe any performance gain . | This paper proposed a self-supervised learning framework for graph representation learning based on a cross-correlation-based loss function. In the proposed framework, two views of the input graph obtained by augmentation methods are passed through the same encoder to compute two embedding matrices, then the Barlow Twins loss is computed from the embedding matrices. The main contribution of this paper lies in that it adapted Barlow Twins from vision to the graph representation learning field and evaluated the performance of this self-supervised framework in multiple node classification tasks. The proposed method achieved results comparable to SOTA methods with lower time and space complexity. | SP:54c599a6476212857ac5d5871c361e31a78b7100
Logarithmic landscape and power-law escape rate of SGD | Stochastic gradient descent ( SGD ) undergoes complicated multiplicative noise for the mean-square loss . We use this property of the SGD noise to derive a stochastic differential equation ( SDE ) with simpler additive noise by performing a random time change . In the SDE , the loss gradient is replaced by the logarithmized loss gradient . By using this formalism , we obtain the escape rate formula from a local minimum , which is determined not by the loss barrier height ∆L = L ( θs ) − L ( θ∗ ) between a minimum θ∗ and a saddle θs but by the logarithmized loss barrier height ∆ logL = log [ L ( θs ) /L ( θ∗ ) ] . Our escape-rate formula strongly depends on the typical magnitude h∗ and the number n of the outlier eigenvalues of the Hessian . This result explains an empirical fact that SGD prefers flat minima with low effective dimensions , which gives an insight into implicit biases of SGD . 1 INTRODUCTION . Deep learning has achieved breakthroughs in various applications in artificial intelligence such as image classification ( Krizhevsky et al. , 2012 ; LeCun et al. , 2015 ) , speech recognition ( Hinton et al. , 2012 ) , natural language processing ( Collobert & Weston , 2008 ) , and natural sciences ( Iten et al. , 2020 ; Bapst et al. , 2020 ; Seif et al. , 2021 ) . Such unparalleled success of deep learning hinges crucially on stochastic gradient descent ( SGD ) or its variants as an efficient training algorithm . Although the loss landscape is highly nonconvex , the SGD often succeeds in finding a global minimum . It has been argued that the SGD noise plays a key role in escaping from local minima ( Jastrzȩbski et al. , 2017 ; Wu et al. , 2018 ; 2020 ; Zhu et al. , 2019 ; Meng et al. , 2020 ; Xie et al. , 2021 ; Liu et al. , 2021 ) . It has also been suggested that SGD has an implicit bias that is beneficial for generalization . 
That is , SGD may help the network to find flat minima , which are considered to imply good generalization ( Keskar et al. , 2017 ; Hoffer et al. , 2017 ; Wu et al. , 2018 ) . How and why does SGD help the network escape from bad local minima and find flat minima ? These questions have been addressed in several works , and it is now recognized that the SGD noise strength and structure importantly affect the efficiency of escape from local minima . Our work follows this line of research , and adds new theoretical perspectives . In physics and chemistry , escape from a local minimum of the ( free ) energy landscape due to thermal noise at temperature T has been thoroughly discussed ( Kramers , 1940 ; Langer , 1969 ) . When the ( free ) energy barrier is given by ∆E , the escape rate is proportional to e^{−∆E/T} , which is known as the Arrhenius law . By analogy , in machine learning , escape from a local minimum of the loss function is considered to be determined by the loss barrier height ∆L = L ( θs ) − L ( θ∗ ) , where L ( θ ) denotes the loss function at the network parameters θ , θ∗ stands for a local minimum of L ( θ ) , and θs denotes a saddle point that separates θ∗ from other minima . If we assume that the SGD noise is uniform and isotropic , which is often assumed in the machine-learning literature ( Jastrzȩbski et al. , 2017 ) , the escape rate is proportional to e^{−∆L/D} , where D denotes the SGD noise strength . In this paper , we show that the inhomogeneity of the SGD noise strength brings about a drastic modification for the mean-square loss . It turns out that the escape rate is determined by the logarithmized loss barrier height ∆ logL = logL ( θs ) − logL ( θ∗ ) = log [ L ( θs ) /L ( θ∗ ) ] . In other words , the escape rate is determined not by the difference but by the ratio of L ( θs ) and L ( θ∗ ) . This result means that even if the loss barrier height ∆L is the same , minima with smaller values of L ( θ∗ ) are more stable .
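A tiny numeric illustration of this point , with made-up loss values : two minima with the same barrier height ∆L can have very different logarithmized barriers ∆ logL , and hence very different stabilities under a ratio-based escape rate .

```python
import numpy as np

# Two hypothetical local minima with the SAME loss barrier height 0.9:
L_min = np.array([1.0, 0.1])            # loss at the minima, L(theta*)
L_saddle = L_min + 0.9                  # loss at the saddles, L(theta_s)

delta_L = L_saddle - L_min              # identical barriers: [0.9, 0.9]
delta_logL = np.log(L_saddle / L_min)   # log(1.9) vs log(10)

print(delta_L)       # [0.9 0.9]
print(delta_logL)    # the deeper minimum (L = 0.1) has the much larger log barrier
```

Here ∆ logL ≈ 0.64 for the shallow minimum versus ≈ 2.30 for the deep one , so the minimum with the smaller L ( θ∗ ) is far more stable even though ∆L is identical .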
Moreover , given the fact that the eigenvalue spectrum of the Hessian at a minimum consists of a bulk of almost zero eigenvalues and outliers ( Sagun et al. , 2017 ; Papyan , 2019 ) , our escape-rate formula implies that SGD prefers flat minima with a low effective dimension , where the effective dimension is defined as the number of outliers ( MacKay , 1992 ) and flatness is measured by a typical magnitude of outlier eigenvalues ( Keskar et al. , 2017 ) . The previous theories ( Jastrzȩbski et al. , 2017 ; Wu et al. , 2018 ; Zhu et al. , 2019 ; Meng et al. , 2020 ; Xie et al. , 2021 ; Liu et al. , 2021 ) have also successfully explained that SGD prefers flat minima , but have not shown the preference for small effective dimensions . The logarithmized loss naturally explains the latter , and sheds light on implicit biases of SGD . Main contributions : We obtain the following main results : • We derive an equation for approximating the SGD noise in Eq . ( 6 ) . Remarkably , the SGD noise strength in the mean-square loss is shown to be proportional to the loss function , which is experimentally verified in Sec . 5.2 . A key ingredient in deriving Eq . ( 6 ) is the decoupling approximation given in Eq . ( 7 ) . This is a novel approximate method introduced in our analysis , and hence we experimentally verify it in Sec . 5.1 . • We derive a novel stochastic differential equation ( SDE ) in Eq . ( 14 ) via a random time change introduced in Eq . ( 13 ) . Although the original SDE ( 4 ) has a multiplicative noise , the transformed SDE ( 14 ) has a simple additive noise with the gradient of the logarithmic loss . This shows the convenience of the logarithmic loss landscape for understanding SGD . • We derive a novel form of the SGD escape rate from a local minimum in Eq . ( 16 ) . Remarkably , the escape rate depends on the ratio between L ( θ∗ ) and L ( θs ) . In Sec . 5.3 , we experimentally test the validity of this result for linear regressions .
• Our escape rate crucially depends on the flatness and the effective dimension , which shows that SGD has implicit biases towards flat minima with low effective dimension . We also show in Eq . ( 18 ) that a local minimum with an effective dimension n greater than a certain critical value nc becomes unstable . Related works : The role of the SGD noise structure has been discussed in some previous works ( Zhu et al. , 2019 ; Xie et al. , 2021 ; Liu et al. , 2021 ; Meng et al. , 2020 ; Wojtowytsch , 2021 ) . It was pointed out that the anisotropic nature of the SGD noise is important : the SGD noise covariance matrix is aligned with the Hessian of the loss function , which is beneficial for escape from sharp minima ( Zhu et al. , 2019 ; Xie et al. , 2021 ; Liu et al. , 2021 ) . These previous works , however , do not take the inhomogeneity of the SGD noise strength into account , and consequently , the escape rates derived there depend exponentially on the loss barrier height , which differs from our formula . Compared with the anisotropy of the SGD noise , the inhomogeneity of the SGD noise strength has been less explored . In ( Meng et al. , 2020 ; Wojtowytsch , 2021 ) , the SGD dynamics under a state-dependent noise is discussed . However , in these previous works , the connection between the noise strength and the loss function was not theoretically established , and the logarithmized loss landscape was not discussed . The instability due to large effective dimensions was also not shown . Another recent work ( Pesme et al. , 2021 ) observed that the noise is proportional to the loss for specific simple models . In our paper , such a result is derived for more generic models . Gürbüzbalaban et al . ( 2021 ) showed that SGD will converge to a heavy-tailed stationary distribution due to the multiplicative nature of the SGD noise in a simple linear regression problem .
Our paper strengthens this result : we argue that such a heavy-tailed distribution generically appears for the mean-square loss . 2 BACKGROUND . 2.1 SETUP . We consider supervised learning . Let D = { ( x^{(µ)} , y^{(µ)} ) : µ = 1 , 2 , . . . , N } be the training dataset , where x^{(µ)} ∈ R^d denotes a data vector and y^{(µ)} ∈ R is its label . The network output for a given input x is denoted by f ( θ , x ) ∈ R , where θ ∈ R^P stands for the set of trainable parameters with P being the number of trainable parameters ( extension to multi-dimensional labels and outputs is straightforward ) . In this work , we focus on the mean-square loss L ( θ ) = \frac{1}{2N} \sum_{µ=1}^{N} [ f ( θ , x^{(µ)} ) − y^{(µ)} ]^2 =: \frac{1}{N} \sum_{µ=1}^{N} \ell_µ ( θ ) . ( 1 ) The training proceeds through optimization of L ( θ ) . In most machine-learning applications , the optimization is done via SGD or its variants . In SGD , the parameter θ_{k+1} at the time step k + 1 is determined by θ_{k+1} = θ_k − η ∇L_{B_k} ( θ_k ) , L_{B_k} ( θ ) = \frac{1}{B} \sum_{µ∈B_k} \ell_µ ( θ ) , ( 2 ) where η > 0 is the learning rate , B_k ⊂ { 1 , 2 , . . . , N } with |B_k| = B is the mini-batch used at the k-th time step , and L_{B_k} denotes the mini-batch loss . Since the training dataset D is randomly divided into mini-batches , the dynamics defined by Eq . ( 2 ) is stochastic . When B = N , the full training data samples are used for every iteration . In this case , the dynamics is deterministic and called gradient descent ( GD ) . SGD is interpreted as GD with stochastic noise . By introducing the SGD noise ξ_k = − [ ∇L_{B_k} ( θ_k ) − ∇L ( θ_k ) ] , Eq . ( 2 ) is rewritten as θ_{k+1} = θ_k − η ∇L ( θ_k ) + η ξ_k . ( 3 ) Obviously , ⟨ξ_k⟩ = 0 , where the brackets denote the average over possible choices of mini-batches . The noise covariance matrix is defined as Σ ( θ_k ) := ⟨ξ_k ξ_k^T⟩ . The covariance structure of the SGD noise is important in analyzing the SGD dynamics , which will be discussed in Sec . 3.1 . 2.2 STOCHASTIC DIFFERENTIAL EQUATION FOR SGD .
When the parameter update for each iteration is small , which is typically the case when the learning rate η is small enough , we can consider the continuous-time approximation ( Li et al. , 2017 ; Smith & Le , 2018 ) . By introducing a continuous time variable t ∈ R and regarding η as an infinitesimal time step dt , we have the SDE dθ_t = −∇L ( θ_t ) dt + \sqrt{ η Σ ( θ_t ) } · dW_t , ( 4 ) where dW_t ∼ N ( 0 , I_P dt ) with I_n being the n-by-n identity matrix , and the multiplicative noise \sqrt{ η Σ ( θ_t ) } · dW_t is interpreted in the Itô sense since the noise ξ_k in Eq . ( 3 ) depends on θ_k but not on θ_{k+1} . Throughout this work , we consider the continuous-time approximation ( 4 ) with Gaussian noise . In machine learning , the gradient Langevin dynamics ( GLD ) is also considered , in which isotropic and uniform Gaussian noise is injected into the GD as dθ_t = −∇L ( θ_t ) dt + \sqrt{ 2D } dW_t , ( 5 ) where D > 0 corresponds to the noise strength ( it is also called the diffusion coefficient ) ( Sato & Nakagawa , 2014 ; Zhang et al. , 2017b ; Zhu et al. , 2019 ) . The stationary probability distribution P_{GLD} ( θ ) of θ for GLD is given by the Gibbs distribution P_{GLD} ( θ ) ∝ e^{ −L ( θ ) /D } . We will see in Sec . 4 that the SGD noise structure , which is characterized by Σ ( θ ) , drastically alters the stationary distribution and the escape rate from a local minimum .
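As a sanity-check sketch ( not from the paper ) , the GLD of Eq . ( 5 ) can be simulated with a simple Euler–Maruyama discretization on a one-dimensional quadratic loss L ( θ ) = hθ²/2 ; the Gibbs distribution ∝ e^{−L ( θ ) /D} is then Gaussian with variance D/h , which the long-run samples should reproduce . The values of h , D and dt below are illustrative .

```python
import numpy as np

# Euler–Maruyama simulation of dtheta = -grad L(theta) dt + sqrt(2 D) dW
# for L(theta) = h theta^2 / 2, so grad L(theta) = h theta.
rng = np.random.default_rng(1)
h, D, dt, steps = 2.0, 0.5, 1e-3, 400_000

noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(steps)
theta, samples = 0.0, []
for t in range(steps):
    theta += -h * theta * dt + noise[t]
    if t > steps // 2:                 # discard burn-in
        samples.append(theta)

# Stationary variance of the Gibbs distribution e^{-L/D} is D/h = 0.25:
print(np.var(samples))                 # close to 0.25
```

The same discretization with the state-dependent diffusion \sqrt{ η Σ ( θ_t ) } in place of \sqrt{2D} would simulate the multiplicative-noise SDE ( 4 ) instead .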
Such a logarithmized loss landscape is then exploited to derive the escape rate of SGD from a local minimum. | SP:c08b370b56eba4fb58a1ea1cba0f45c9cd3143e7 |
Logarithmic landscape and power-law escape rate of SGD | Stochastic gradient descent ( SGD ) undergoes complicated multiplicative noise for the mean-square loss . We use this property of the SGD noise to derive a stochastic differential equation ( SDE ) with simpler additive noise by performing a random time change . In the SDE , the loss gradient is replaced by the logarithmized loss gradient . By using this formalism , we obtain the escape rate formula from a local minimum , which is determined not by the loss barrier height ∆L = L ( θs ) − L ( θ∗ ) between a minimum θ∗ and a saddle θs but by the logarithmized loss barrier height ∆ logL = log [ L ( θs ) /L ( θ∗ ) ] . Our escape-rate formula strongly depends on the typical magnitude h∗ and the number n of the outlier eigenvalues of the Hessian . This result explains an empirical fact that SGD prefers flat minima with low effective dimensions , which gives an insight into implicit biases of SGD . 1 INTRODUCTION . Deep learning has achieved breakthroughs in various applications in artificial intelligence such as image classification ( Krizhevsky et al. , 2012 ; LeCun et al. , 2015 ) , speech recognition ( Hinton et al. , 2012 ) , natural language processing ( Collobert & Weston , 2008 ) , and natural sciences ( Iten et al. , 2020 ; Bapst et al. , 2020 ; Seif et al. , 2021 ) . Such unparalleled success of deep learning hinges crucially on stochastic gradient descent ( SGD ) or its variants as an efficient training algorithm . Although the loss landscape is highly nonconvex , the SGD often succeeds in finding a global minimum . It has been argued that the SGD noise plays a key role in escaping from local minima ( Jastrzȩbski et al. , 2017 ; Wu et al. , 2018 ; 2020 ; Zhu et al. , 2019 ; Meng et al. , 2020 ; Xie et al. , 2021 ; Liu et al. , 2021 ) . It has also been suggested that SGD has an implicit bias that is beneficial for generalization . 
That is , SGD may help the network to find flat minima , which are considered to imply good generalization ( Keskar et al. , 2017 ; Hoffer et al. , 2017 ; Wu et al. , 2018 ) . How and why does SGD help the network escape from bad local minima and find flat minima ? These questions have been addressed in several works , and it is now recognized that the SGD noise strength and structure importantly affect the efficiency of escape from local minima . Our work follows this line of research , and adds new theoretical perspectives . In physics and chemistry , escape from a local minimum of the ( free ) energy landscape due to thermal noise at temperature T has been thoroughly discussed ( Kramers , 1940 ; Langer , 1969 ) . When the ( free ) energy barrier is given by ∆E , the escape rate is proportional to e^{−∆E/T} , which is known as the Arrhenius law . By analogy , in machine learning , escape from a local minimum of the loss function is considered to be determined by the loss barrier height ∆L = L ( θs ) − L ( θ∗ ) , where L ( θ ) denotes the loss function at the network parameters θ , θ∗ stands for a local minimum of L ( θ ) , and θs denotes a saddle point that separates θ∗ from other minima . If we assume that the SGD noise is uniform and isotropic , which is often assumed in the machine-learning literature ( Jastrzȩbski et al. , 2017 ) , the escape rate is proportional to e^{−∆L/D} , where D denotes the SGD noise strength . In this paper , we show that the inhomogeneity of the SGD noise strength brings about a drastic modification for the mean-square loss . It turns out that the escape rate is determined by the logarithmized loss barrier height ∆ logL = logL ( θs ) − logL ( θ∗ ) = log [ L ( θs ) /L ( θ∗ ) ] . In other words , the escape rate is determined not by the difference but by the ratio of L ( θs ) and L ( θ∗ ) . This result means that even if the loss barrier height ∆L is the same , minima with smaller values of L ( θ∗ ) are more stable .
Moreover , given the fact that the eigenvalue spectrum of the Hessian at a minimum consists of a bulk of almost zero eigenvalues and outliers ( Sagun et al. , 2017 ; Papyan , 2019 ) , our escape-rate formula implies that SGD prefers flat minima with a low effective dimension , where the effective dimension is defined as the number of outliers ( MacKay , 1992 ) and flatness is measured by a typical magnitude of outlier eigenvalues ( Keskar et al. , 2017 ) . The previous theories ( Jastrzȩbski et al. , 2017 ; Wu et al. , 2018 ; Zhu et al. , 2019 ; Meng et al. , 2020 ; Xie et al. , 2021 ; Liu et al. , 2021 ) have also successfully explained that SGD prefers flat minima , but have not shown the preference for small effective dimensions . The logarithmized loss naturally explains the latter , and sheds light on implicit biases of SGD . Main contributions : We obtain the following main results : • We derive an equation for approximating the SGD noise in Eq . ( 6 ) . Remarkably , the SGD noise strength in the mean-square loss is shown to be proportional to the loss function , which is experimentally verified in Sec . 5.2 . A key ingredient in deriving Eq . ( 6 ) is the decoupling approximation given in Eq . ( 7 ) . This is a novel approximate method introduced in our analysis , and hence we experimentally verify it in Sec . 5.1 . • We derive a novel stochastic differential equation ( SDE ) in Eq . ( 14 ) via a random time change introduced in Eq . ( 13 ) . Although the original SDE ( 4 ) has a multiplicative noise , the transformed SDE ( 14 ) has a simple additive noise with the gradient of the logarithmic loss . This shows the convenience of the logarithmic loss landscape for understanding SGD . • We derive a novel form of the SGD escape rate from a local minimum in Eq . ( 16 ) . Remarkably , the escape rate depends on the ratio between L ( θ∗ ) and L ( θs ) . In Sec . 5.3 , we experimentally test the validity of this result for linear regressions .
• Our escape rate crucially depends on the flatness and the effective dimension , which shows that SGD has implicit biases towards flat minima with low effective dimension . We also show in Eq . ( 18 ) that a local minimum with an effective dimension n greater than a certain critical value nc becomes unstable . Related works : The role of the SGD noise structure has been discussed in some previous works ( Zhu et al. , 2019 ; Xie et al. , 2021 ; Liu et al. , 2021 ; Meng et al. , 2020 ; Wojtowytsch , 2021 ) . It was pointed out that the anisotropic nature of the SGD noise is important : the SGD noise covariance matrix is aligned with the Hessian of the loss function , which is beneficial for escape from sharp minima ( Zhu et al. , 2019 ; Xie et al. , 2021 ; Liu et al. , 2021 ) . These previous works , however , do not take the inhomogeneity of the SGD noise strength into account , and consequently , the escape rates derived there depend exponentially on the loss barrier height , which differs from our formula . Compared with the anisotropy of the SGD noise , the inhomogeneity of the SGD noise strength has been less explored . In ( Meng et al. , 2020 ; Wojtowytsch , 2021 ) , the SGD dynamics under a state-dependent noise is discussed . However , in these previous works , the connection between the noise strength and the loss function was not theoretically established , and the logarithmized loss landscape was not discussed . The instability due to large effective dimensions was also not shown . Another recent work ( Pesme et al. , 2021 ) observed that the noise is proportional to the loss for specific simple models . In our paper , such a result is derived for more generic models . Gürbüzbalaban et al . ( 2021 ) showed that SGD will converge to a heavy-tailed stationary distribution due to the multiplicative nature of the SGD noise in a simple linear regression problem .
Our paper strengthens this result : we argue that such a heavy-tailed distribution generically appears for the mean-square loss . 2 BACKGROUND . 2.1 SETUP . We consider supervised learning . Let D = { ( x^{(µ)} , y^{(µ)} ) : µ = 1 , 2 , . . . , N } be the training dataset , where x^{(µ)} ∈ R^d denotes a data vector and y^{(µ)} ∈ R is its label . The network output for a given input x is denoted by f ( θ , x ) ∈ R , where θ ∈ R^P stands for the set of trainable parameters with P being the number of trainable parameters ( extension to multi-dimensional labels and outputs is straightforward ) . In this work , we focus on the mean-square loss L ( θ ) = \frac{1}{2N} \sum_{µ=1}^{N} [ f ( θ , x^{(µ)} ) − y^{(µ)} ]^2 =: \frac{1}{N} \sum_{µ=1}^{N} \ell_µ ( θ ) . ( 1 ) The training proceeds through optimization of L ( θ ) . In most machine-learning applications , the optimization is done via SGD or its variants . In SGD , the parameter θ_{k+1} at the time step k + 1 is determined by θ_{k+1} = θ_k − η ∇L_{B_k} ( θ_k ) , L_{B_k} ( θ ) = \frac{1}{B} \sum_{µ∈B_k} \ell_µ ( θ ) , ( 2 ) where η > 0 is the learning rate , B_k ⊂ { 1 , 2 , . . . , N } with |B_k| = B is the mini-batch used at the k-th time step , and L_{B_k} denotes the mini-batch loss . Since the training dataset D is randomly divided into mini-batches , the dynamics defined by Eq . ( 2 ) is stochastic . When B = N , the full training data samples are used for every iteration . In this case , the dynamics is deterministic and called gradient descent ( GD ) . SGD is interpreted as GD with stochastic noise . By introducing the SGD noise ξ_k = − [ ∇L_{B_k} ( θ_k ) − ∇L ( θ_k ) ] , Eq . ( 2 ) is rewritten as θ_{k+1} = θ_k − η ∇L ( θ_k ) + η ξ_k . ( 3 ) Obviously , ⟨ξ_k⟩ = 0 , where the brackets denote the average over possible choices of mini-batches . The noise covariance matrix is defined as Σ ( θ_k ) := ⟨ξ_k ξ_k^T⟩ . The covariance structure of the SGD noise is important in analyzing the SGD dynamics , which will be discussed in Sec . 3.1 . 2.2 STOCHASTIC DIFFERENTIAL EQUATION FOR SGD .
When the parameter update for each iteration is small , which is typically the case when the learning rate η is small enough , we can consider the continuous-time approximation ( Li et al. , 2017 ; Smith & Le , 2018 ) . By introducing a continuous time variable t ∈ R and regarding η as an infinitesimal time step dt , we have the SDE dθ_t = −∇L ( θ_t ) dt + \sqrt{ η Σ ( θ_t ) } · dW_t , ( 4 ) where dW_t ∼ N ( 0 , I_P dt ) with I_n being the n-by-n identity matrix , and the multiplicative noise \sqrt{ η Σ ( θ_t ) } · dW_t is interpreted in the Itô sense since the noise ξ_k in Eq . ( 3 ) depends on θ_k but not on θ_{k+1} . Throughout this work , we consider the continuous-time approximation ( 4 ) with Gaussian noise . In machine learning , the gradient Langevin dynamics ( GLD ) is also considered , in which isotropic and uniform Gaussian noise is injected into the GD as dθ_t = −∇L ( θ_t ) dt + \sqrt{ 2D } dW_t , ( 5 ) where D > 0 corresponds to the noise strength ( it is also called the diffusion coefficient ) ( Sato & Nakagawa , 2014 ; Zhang et al. , 2017b ; Zhu et al. , 2019 ) . The stationary probability distribution P_{GLD} ( θ ) of θ for GLD is given by the Gibbs distribution P_{GLD} ( θ ) ∝ e^{ −L ( θ ) /D } . We will see in Sec . 4 that the SGD noise structure , which is characterized by Σ ( θ ) , drastically alters the stationary distribution and the escape rate from a local minimum . | This paper studies the behavior of SGD around the minimum. Unlike many other works that simply treat SGD noise as a fixed noise, the authors characterize the location-dependence of SGD noise, which gives drastically different escaping behavior. By some simplification of the noise covariance matrix, the authors are able to reformulate SGD with isotropic noise via a time change. The new SDE is a dynamics on the log-loss landscape with a simple additive noise. With this result, the escape rate and stationary distributions are derived, depending polynomially on the loss, instead of exponentially.
Numerical experiments are conducted to justify the assumptions made in the analysis, and verify the theoretical conclusions in the case of linear regression. | SP:c08b370b56eba4fb58a1ea1cba0f45c9cd3143e7 |
Interventional Black-Box Explanations | 1 INTRODUCTION . The design of deep neural networks ( DNNs ) is built on a complex structure of neurons , layers and operations ( e.g. , convolutions , non-linearity and back-propagation ) . These biologically-inspired designs are able to evolve on their own from training data . Their high-dimensional parameter space allows them to learn meaningful semantics from large data distributions and to perform well on many tasks . However , it also makes it difficult to capture explanations of their behaviour . This is one of the fundamental obstacles to using these models in critical systems . Most popular explanation methods focus on creating saliency maps from classification models to visualize important features ( Selvaraju et al . ( 2017b ) ; Binder et al . ( 2016 ) ; Simonyan et al . ( 2014 ) ) of a predicted class . However , saliency maps are not sufficient to reason about model behaviour , and the explanations are not consistent across these methods . It is important to explain the mechanisms inside the hidden layers by which the model makes a prediction from an input . In this paper , we address this problem using causal inference . Understanding cause-effect relations in the DNN architecture is one way to make such black-box models transparent and to explain their behaviour when tested on new data . Structural causal models ( SCM ) ( Pearl ( 2009 ) ) and their causal diagrams use interventions in terms of the do-calculus to express these relations . We rely on this framework to explain black-box DNNs . We are interested in explaining post-hoc models , i.e. , models after training . Recently , some works have addressed DNN explanations using causality ( Chattopadhyay et al . ( 2019 ) ; O ' Shaughnessy et al . ( 2020 ) ; Narendra et al . ( 2018 ) ) . These methods concentrate on the data generation process and input-output relations in classification models .
Their goal is to explain the effects of changing aspects of input data on model predictions . Our work is different because we focus on the model itself , more precisely , the pre-trained knowledge stored in its parameters . We consider deep convolution models , which are widely applied to computer vision tasks and have some applications in speech recognition and natural language processing . A neural network architecture is a form of Directed Acyclic Graph ( DAG ) model , in which neurons ( nodes ) are connected by directed edges from one layer to the next . A causal diagram or SCM was used to summarize the complex structure of DNNs ( Chattopadhyay et al . ( 2019 ) ; Narendra et al . ( 2018 ) ) such that interventions can be applied to explain the effect of a variable of interest on the model prediction . The construction of the causal diagram depends on the variables of interest whose effects we would like to understand . Some methods ( Chattopadhyay et al . ( 2019 ) ; O ' Shaughnessy et al . ( 2020 ) ; Harradon et al . ( 2018 ) ) consider the latent features of a generative model as variables of interest . Narendra et al . ( 2018 ) focused on explaining the effect of the variance in convolution filters of a CNN . In this work , we provide a different view of the causal diagram ( see Fig.1 ) which allows us to perform causal reasoning on the entire structure of the DNN . We summarize our contributions as follows . We build a causal graph of a post-hoc DNN . We develop an algorithm , termed interventional black box explanations , to find the causal mechanisms that explain the local and global DNN behaviour on individual samples and across samples , respectively . We show that our explanations can be used to correct or improve the probability of a prediction at test time . We consider , in this work , architectures for image classification . We capture explanations for LeNet and ResNet18 architectures using MNIST and ImageNet data inputs .
We show that the explanations obtained by our method can be effectively used to remove noise in the model and improve its performance. Finally, like attribution-based methods, we provide visual explanations of the classifier's behaviour by computing visual concepts (semantics) from the effect variables (responses of causal filters). We qualitatively demonstrate that our method captures more informative concepts, strongly connected to the model's prediction and useful for human interpretability. 2 INTERVENTIONAL BLACK BOX EXPLANATION. We begin by paving the way to the construction of the proposed causal diagram for a post-hoc DNN (Section 2.1). In Section 2.2, we define the causal diagram, followed by a description of the proposed algorithm (Section 2.3). 2.1 FORMAL SETTING. We will use DNN and CNN interchangeably in this paper as we focus on classification networks comprising hidden convolution layers, but we keep in mind that our method is generic and can be applied to other architectures. In this formal setting, a black-box CNN has two major components: a feature extraction module consisting of n convolution layers, followed by a classifier with m fully connected layers. An input is an image X ∈ R^{d1×d2×c0} (d1, d2 are the spatial dimensions and c0 is the number of channels). The subscript number indicates that this is the input layer l0. An output y ∈ R^K (K is the number of classes) is a real-valued vector of predictions indicating the class logits of X. The feature extraction module can be a simple structure of sequential layers (l1, ..., ln) where each layer li consists of ci convolution units (we call them nodes) and is connected to all ci+1 units in layer li+1. This structure can be more complex in state-of-the-art DNNs, such as ResNets, where skip connections are designed to jump over some layers (see Fig. 1(a)).
For simplicity, we treat the non-linearity, batch normalization and pooling (if they exist) as parts of every node in layer li. A layer li in the classifier module consists of vi neurons, and every neuron is densely connected to all vi+1 neurons in layer li+1. The neurons are naturally the nodes in those densely connected layers. We will substitute vi for ci to unify the notation. Consequently, we summarize the graphical structure of the DNN by GM = ({v0, ..., vn, ..., vN}, {E0,1, ..., EN−1,N}), with N = n + m, where Ei,i+1 is the edge vector connecting the nodes in layer li to li+1, as shown in Fig. 1(b). In post-hoc DNNs, each path communicates information from layer li to layer li+1 in a single direction, starting from the input and passing forward to the output (prediction). It is a Directed Acyclic Graph (DAG). Note that this assumption is not valid for the network during training because of backpropagation. The pre-trained knowledge captured from the training data is stored in the DNN's parameters (i.e., weights). We define w^k_i ∈ R^{vi} as the weight vector connected to the k-th node in layer li+1. We show in the next section the motivation for this notation. 2.2 THE CAUSAL DIAGRAM OF POST-HOC DNNS. A causal diagram is a graphical model that summarizes existing knowledge, where the nodes represent the variables of interest and the edges represent the causal relationships between variables (Greenland & Pearl (2011)). A causal explanation consists of a causal diagram and symbolic queries defined by interventions, or do-calculus, to express cause-effect relations (Pearl & Mackenzie (2018)). For post-hoc DNNs, knowledge is encapsulated in the weights during the training phase. For CNNs, the filters of the convolution layers express the pieces of knowledge that construct the entangled feature space and the concepts of an input.
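The graphical summary GM = ({v0, ..., vN}, {E0,1, ..., EN−1,N}) described above can be sketched in code. This is a hypothetical minimal example; the layer widths below are illustrative and not taken from the paper.

```python
# Build the DAG summary GM of a feed-forward network with n convolution
# layers and m fully connected layers, where v_i is the number of nodes
# (convolution units or neurons) in layer l_i.
def build_gm(conv_widths, fc_widths):
    widths = list(conv_widths) + list(fc_widths)   # v_0, ..., v_N
    nodes = [(i, k) for i, v in enumerate(widths) for k in range(v)]
    # E_{i,i+1}: every node in layer l_i feeds every node in layer l_{i+1}.
    edges = [((i, a), (i + 1, b))
             for i in range(len(widths) - 1)
             for a in range(widths[i])
             for b in range(widths[i + 1])]
    return nodes, edges

# Example: two convolution layers and two fully connected layers.
nodes, edges = build_gm(conv_widths=[3, 4], fc_widths=[5, 2])
```

Edges only connect consecutive layers, so the resulting graph is acyclic, matching the DAG assumption for post-hoc (forward-pass only) networks.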
There are too many parameters to explore, and the goal of the causal graph is to uncover the important ones that explain the behaviour of the model. To explain the effects of these parameters on predictions, we propose a causal diagram such as the example shown in Fig. 1(c). In this graph, we distinguish between two types of variables: parameters and features (network nodes). Our variables of interest are: the parameters W = (W0, ..., WN−1), the test input X and the output (prediction logits) y. In between, there are mediator variables (features) which transmit the effect of interventions on the parameters of intermediate layers to the output. As we can see, there is no direct effect of the parameter variables of intermediate layers; there is only one direct effect on y, which is the effect of WN−1 (N is the logits layer). Also, there is no effect of the input data X on the parameter variables, as they are independent in the case of post-hoc DNNs1. The parameter variables wi and the features ui form a collider at ui+1 in layer li+1. This brings us to the following assumption, which is the basic building block of our proposed method. Assumption 1 A de-confounded (robust) explanation of a black-box DNN can be defined by the measure of changes between the effect distributions P(y|do(w^k_i), X) obtained by interventions on w^k_i and the observed distribution (outcome) P(y|X), for any i ∈ {1, ..., N} and k ∈ {1, ..., vi+1}. It is easy to check the validity of Assumption 1 from the causal graph shown in Fig. 1(c). First, the graph allows us to isolate the parameters of interest and reason about the effect of every parameter on the model output. Second, the model parameters are not confounded by the skip connections existing in complex models, like ResNet. This is not the case when considering the feature space, where some variables may have common effects on multiple variables. We show such a case in Fig.
1(c), where u1 is a confounder (u2 ← u1 → y). Confounders require an adjustment criterion (Pearl (2009)) to block any spurious correlations that give wrong effect estimates. This becomes more difficult in complex DNNs because of the many confounders (Chattopadhyay et al. (2019)). 2.3 FINDING CAUSAL MECHANISMS IN POST-HOC DNNS. We are interested in finding the mechanisms (causal pathways) along which changes flow from the causal variables to the effects. These mechanisms will form explanations of the DNN's behaviour for every data input (local explanations) and across inputs (global explanations). For simplicity, let us first consider the example in Fig. 1(c), where we have a single parameter w in each layer. The causal model shows that the effect of a causal parameter w^k_i, or filter for convolution layers, on the model output (y) is mediated by the effect of changes on variables (called mediators) along the paths from layers li+1, ..., lN−1 after intervening on w^k_i. These changes will also be transmitted to the output variable in lN. Generally, each layer has multiple neurons, and we may apply interventions on a set of selected parameters (k ∈ I) to analyze their combined effect on the model's output. 1The parameters update their values in the training phase using the training data inputs, which creates a causal path between the data inputs and the model parameters. The interventions do(w^k_i) imply changing the values of w^k_i. In convolution layers, each w^k_i is a tiny set of pixels (e.g., 9 in the case of a 3 × 3 filter). Changing one single value would not produce significant changes in the output. Therefore, we consider changing all the values of the filter. In fully connected layers, a parameter is a scalar value. The effect of changing one single parameter in the penultimate layer of the classifier would be highly significant to the output.
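The intervention do(w^k_i) described above amounts to overwriting a weight vector and re-running the forward pass. A minimal numpy sketch on a toy fully connected network (the architecture and random weights are illustrative, not the paper's models):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
W0 = rng.normal(size=(4, 6))       # weights from layer l_0 (4 units) to l_1 (6 units)
W1 = rng.normal(size=(6, 3))       # weights from l_1 to the logits layer (K = 3)

def predict(x, W0, W1):
    u1 = np.maximum(x @ W0, 0.0)   # mediator features u_1 (ReLU nodes)
    return softmax(u1 @ W1)        # prediction distribution over K classes

x = rng.normal(size=4)
p_obs = predict(x, W0, W1)         # observed distribution P(y | X)

# Intervention do(w^k_0 = 0): zero the whole weight vector feeding the
# k-th node of layer l_1, then recompute P(y | do(w^k_0), X).
k = 2
W0_do = W0.copy()
W0_do[:, k] = 0.0
p_do = predict(x, W0_do, W1)

# The measure of change between the two distributions (Assumption 1).
effect = np.abs(p_do - p_obs).sum()
```

The same pattern applies to convolution layers, except that the whole filter (e.g., all 9 values of a 3 × 3 kernel) is zeroed at once, as the text explains.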
To find the causal mechanisms, we start by quantifying the direct effects on the prediction variable yj (for class j), which in this case are the parameters w^j_{N−1}. We intervene on every single parameter and analyze its effect. Finding the direct effects enables identifying the causal paths in the network. However, for intermediate layers we cannot do interventions on every single parameter. This is computationally very expensive and not practical at inference time. Moreover, deep networks are highly parametrized, which would lead to a high variance in the cause-effect relations, thus making it difficult to capture robust explanations. We propose a robust selection criterion to select the variables that we want to intervene on. This leads to the following proposition. Proposition 1 (Intervention variables in intermediate layers) For w^K_i, let K ⊆ {1, ..., vi+1} be the set of indices of the mediators in intermediate layer li+1 and I ⊆ {1, ..., vi} a subset of indices corresponding to the causal parameters w^K_{I,i}. Then there exist causal paths between u_{I,i} and u_{K,i+1}, and I identifies the intervention variables in li−1. We prove Proposition 1 on a simple example in Appendix A. As we noted, we measure changes in a retrospective way, so the selection criterion is defined for layer li−1 after finding the causal parameters in layer li. To find the indices I for layer li−1, we focus on changes which are statistically significant, using a threshold δi defined as δi = (µỹ − µj)/σỹ (1) where µj is the original prediction (reference) for the true class j, µỹ is the average value of the changed predictions and σỹ is their standard deviation. To capture the informativeness of the causal parameters, we introduce in Proposition 2 the formula for the causal effect.
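Eq. (1) above can be computed directly from the predictions obtained after each intervention. A minimal numpy sketch; the prediction values are made up for illustration:

```python
import numpy as np

def selection_threshold(changed_preds, mu_j):
    """delta_i = (mu_ytilde - mu_j) / sigma_ytilde, Eq. (1).
    mu_j is the reference prediction for the true class j;
    changed_preds are the class-j predictions after interventions."""
    mu_y = np.mean(changed_preds)
    sigma_y = np.std(changed_preds)
    return (mu_y - mu_j) / sigma_y

# Hypothetical class-j predictions after intervening on five parameters.
changed = np.array([0.91, 0.40, 0.88, 0.15, 0.90])
delta = selection_threshold(changed, mu_j=0.92)
```

Interventions whose prediction change is statistically significant with respect to this threshold determine the indices I for the preceding layer; the exact significance rule is deferred by the paper to its appendix.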
Proposition 2 (Causal effect) Given an input X, let W^K_i be the set of causal variables (or paths) to the K neurons in li+1. The causal effect (CSi) of do(W^K_i) is a measure of the information flow from W^K_i to y, defined as: CSi(n) = E_y[ log ( p(y | w^K_{n,i} = αn, X) / p(y | X) ) ] (2) where p(y|X) is the softmax of the prediction logits for an input X, n = (0, ..., vi − 1), and αn are the change values for each single parameter. We give the details of Proposition 2 in Appendix A. The interventions that we use in this work are αn = 0. Propositions 1 and 2 uncover the causal mechanism of a hidden layer i in the DNN's architecture. Algorithm 1 describes how we apply them to capture the causal mechanisms in all layers. | The authors aim to give post-hoc explanations of neural network classifier decisions. They do so by finding causal relationships between model parameters and classifier outputs. To this end, model parameters are set to zero and the change in the model's prediction is calculated. If the change in prediction exceeds a threshold, the parameter is deemed relevant and used for the construction of an attribution map. | SP:92c0086763b9510afcb490beb863cbd8a5e550d3
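Reading the causal-effect formula in Proposition 2 above as an expected log-ratio between the intervened distribution p(y | do(w = α), X) and the observed distribution p(y | X), it can be sketched in a few lines. This is a numpy toy example; the choice of taking the expectation under the intervened distribution is an assumption (the paper defers the details to its Appendix A), and the probability vectors are illustrative.

```python
import numpy as np

def causal_effect(p_do, p_obs):
    """CS = E_y[ log( p(y | do(w = alpha), X) / p(y | X) ) ],
    with the expectation taken under the intervened distribution p_do
    (an assumption); this makes CS a KL-divergence-like quantity."""
    return float(np.sum(p_do * (np.log(p_do) - np.log(p_obs))))

# Illustrative softmax outputs before and after setting a filter to zero.
p_obs = np.array([0.70, 0.20, 0.10])   # observed p(y | X)
p_do  = np.array([0.40, 0.35, 0.25])   # intervened p(y | do(w = 0), X)
cs = causal_effect(p_do, p_obs)
```

A larger CS indicates that the intervened parameters carry more information flow toward the prediction, which is how the paper ranks causal parameters.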
Interventional Black-Box Explanations | 1 INTRODUCTION. Deep neural networks (DNNs) are built on complex structures of neurons, layers and operations (e.g., convolutions, non-linearities and back-propagation). These biologically-inspired designs are able to evolve on their own from training data. Their high-dimensional parameter space allows them to learn meaningful semantics from large data distributions and to perform well on many tasks. However, it also makes it difficult to explain their behaviour. This is one of the fundamental obstacles to using these models in critical systems. The most popular explanation methods focus on creating saliency maps from classification models to visualize the important features of a predicted class (Selvaraju et al. (2017b); Binder et al. (2016); Simonyan et al. (2014)). However, saliency maps are not sufficient to reason about model behaviour, and the explanations are not consistent across these methods. It is important to explain the mechanisms inside the hidden layers by which the model makes a prediction from an input. In this paper, we address this problem using causal inference. Understanding cause-effect relations in the DNN architecture is one way to make such black-box models transparent and to explain their behaviour when tested on new data. Structural causal models (SCMs) (Pearl (2009)) and their causal diagrams use interventions, in terms of the do-calculus, to express these relations. We rely on this framework to explain black-box DNNs. We are interested in explaining models post hoc, i.e., after training. Recently, some works have addressed DNN explanations using causality (Chattopadhyay et al. (2019); O'Shaughnessy et al. (2020); Narendra et al. (2018)). These methods concentrate on the data generation process and input-output relations in classification models.
Their goal is to explain the effects of changing aspects of the input data on model predictions. Our work is different because we focus on the model itself, more precisely, on the pre-trained knowledge stored in its parameters. We consider deep convolutional models, which are widely applied to computer vision tasks and have some applications in speech recognition and natural language processing. A neural network architecture is a form of Directed Acyclic Graph (DAG) model, in which neurons (nodes) are connected by directed edges from one layer to the next. A causal diagram or SCM has been used to summarize the complex structure of DNNs (Chattopadhyay et al. (2019); Narendra et al. (2018)) such that interventions can be applied to explain the effect of a variable of interest on the model prediction. The construction of the causal diagram depends on the variables of interest whose effects we would like to understand. Some methods (Chattopadhyay et al. (2019); O'Shaughnessy et al. (2020); Harradon et al. (2018)) consider the latent features of a generative model as the variables of interest. Narendra et al. (2018) focused on explaining the effect of the variance in the convolution filters of a CNN. In this work, we provide a different view of the causal diagram (see Fig. 1) which allows us to perform causal reasoning on the entire structure of the DNN. We summarize our contributions as follows. We build a causal graph of a post-hoc DNN. We develop an algorithm, termed interventional black-box explanations, to find the causal mechanisms that explain the local and global DNN behaviour on individual samples and across samples, respectively. We show that our explanations can be used to correct or improve the probability of a prediction at test time. We consider, in this work, architectures for image classification. We capture explanations for the LeNet and ResNet18 architectures using MNIST and ImageNet data inputs.
We show that the explanations obtained by our method can be effectively used to remove noise in the model and improve its performance. Finally, like attribution-based methods, we provide visual explanations of the classifier's behaviour by computing visual concepts (semantics) from the effect variables (responses of causal filters). We qualitatively demonstrate that our method captures more informative concepts, strongly connected to the model's prediction and useful for human interpretability. 2 INTERVENTIONAL BLACK BOX EXPLANATION. We begin by paving the way to the construction of the proposed causal diagram for a post-hoc DNN (Section 2.1). In Section 2.2, we define the causal diagram, followed by a description of the proposed algorithm (Section 2.3). 2.1 FORMAL SETTING. We will use DNN and CNN interchangeably in this paper as we focus on classification networks comprising hidden convolution layers, but we keep in mind that our method is generic and can be applied to other architectures. In this formal setting, a black-box CNN has two major components: a feature extraction module consisting of n convolution layers, followed by a classifier with m fully connected layers. An input is an image X ∈ R^{d1×d2×c0} (d1, d2 are the spatial dimensions and c0 is the number of channels). The subscript number indicates that this is the input layer l0. An output y ∈ R^K (K is the number of classes) is a real-valued vector of predictions indicating the class logits of X. The feature extraction module can be a simple structure of sequential layers (l1, ..., ln) where each layer li consists of ci convolution units (we call them nodes) and is connected to all ci+1 units in layer li+1. This structure can be more complex in state-of-the-art DNNs, such as ResNets, where skip connections are designed to jump over some layers (see Fig. 1(a)).
For simplicity, we treat the non-linearity, batch normalization and pooling (if they exist) as parts of every node in layer li. A layer li in the classifier module consists of vi neurons, and every neuron is densely connected to all vi+1 neurons in layer li+1. The neurons are naturally the nodes in those densely connected layers. We will substitute vi for ci to unify the notation. Consequently, we summarize the graphical structure of the DNN by GM = ({v0, ..., vn, ..., vN}, {E0,1, ..., EN−1,N}), with N = n + m, where Ei,i+1 is the edge vector connecting the nodes in layer li to li+1, as shown in Fig. 1(b). In post-hoc DNNs, each path communicates information from layer li to layer li+1 in a single direction, starting from the input and passing forward to the output (prediction). It is a Directed Acyclic Graph (DAG). Note that this assumption is not valid for the network during training because of backpropagation. The pre-trained knowledge captured from the training data is stored in the DNN's parameters (i.e., weights). We define w^k_i ∈ R^{vi} as the weight vector connected to the k-th node in layer li+1. We show in the next section the motivation for this notation. 2.2 THE CAUSAL DIAGRAM OF POST-HOC DNNS. A causal diagram is a graphical model that summarizes existing knowledge, where the nodes represent the variables of interest and the edges represent the causal relationships between variables (Greenland & Pearl (2011)). A causal explanation consists of a causal diagram and symbolic queries defined by interventions, or do-calculus, to express cause-effect relations (Pearl & Mackenzie (2018)). For post-hoc DNNs, knowledge is encapsulated in the weights during the training phase. For CNNs, the filters of the convolution layers express the pieces of knowledge that construct the entangled feature space and the concepts of an input.
There are too many parameters to explore, and the goal of the causal graph is to uncover the important ones that explain the behaviour of the model. To explain the effects of these parameters on predictions, we propose a causal diagram such as the example shown in Fig. 1(c). In this graph, we distinguish between two types of variables: parameters and features (network nodes). Our variables of interest are: the parameters W = (W0, ..., WN−1), the test input X and the output (prediction logits) y. In between, there are mediator variables (features) which transmit the effect of interventions on the parameters of intermediate layers to the output. As we can see, there is no direct effect of the parameter variables of intermediate layers; there is only one direct effect on y, which is the effect of WN−1 (N is the logits layer). Also, there is no effect of the input data X on the parameter variables, as they are independent in the case of post-hoc DNNs1. The parameter variables wi and the features ui form a collider at ui+1 in layer li+1. This brings us to the following assumption, which is the basic building block of our proposed method. Assumption 1 A de-confounded (robust) explanation of a black-box DNN can be defined by the measure of changes between the effect distributions P(y|do(w^k_i), X) obtained by interventions on w^k_i and the observed distribution (outcome) P(y|X), for any i ∈ {1, ..., N} and k ∈ {1, ..., vi+1}. It is easy to check the validity of Assumption 1 from the causal graph shown in Fig. 1(c). First, the graph allows us to isolate the parameters of interest and reason about the effect of every parameter on the model output. Second, the model parameters are not confounded by the skip connections existing in complex models, like ResNet. This is not the case when considering the feature space, where some variables may have common effects on multiple variables. We show such a case in Fig.
1(c), where u1 is a confounder (u2 ← u1 → y). Confounders require an adjustment criterion (Pearl (2009)) to block any spurious correlations that give wrong effect estimates. This becomes more difficult in complex DNNs because of the many confounders (Chattopadhyay et al. (2019)). 2.3 FINDING CAUSAL MECHANISMS IN POST-HOC DNNS. We are interested in finding the mechanisms (causal pathways) along which changes flow from the causal variables to the effects. These mechanisms will form explanations of the DNN's behaviour for every data input (local explanations) and across inputs (global explanations). For simplicity, let us first consider the example in Fig. 1(c), where we have a single parameter w in each layer. The causal model shows that the effect of a causal parameter w^k_i, or filter for convolution layers, on the model output (y) is mediated by the effect of changes on variables (called mediators) along the paths from layers li+1, ..., lN−1 after intervening on w^k_i. These changes will also be transmitted to the output variable in lN. Generally, each layer has multiple neurons, and we may apply interventions on a set of selected parameters (k ∈ I) to analyze their combined effect on the model's output. 1The parameters update their values in the training phase using the training data inputs, which creates a causal path between the data inputs and the model parameters. The interventions do(w^k_i) imply changing the values of w^k_i. In convolution layers, each w^k_i is a tiny set of pixels (e.g., 9 in the case of a 3 × 3 filter). Changing one single value would not produce significant changes in the output. Therefore, we consider changing all the values of the filter. In fully connected layers, a parameter is a scalar value. The effect of changing one single parameter in the penultimate layer of the classifier would be highly significant to the output.
To find the causal mechanisms, we start by quantifying the direct effects on the prediction variable yj (for class j), which in this case are the parameters w^j_{N−1}. We intervene on every single parameter and analyze its effect. Finding the direct effects enables identifying the causal paths in the network. However, for intermediate layers we cannot do interventions on every single parameter. This is computationally very expensive and not practical at inference time. Moreover, deep networks are highly parametrized, which would lead to a high variance in the cause-effect relations, thus making it difficult to capture robust explanations. We propose a robust selection criterion to select the variables that we want to intervene on. This leads to the following proposition. Proposition 1 (Intervention variables in intermediate layers) For w^K_i, let K ⊆ {1, ..., vi+1} be the set of indices of the mediators in intermediate layer li+1 and I ⊆ {1, ..., vi} a subset of indices corresponding to the causal parameters w^K_{I,i}. Then there exist causal paths between u_{I,i} and u_{K,i+1}, and I identifies the intervention variables in li−1. We prove Proposition 1 on a simple example in Appendix A. As we noted, we measure changes in a retrospective way, so the selection criterion is defined for layer li−1 after finding the causal parameters in layer li. To find the indices I for layer li−1, we focus on changes which are statistically significant, using a threshold δi defined as δi = (µỹ − µj)/σỹ (1) where µj is the original prediction (reference) for the true class j, µỹ is the average value of the changed predictions and σỹ is their standard deviation. To capture the informativeness of the causal parameters, we introduce in Proposition 2 the formula for the causal effect.
Proposition 2 (Causal effect) Given an input X, let W^K_i be the set of causal variables (or paths) to the K neurons in li+1. The causal effect (CSi) of do(W^K_i) is a measure of the information flow from W^K_i to y, defined as: CSi(n) = E_y[ log ( p(y | w^K_{n,i} = αn, X) / p(y | X) ) ] (2) where p(y|X) is the softmax of the prediction logits for an input X, n = (0, ..., vi − 1), and αn are the change values for each single parameter. We give the details of Proposition 2 in Appendix A. The interventions that we use in this work are αn = 0. Propositions 1 and 2 uncover the causal mechanism of a hidden layer i in the DNN's architecture. Algorithm 1 describes how we apply them to capture the causal mechanisms in all layers. | This paper proposes a causality-driven method aiming to resolve the black-box issue of DNNs. The proposed interventional black-box explanations method is intended to be model-agnostic and can be applied to a variety of DNN models. In the experiments, the authors examine the proposed method on two well-known DNN architectures, LeNet and ResNet18. | SP:92c0086763b9510afcb490beb863cbd8a5e550d3
Efficient Self-supervised Vision Transformers for Representation Learning | This paper investigates two techniques for developing efficient self-supervised vision transformers (EsViT) for visual representation learning. First, we show through a comprehensive empirical study that multi-stage architectures with sparse self-attentions can significantly reduce modeling complexity, but at the cost of losing the ability to capture fine-grained correspondences between image regions. Second, we propose a new pre-training task of region matching which allows the model to capture fine-grained region dependencies and, as a result, significantly improves the quality of the learned vision representations. Our results show that, combining the two techniques, EsViT achieves 81.3% top-1 accuracy on the ImageNet linear probe evaluation, outperforming prior arts with around an order of magnitude higher throughput. When transferring to downstream linear classification tasks, EsViT outperforms its supervised counterpart on 17 out of 18 datasets. The code and models will be publicly available. 1 INTRODUCTION. Self-supervised learning (SSL) with Transformers (Vaswani et al., 2017) has become a de facto standard of model choice in natural language processing (NLP). The dominant approaches such as GPT (Radford et al., 2018) and BERT (Devlin et al., 2019) pre-train on a large text corpus and then fine-tune on various smaller task-specific datasets, showing superior performance. Larger Transformers pre-trained on larger-scale language datasets often lead to a stronger generalization ability, demonstrated by improved performance on downstream tasks (with no sign of performance saturation yet), as exemplified by GPT-3 (Brown et al., 2020). In computer vision (CV), however, self-supervised visual representation learning is still dominated by convolutional neural networks (CNNs).
Sharing a similar goal and spirit with NLP, SSL in CV aims to learn general-purpose image features from raw pixels without relying on manual supervision, and the learned networks are expected to serve as the backbone of various downstream tasks such as classification, detection and segmentation. Recently, impressive performance has been achieved by CNN-based SSL, outperforming state-of-the-art (SoTA) fully-supervised pre-training methods (He et al., 2020; Caron et al., 2020) on tasks with a limited number of labels. The key to this success is view-level learning: maximizing agreement between the learned representations of differently augmented views of the same example. Recent works, including SimCLR-v2 (Chen et al., 2020d), BYOL (Grill et al., 2020) and SwAV (Caron et al., 2020), have scaled up CNN-based models to hundreds of millions of parameters. However, SSL has not enjoyed the same scaling success in CV as in NLP. Several attempts have been made to close the gap by combining SSL with Transformer and self-attention architectures. Early works include Selfie (Trinh et al., 2019), which generalizes the concept of masked language modeling of BERT to images. The idea has been recently revisited in the Vision Transformer (ViT) (Dosovitskiy et al., 2021) via pre-training on a much larger scale dataset, e.g., JFT300M. ImageGPT (iGPT) (Chen et al., 2020b) generalizes the concept of auto-regressive language modeling of GPT to images, showing encouraging ImageNet recognition accuracy with a large model size. Contrastive learning with ViT has also been studied very recently in DINO (Caron et al., 2021) and MoCo-v3 (Chen et al., 2021), where a new SoTA result by linear probe evaluation on ImageNet-1K is achieved by exhaustively consuming compute resources on full self-attention operators over long sequences of split image patches.
Aiming to improve the efficiency of Transformer-based SSL, this paper presents Efficient self-supervised Vision Transformers (EsViT), using a multi-stage architecture and a region-based pre-training task for self-supervised representation learning. Our main findings and contributions can be summarized as follows: (1) An intriguing property of self-supervised monolithic Transformers is first reported in our paper: automatic discovery of semantic correspondence between local regions. (2) We present the first comprehensive empirical study to show the pros and cons of multi-stage vision Transformer architectures for SSL. Though greatly reducing compute complexity, we find that the multi-stage architecture causes the loss of the property in (1). (3) A region-matching pre-training task is proposed to alleviate the issue in (2), and to further improve the learned representations and attentions. (4) We validate the new EsViT, which combines the two techniques, on a range of tasks. It significantly reduces the cost of building SoTA SSL vision systems, as summarized in Figure 1, and shows better scaling performance on accuracy vs. throughput and model size. Under the linear evaluation protocol, EsViT achieves 81.3% top-1 accuracy, showing the best performance compared with all systems, and is 3.5× more parameter-efficient with at least 10× higher throughput than the previous SoTA (81.0%, MoCo-v3 with ViT-BN-L/7 (Chen et al., 2021)). Compared with its supervised counterpart, Swin Transformers (Liu et al., 2021), EsViT shows superior performance on 17 out of 18 datasets when transferring the learned representations to downstream linear classification tasks. 2 METHODS. Transformer-based SSL methods have emerged very recently to lead the state-of-the-art performance on the ImageNet linear probe task (Chen et al., 2021; Caron et al., 2021).
They inherit their success from (1) monolithic Transformer architectures that dominate in NLP (Devlin et al., 2019; Radford et al., 2018), and (2) instance-level contrastive learning objectives that demonstrate arguably the best SSL performance in computer vision (Chen et al., 2020c). Though simple and effective, the existing Transformer-based SSL methods require a large amount of compute resources (e.g., > 1.7 TPU years of training) to reach SoTA performance. We believe that the SSL system efficiency is highly related to two ingredients: the network architecture and the pre-training task. To strike a better trade-off between accuracy and efficiency, we present EsViT, showing a better synergy of networks (a multi-stage Transformer architecture) and pre-training tasks (a non-contrastive region-matching task). 2.1 NETWORK ARCHITECTURES: FROM MONOLITHIC TO MULTI-STAGE VIT BACKBONE. Multi-stage ViT. This paper presents the first empirical study of multi-stage Transformer architectures (Vaswani et al., 2021; Wang et al., 2021; Liu et al., 2021; Zhang et al., 2021; Wu et al., 2021) for SSL. Each stage consists of a patch merging/embedding module and a Transformer with a sparse self-attention module. (i) The patch merging module plays slightly different roles in different stages. In the first stage, it splits an input RGB image into non-overlapping patches. Each patch is treated as a "token", constructed as a concatenation of the raw pixel RGB values, which is further projected into a C-dimensional feature. In later stages, the patch merging module concatenates the features of each group of 2× 2 neighboring patches, and applies a linear layer to the 4C-dimensional concatenated features. This reduces the number of tokens by a multiple of 2× 2 = 4, and the output dimension is set to 2C. (ii) A Transformer with a sparse self-attention module is then employed to enable interactions among the merged features.
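The patch merging step in (i) can be sketched in a few lines. This is a hypothetical numpy version for illustration; in real implementations such as Swin Transformers the 4C → 2C projection is a learned linear layer.

```python
import numpy as np

def patch_merging(z, W):
    """Merge each 2x2 group of neighboring patch features.
    z: (H, W_sp, C) token feature map; W: (4C, 2C) linear projection.
    Returns an (H/2, W_sp/2, 2C) map: 4x fewer tokens, doubled width."""
    H, Wsp, C = z.shape
    z = z.reshape(H // 2, 2, Wsp // 2, 2, C)           # group 2x2 neighbors
    z = z.transpose(0, 2, 1, 3, 4).reshape(H // 2, Wsp // 2, 4 * C)
    return z @ W                                        # project 4C -> 2C

rng = np.random.default_rng(0)
C = 8
z = rng.normal(size=(16, 16, C))                        # stage-1 tokens
W = rng.normal(size=(4 * C, 2 * C))                     # learned in practice
z2 = patch_merging(z, W)                                # (8, 8, 2C)
```

Repeating this module (typically over 4 stages) yields the hierarchical representation described in the text: each stage quarters the token count and doubles the feature dimension.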
The two modules above are repeated multiple times, typically 4 times, resulting in a multi-stage ViT. As a result, a hierarchical representation is generated: as the network gets deeper, the number of tokens is reduced while the feature dimension (and the number of heads in self-attention) of each token is increased. An overview comparison of the monolithic and multi-stage Transformer architectures for SSL is illustrated in Figure 7 in the Appendix. An intriguing property of self-supervised monolithic ViT. Though straightforward in implementation, changing from a monolithic to a multi-stage architecture without careful treatment may lose some desirable properties of self-supervised Transformers. In our study, we first empirically note an intriguing property of self-supervised monolithic ViT (Caron et al., 2021): the pre-trained model exhibits a very strong ability to automatically discover correspondences, even without a region-level matching objective specified in training. We quantitatively evaluate correspondence learning to illustrate this property, via the following process. (i) Simulated benchmark. Based on 50K images in the ImageNet validation dataset, we create a simple evaluation benchmark with mild augmentations: for a center-cropped image, we apply HorizontalFlip, then ColorJitter and RandomGrayscale, to create a new augmented view. In this way, ground-truth correspondences are created. (ii) Evaluation process. Given two views of the same image, we use the pre-trained backbone to extract the top-layer features, and for each feature vector in one view we find its best match in the other view in terms of highest cosine similarity. The accuracy is measured as the average percentage of correctly identified region-to-region correspondences. Please see details in Section C.7 in the Appendix. (iii) Results. We quantitatively show that a self-supervised monolithic ViT yields 95% accuracy.
However, simply replacing the network with a multi-stage Transformer yields only 66% accuracy. This significant degradation (an absolute 29% accuracy drop) reveals the loss of the correspondence learning property. We are the first to raise this critical problem, and believe it has a large impact on the pre-trained model's performance in various downstream tasks. 2.2 PRE-TRAINING TASKS: DELVING INTO VIEWS WITH REGIONS. We employ a non-contrastive learning framework to build our SSL method. Specifically, self-distillation with no labels (DINO) (Caron et al., 2021) is considered. It leverages the knowledge distillation paradigm, where a student network gθs is trained to match the output of a given teacher network gθt, parameterized by θs and θt respectively. The neural network g is composed of a backbone f (e.g., a Transformer or ConvNet) and a projection head h: g = h ◦ f. The features used in downstream tasks are the output of the backbone f. In SSL, different augmented views x̃ of an image x are fed into the backbone network to obtain feature maps z = f(x̃). Per network, two MLP heads, each followed by a softmax, further convert the feature vectors z ∈ z into probability vectors p = h(z); one head for the view level and the other for the region level. More precisely, from a given image, we generate a set V of different views¹ following (Caron et al., 2021). The resulting feature map at the top layer for each view is z = [z1, . . . , zT], where T is the sequence length and zi is a region-level representation of the local patch at position i. Average pooling is applied to obtain the view-level representation z̄ = avg-pool(z). View-level task. Given the augmented view sets for the student V and teacher V∗, a set of pairs P = {(s, t) | x̃s ∈ V, x̃t ∈ V∗ and s ≠ t} is constructed to perform cross-view prediction tasks. We consider the pre-training task at the view level proposed by (Caron et al.
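The correspondence evaluation above can be sketched as follows on toy features. This is a minimal NumPy sketch, assuming (as in the simulated benchmark) that the mild augmentations preserve region identity, so the ground-truth match of region i in one view is region i in the other; the function name and feature shapes are illustrative.

```python
import numpy as np

def correspondence_accuracy(z_a, z_b):
    """Fraction of regions in view A whose best cosine match in view B
    is the ground-truth corresponding region (the same index here).

    z_a, z_b: (T, D) top-layer region features of the two views.
    """
    za = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    zb = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    sim = za @ zb.T                          # (T, T) cosine similarities
    pred = sim.argmax(axis=1)                # best match in view B per region
    return (pred == np.arange(len(z_a))).mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(49, 128))               # T = 49 regions, D = 128
noisy = z + 0.1 * rng.normal(size=z.shape)   # mildly perturbed second view
print(correspondence_accuracy(z, noisy))     # near 1.0 for small perturbations
```

The reported 95% vs. 66% numbers come from running this kind of matching with the actual pre-trained backbones on the 50K-image benchmark.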
, 2021):

$$\mathcal{L}_V = \frac{1}{|\mathcal{P}|}\sum_{(s,t)\in\mathcal{P}} \mathcal{M}_V(s,t), \quad \text{with } \mathcal{M}_V(s,t) = -p_s \log p_t, \qquad (1)$$

where ps = h(z̄s) and pt = h(z̄t) are the probability outputs of an MLP head h over the view-level representations z̄s and z̄t, learned by the student and teacher, respectively. In DINO, ViT/DeiT are considered, hence the view-level representation is the feature of the [CLS] token. ¹This set often contains views of two different resolutions, V = [Vg, Vl], where Vg = {x̃gi | i = 1, 2} is a global-view set of higher resolution and Vl = {x̃li | i = 1, . . . , 8} is a local-view set of lower resolution. All views in V are passed through the student, while only the global views Vg are passed through the teacher. Region-level task. In (Caron et al., 2021), LV encourages “local-to-global” correspondences only at a coarse level: the large crop and the small crop are matched at the view level, leaving region-to-region correspondences unspecified. In monolithic Transformers, the drop paths and skip connections from low-level to high-level features help the latter remain discriminative, thus maintaining good region-matching performance. However, such a property gets diluted by the merging operators in multi-stage Transformers. As shown in our experiments later, training a multi-stage network with LV only indeed results in sub-optimal representations. Further, it would be a waste of computation not to leverage the region-level features z that are computed in the process of extracting the view-level feature. Inspired by the success of the masked language modeling task in BERT, we argue that it is important to have a region-level pre-training task for computer vision, so that the model can (1) amortize the computation and fully leverage the extracted region-level features, and (2) take into account the co-occurrences/structures between local features.
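The view-level objective of Eq. (1) can be sketched numerically as follows. This is a toy NumPy sketch that follows the expression as written (cross-entropy between the student and teacher softmax outputs, averaged over view pairs with s ≠ t); the logit shapes and the plain softmax head are illustrative assumptions, and the centering/sharpening tricks of DINO are omitted.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def view_level_loss(student_logits, teacher_logits):
    """L_V: mean over pairs (s, t), s != t, of -p_s . log p_t, computed
    on pooled view-level representations (cf. Eq. 1).

    student_logits: (S, K) head outputs for all S student views.
    teacher_logits: (G, K) head outputs for the G global teacher views.
    """
    ps, pt = softmax(student_logits), softmax(teacher_logits)
    total, pairs = 0.0, 0
    for s in range(len(ps)):
        for t in range(len(pt)):
            if s == t:                      # skip matching a view with itself
                continue
            total += -(ps[s] * np.log(pt[t])).sum()
            pairs += 1
    return total / pairs

rng = np.random.default_rng(0)
# 10 student views (2 global + 8 local), 2 global teacher views, K = 32 bins.
loss = view_level_loss(rng.normal(size=(10, 32)), rng.normal(size=(2, 32)))
print(loss)                                 # a positive scalar cross-entropy
```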
Unfortunately, directly performing masked patch prediction (MPP) with the multi-stage Transformer architecture is infeasible, as the one-to-one correspondences between the input visual tokens and output features get diluted by the merging operation. Even for monolithic architectures, MPP has not been proven effective in computer vision, as empirically shown in (Dosovitskiy et al., 2021). To address this problem, we propose a non-contrastive, region-matching method that directly works at the level of local features by taking into account their correspondences:

$$\mathcal{L}_R = \frac{1}{|\mathcal{P}|}\sum_{(s,t)\in\mathcal{P}} \mathcal{M}_R(s,t), \quad \text{with } \mathcal{M}_R(s,t) = -\frac{1}{T}\sum_{i=1}^{T} p_{j^*} \log p_i, \quad j^* = \arg\max_j \frac{z_i^{\top} z_j}{\|z_i\|\,\|z_j\|}, \qquad (2)$$

where pi = h′(zi) and pj = h′(zj) are the probability outputs of a new MLP head h′ over the local features of the student, zi ∈ zs, and of the teacher, zj ∈ zt, respectively. j∗ is the index of the feature in zt that best matches the i-th feature in zs, in the sense of highest cosine similarity. Note that zi and zj∗ are contextualized features of the two best-matched regions from different augmented views; minimizing LR thus encourages different contexts (i.e., surrounding regions) to learn invariant features, and so captures region dependency. The overall pre-training objective of EsViT is L = LR + LV: we learn to match the feature distributions at both the view and region levels by minimizing the cross-entropy loss w.r.t. the parameters of the student network gθs. A visual illustration is in Figure 2, and the full algorithm is in the Appendix. We update the teacher and student networks alternately: (i) Given a fixed teacher network, the student network is updated by minimizing the full cross-entropy loss: θs ← arg minθs L(s, t; θs). (ii) The teacher model is updated as an exponential moving average (EMA) of the student weights, θt ← λθt + (1−λ)θs, with λ following a cosine schedule from 0.996 to 1 during training.
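The region-matching term of Eq. (2) and the EMA teacher update can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: the toy linear head stands in for the MLP head h′, shapes are illustrative, and DINO-style temperature/centering details are omitted.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def region_level_loss(zs, zt, head):
    """M_R for one (student, teacher) view pair, cf. Eq. (2): each student
    region i is matched to the teacher region j* of highest cosine
    similarity, then a cross-entropy is taken between the head outputs.

    zs, zt: (T, D) region features; head: callable (T, D) -> (T, K) logits.
    """
    a = zs / np.linalg.norm(zs, axis=1, keepdims=True)
    b = zt / np.linalg.norm(zt, axis=1, keepdims=True)
    j_star = (a @ b.T).argmax(axis=1)        # best teacher match per region
    pi = softmax(head(zs))                   # student probabilities p_i
    pj = softmax(head(zt))[j_star]           # matched teacher probs p_{j*}
    return -(pj * np.log(pi)).sum(axis=1).mean()

def ema_update(theta_t, theta_s, lam=0.996):
    """Teacher weights as an exponential moving average of the student's."""
    return lam * theta_t + (1.0 - lam) * theta_s

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))                 # toy linear head: D=16 -> K=8
loss = region_level_loss(rng.normal(size=(49, 16)),
                         rng.normal(size=(49, 16)),
                         head=lambda z: z @ W)
print(loss)                                  # a positive scalar
```

Summing this with the view-level term gives the full objective L = LR + LV minimized w.r.t. the student parameters only, while the teacher follows the EMA update.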
By default, the full objective L is used from the beginning. One can also load a checkpoint trained with LV only and add LR for continual pre-training, which we show to be effective in boosting performance in our experiments. Computational overhead. Note that applying LR on the traditional monolithic Transformer architecture can be prohibitively expensive, as computing LR requires O(T²) operations. For a typical image of resolution 224×224, the feature map length of ViT/DeiT (with patch size 16) at the top layer is T = 196, while the multi-stage architecture yields T = 49, requiring 16 times less compute for LR (since 196²/49² = 16). To empirically illustrate this, we show in Appendix Section C.2 that LR adds acceptable extra memory and computational cost (around 1.2× and 1.05×, respectively) for multi-stage Transformers, while monolithic Transformers quickly go out of memory when the batch size is increased. | The paper investigates how to use self-supervised learning for multi-stage visual transformer models. Previous works have shown that SSL can learn image correspondences and lead to performant pre-trained models, while the multi-stage models can reduce the computation cost dramatically. This work tries to merge these two trends together. The solution is a new region-based loss that can be applied to the local features. The comprehensive experiments show the advantages of the resulting models on multiple tasks. | SP:13071dbb937ba7f7c7cbeade305f6d59635dabdf |
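The token counts stated in the passage above determine the relative cost of the O(T²) region-matching loss; the arithmetic can be checked directly (the variable names here are just for illustration):

```python
# Relative cost of the O(T^2) region-matching loss at 224x224 resolution.
T_monolithic = (224 // 16) ** 2       # ViT/DeiT, patch size 16 -> 196 tokens
T_multistage = 49                     # top-layer map of the multi-stage ViT
ratio = T_monolithic ** 2 / T_multistage ** 2
print(T_monolithic, ratio)            # 196 tokens; quadratic cost ratio of 16x
```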
Efficient Self-supervised Vision Transformers for Representation Learning | This paper investigates two techniques for developing efficient self-supervised vision transformers (EsViT) for visual representation learning. First, we show through a comprehensive empirical study that multi-stage architectures with sparse self-attention can significantly reduce modeling complexity, but at the cost of losing the ability to capture fine-grained correspondences between image regions. Second, we propose a new pre-training task of region matching which allows the model to capture fine-grained region dependencies and, as a result, significantly improves the quality of the learned vision representations. Our results show that, combining the two techniques, EsViT achieves 81.3% top-1 accuracy on the ImageNet linear probe evaluation, outperforming prior arts with around an order of magnitude higher throughput. When transferring to downstream linear classification tasks, EsViT outperforms its supervised counterpart on 17 out of 18 datasets. The code and models will be publicly available. 1 INTRODUCTION. Self-supervised learning (SSL) with Transformers (Vaswani et al., 2017) has become a de facto standard of model choice in natural language processing (NLP). The dominant approaches such as GPT (Radford et al., 2018) and BERT (Devlin et al., 2019) pre-train on a large text corpus and then fine-tune on various smaller task-specific datasets, showing superior performance. Larger Transformers pre-trained on larger-scale language datasets often lead to stronger generalization ability, demonstrated by improved performance on downstream tasks (with no sign of performance saturation yet), as exemplified by GPT-3 (Brown et al., 2020). In computer vision (CV), however, self-supervised visual representation learning is still dominated by convolutional neural networks (CNNs).
Sharing a similar goal/spirit with NLP, SSL in CV aims to learn general-purpose image features from raw pixels without relying on manual supervision, and the learned networks are expected to serve as the backbone of various downstream tasks such as classification, detection and segmentation. Recently, impressive performance has been achieved by CNN-based SSL, outperforming state-of-the-art (SoTA) fully-supervised pre-training methods (He et al., 2020; Caron et al., 2020) on tasks with a limited number of labels. The key to success is view-level learning: maximizing agreement of learned representations between differently augmented views of the same example. Recent works, including SimCLR-v2 (Chen et al., 2020d), BYOL (Grill et al., 2020) and SwAV (Caron et al., 2020), have scaled up CNN-based models to hundreds of millions of parameters. However, SSL has not enjoyed the same scaling success in CV as in NLP. Several attempts have been made to close the gap by combining SSL with Transformer and self-attention architectures. Early works include Selfie (Trinh et al., 2019), which generalizes the concept of masked language modeling of BERT to images. The idea has recently been revisited in the Vision Transformer (ViT) (Dosovitskiy et al., 2021) via pre-training on a much larger-scale dataset, e.g., JFT300M. ImageGPT (iGPT) (Chen et al., 2020b) generalizes the concept of auto-regressive language modeling of GPT to images, showing encouraging ImageNet recognition accuracy with a large model size. Contrastive learning with ViT has also been studied very recently in DINO (Caron et al., 2021) and MoCo-v3 (Chen et al., 2021), where a new SoTA result by linear probe evaluation on ImageNet-1K is achieved by exhaustively consuming computational resources on full self-attention operators with long sequences of split image patches.
Aiming to improve the efficiency of Transformer-based SSL, this paper presents Efficient self-supervised Vision Transformers (EsViT), using a multi-stage architecture and a region-based pre-training task for self-supervised representation learning. Our main findings and contributions can be summarized as follows: (1) An intriguing property of self-supervised monolithic Transformers is first reported in our paper: automatic discovery of semantic correspondence between local regions. (2) We present the first comprehensive empirical study to show the pros and cons of multi-stage vision Transformer architectures for SSL. Though greatly reducing compute complexity, we find that the multi-stage architecture causes the loss of the property in (1). (3) A region-matching pre-training task is proposed to alleviate the issue in (2), and to further improve the learned representations and attentions. (4) We validate the new EsViT, which combines the two techniques, on a range of tasks. It significantly reduces the cost of building SoTA SSL vision systems, as summarized in Figure 1, and shows better scaling performance on accuracy vs. throughput and model size. Under the linear evaluation protocol, EsViT achieves 81.3% top-1 accuracy, the best performance among all systems, and is 3.5× more parameter-efficient with at least 10× higher throughput than the previous SoTA (81.0%, MoCo-v3 with ViT-BN-L/7 (Chen et al., 2021)). Compared with its supervised counterpart Swin Transformers (Liu et al., 2021), EsViT shows superior performance on 17 out of 18 datasets when transferring the learned representations to downstream linear classification tasks. 2 METHODS. Transformer-based SSL methods have emerged very recently to lead the state-of-the-art performance on the ImageNet linear probe task (Chen et al., 2021; Caron et al., 2021).
They inherit the successes of (1) monolithic Transformer architectures that dominate in NLP (Devlin et al., 2019; Radford et al., 2018), and (2) instance-level contrastive learning objectives that demonstrate arguably the best SSL performance in computer vision (Chen et al., 2020c). Though simple and effective, existing Transformer-based SSL methods require a large amount of compute resources (e.g., > 1.7 TPU-years of training) to reach SoTA performance. We believe that SSL system efficiency is highly related to two ingredients: the network architecture and the pre-training task. To strike a better tradeoff between accuracy and efficiency, we present EsViT, showing better synergy between networks (a multi-stage Transformer architecture) and pre-training tasks (a non-contrastive region-matching task). 2.1 NETWORK ARCHITECTURES: FROM MONOLITHIC TO MULTI-STAGE VIT BACKBONE. Multi-stage ViT. This paper presents the first empirical study of multi-stage Transformer architectures (Vaswani et al., 2021; Wang et al., 2021; Liu et al., 2021; Zhang et al., 2021; Wu et al., 2021) for SSL. Each stage consists of a patch merging/embedding module and a Transformer with a sparse self-attention module. (i) The patch merging module plays a slightly different role in different stages. In the first stage, it splits an input RGB image into non-overlapping patches. Each patch is treated as a “token”, constructed as a concatenation of the raw pixel RGB values, which is further projected into a C-dimensional feature. In the later stages, the patch merging module concatenates the features of each group of 2×2 neighboring patches and applies a linear layer on the 4C-dimensional concatenated features. This reduces the number of tokens by a factor of 2×2 = 4, and the output dimension is set to 2C. (ii) A Transformer with a sparse self-attention module is then employed to enable interactions among the merged features.
The two modules above are repeated multiple times, typically 4 times, resulting in a multi-stage ViT. As a result, a hierarchical representation is generated: as the network gets deeper, the number of tokens is reduced while the feature dimension (and the number of heads in self-attention) of each token is increased. An overview comparison of the monolithic and multi-stage Transformer architectures for SSL is illustrated in Figure 7 in the Appendix. An intriguing property of self-supervised monolithic ViT. Though straightforward in implementation, changing from a monolithic to a multi-stage architecture without careful treatment may lose some desirable properties of self-supervised Transformers. In our study, we first empirically note an intriguing property of self-supervised monolithic ViT (Caron et al., 2021): the pre-trained model exhibits a very strong ability to automatically discover correspondences, even without a region-level matching objective specified in training. We quantitatively evaluate correspondence learning to illustrate this property, via the following process. (i) Simulated benchmark. Based on 50K images in the ImageNet validation dataset, we create a simple evaluation benchmark with mild augmentations: for a center-cropped image, we apply HorizontalFlip, then ColorJitter and RandomGrayscale, to create a new augmented view. In this way, ground-truth correspondences are created. (ii) Evaluation process. Given two views of the same image, we use the pre-trained backbone to extract the top-layer features, and for each feature vector in one view we find its best match in the other view in terms of highest cosine similarity. The accuracy is measured as the average percentage of correctly identified region-to-region correspondences. Please see details in Section C.7 in the Appendix. (iii) Results. We quantitatively show that a self-supervised monolithic ViT yields 95% accuracy.
However, simply replacing the network with a multi-stage Transformer yields only 66% accuracy. This significant degradation (an absolute 29% accuracy drop) reveals the loss of the correspondence learning property. We are the first to raise this critical problem, and believe it has a large impact on the pre-trained model's performance in various downstream tasks. 2.2 PRE-TRAINING TASKS: DELVING INTO VIEWS WITH REGIONS. We employ a non-contrastive learning framework to build our SSL method. Specifically, self-distillation with no labels (DINO) (Caron et al., 2021) is considered. It leverages the knowledge distillation paradigm, where a student network gθs is trained to match the output of a given teacher network gθt, parameterized by θs and θt respectively. The neural network g is composed of a backbone f (e.g., a Transformer or ConvNet) and a projection head h: g = h ◦ f. The features used in downstream tasks are the output of the backbone f. In SSL, different augmented views x̃ of an image x are fed into the backbone network to obtain feature maps z = f(x̃). Per network, two MLP heads, each followed by a softmax, further convert the feature vectors z ∈ z into probability vectors p = h(z); one head for the view level and the other for the region level. More precisely, from a given image, we generate a set V of different views¹ following (Caron et al., 2021). The resulting feature map at the top layer for each view is z = [z1, . . . , zT], where T is the sequence length and zi is a region-level representation of the local patch at position i. Average pooling is applied to obtain the view-level representation z̄ = avg-pool(z). View-level task. Given the augmented view sets for the student V and teacher V∗, a set of pairs P = {(s, t) | x̃s ∈ V, x̃t ∈ V∗ and s ≠ t} is constructed to perform cross-view prediction tasks. We consider the pre-training task at the view level proposed by (Caron et al.
, 2021):

$$\mathcal{L}_V = \frac{1}{|\mathcal{P}|}\sum_{(s,t)\in\mathcal{P}} \mathcal{M}_V(s,t), \quad \text{with } \mathcal{M}_V(s,t) = -p_s \log p_t, \qquad (1)$$

where ps = h(z̄s) and pt = h(z̄t) are the probability outputs of an MLP head h over the view-level representations z̄s and z̄t, learned by the student and teacher, respectively. In DINO, ViT/DeiT are considered, hence the view-level representation is the feature of the [CLS] token. ¹This set often contains views of two different resolutions, V = [Vg, Vl], where Vg = {x̃gi | i = 1, 2} is a global-view set of higher resolution and Vl = {x̃li | i = 1, . . . , 8} is a local-view set of lower resolution. All views in V are passed through the student, while only the global views Vg are passed through the teacher. Region-level task. In (Caron et al., 2021), LV encourages “local-to-global” correspondences only at a coarse level: the large crop and the small crop are matched at the view level, leaving region-to-region correspondences unspecified. In monolithic Transformers, the drop paths and skip connections from low-level to high-level features help the latter remain discriminative, thus maintaining good region-matching performance. However, such a property gets diluted by the merging operators in multi-stage Transformers. As shown in our experiments later, training a multi-stage network with LV only indeed results in sub-optimal representations. Further, it would be a waste of computation not to leverage the region-level features z that are computed in the process of extracting the view-level feature. Inspired by the success of the masked language modeling task in BERT, we argue that it is important to have a region-level pre-training task for computer vision, so that the model can (1) amortize the computation and fully leverage the extracted region-level features, and (2) take into account the co-occurrences/structures between local features.
Unfortunately, directly performing masked patch prediction (MPP) with the multi-stage Transformer architecture is infeasible, as the one-to-one correspondences between the input visual tokens and output features get diluted by the merging operation. Even for monolithic architectures, MPP has not been proven effective in computer vision, as empirically shown in (Dosovitskiy et al., 2021). To address this problem, we propose a non-contrastive, region-matching method that directly works at the level of local features by taking into account their correspondences:

$$\mathcal{L}_R = \frac{1}{|\mathcal{P}|}\sum_{(s,t)\in\mathcal{P}} \mathcal{M}_R(s,t), \quad \text{with } \mathcal{M}_R(s,t) = -\frac{1}{T}\sum_{i=1}^{T} p_{j^*} \log p_i, \quad j^* = \arg\max_j \frac{z_i^{\top} z_j}{\|z_i\|\,\|z_j\|}, \qquad (2)$$

where pi = h′(zi) and pj = h′(zj) are the probability outputs of a new MLP head h′ over the local features of the student, zi ∈ zs, and of the teacher, zj ∈ zt, respectively. j∗ is the index of the feature in zt that best matches the i-th feature in zs, in the sense of highest cosine similarity. Note that zi and zj∗ are contextualized features of the two best-matched regions from different augmented views; minimizing LR thus encourages different contexts (i.e., surrounding regions) to learn invariant features, and so captures region dependency. The overall pre-training objective of EsViT is L = LR + LV: we learn to match the feature distributions at both the view and region levels by minimizing the cross-entropy loss w.r.t. the parameters of the student network gθs. A visual illustration is in Figure 2, and the full algorithm is in the Appendix. We update the teacher and student networks alternately: (i) Given a fixed teacher network, the student network is updated by minimizing the full cross-entropy loss: θs ← arg minθs L(s, t; θs). (ii) The teacher model is updated as an exponential moving average (EMA) of the student weights, θt ← λθt + (1−λ)θs, with λ following a cosine schedule from 0.996 to 1 during training.
By default, the full objective L is used from the beginning. One can also load a checkpoint trained with LV only and add LR for continual pre-training, which we show to be effective in boosting performance in our experiments. Computational overhead. Note that applying LR on the traditional monolithic Transformer architecture can be prohibitively expensive, as computing LR requires O(T²) operations. For a typical image of resolution 224×224, the feature map length of ViT/DeiT (with patch size 16) at the top layer is T = 196, while the multi-stage architecture yields T = 49, requiring 16 times less compute for LR (since 196²/49² = 16). To empirically illustrate this, we show in Appendix Section C.2 that LR adds acceptable extra memory and computational cost (around 1.2× and 1.05×, respectively) for multi-stage Transformers, while monolithic Transformers quickly go out of memory when the batch size is increased. | This paper develops an efficient self-supervised vision transformer for learning visual representations. It introduces a multi-stage architecture with sparse attentions to reduce computation complexity and proposes a new pretraining task of region matching to capture fine-grained region dependencies. The results on ImageNet and 18 smaller downstream datasets are strong compared with other state-of-the-art approaches. | SP:13071dbb937ba7f7c7cbeade305f6d59635dabdf |
DAIR: Disentangled Attention Intrinsic Regularization for Safe and Efficient Bimanual Manipulation | 1 INTRODUCTION. Consider bimanual robot manipulation tasks such as rearranging multiple objects to their target locations in Figure 1(a). This complex and compositional task is very challenging, as the agents first need to reduce it to several sub-tasks (pushing or grasping each object), and the two agents then need to figure out how to allocate each sub-task between them (which object each robot should operate on) for better collaboration. Importantly, the two robots should avoid collisions in a narrow space for safety reasons. While training a single RL agent that can solve such compositional tasks has caught research attention recently (Chang et al., 2019; Peng et al., 2019; Devin et al., 2019; Jiang et al., 2019; Li et al., 2021; 2020), there are still two main challenges that are barely touched when it comes to bimanual manipulation: (i) domination, i.e., one robot may tend to solve all the sub-tasks while the other robot remains idle, which hurts task-solving efficiency; (ii) conflict, i.e., the two robots may try to solve the same sub-task simultaneously, which results in unsafe conflicts and interruptions in the shared workspace. One possible solution is to design a task-allocation reward function to encourage better coordination. However, it is particularly non-trivial and often sub-optimal to manually design such a reward function for complex problems that contain a large continuous sub-task space, such as the rearrangement task in Figure 1(a). Moreover, even with the reward function described above in hand, it remains unclear how to reduce collisions, particularly for tasks that require two robots to act simultaneously and safely.
For example, in the task shown in Figure 1(d), one robot needs to push the green door to make space for the other robot to move the blue box to the goal position. However, these two robots can easily interrupt and collide with each other when they perform these coordinated actions. We consider an alternative setting using sparse rewards, without explicitly assigning sub-tasks to the robots. However, this leads to another challenge: how do we encourage the robots to explore collaborative and safe behaviors with limited positive feedback? For bimanual manipulation, an intrinsic motivation was introduced by Chitnis et al. (2020b), leveraging the difference between the actual effect of an action (taken by the two robots) and the composition of the individually predicted effects from each agent using a forward model. While this intrinsic reward encourages the two robots to collaborate on tasks that are hard to achieve by a single robot, it does not address the domination and conflict problems for efficient and safe manipulation. In this paper, we present DAIR: Disentangled Attention Intrinsic Regularization, which encourages the two robots to safely and efficiently collaborate on different sub-tasks during bimanual manipulation. Instead of designing a new intrinsic reward function, we introduce a simple regularization term for representation learning, which encourages the robots to attend to different interaction regions. Specifically, we adopt the attention mechanism (Vaswani et al., 2017) in both our policy and value networks, where we compute the dot product between each robot representation and the object interaction region representations to obtain a probability distribution. Each robot has its own probability distribution representing which interaction region it is focusing on. We define our intrinsic regularization as minimizing the dot product between the probability distributions of the two robots (i.e.
, pushing them to be orthogonal) at each time step. By adding this loss function, the robots are regularized to attend to different interaction points within their policy representations. This forces the policies to tackle sub-tasks over disjoint working spaces without interfering with each other. We remark that disentangled attention can be generalized to environments with more agents. In our experiments, we focus on five diverse manipulation tasks in simulation environments with two Fetch robots, as shown in Figure 1. These tasks not only require the robots to manipulate multiple objects (more than two, up to eight), each offering one interaction region, but also a single heavy object with multiple interaction regions (Figure 1(e)). In our experiments, we show that our approach not only improves performance and sample efficiency in learning, but also helps avoid the domination problem and largely reduces conflicts for safe coordination between the two robots. Moreover, the learned policies can solve the tasks in fewer steps, which highlights the advantage of bimanual cooperation over single-arm manipulation. We highlight our main contributions as: • Observation of two important problems (domination and conflict) in training RL agents for safe bimanual manipulation, and a new robotics task set with one to eight objects. • We propose DAIR, a novel and general intrinsic regularization. It not only improves the success rate in bimanual manipulation and solves the tasks more efficiently, but also reduces the conflicts between robots. This allows the robots to collaborate and coordinate more safely. 2 RELATED WORK. Intrinsic motivation in reinforcement learning. To train RL agents with sparse rewards, Schmidhuber (1991) first proposed to motivate the agent to reach states that give a large model prediction error, which indicates that the state is currently unexplored and unseen by the model.
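The disentangled attention regularization described in the DAIR introduction above can be sketched as follows. This is a minimal NumPy sketch under stated assumptions: a scaled dot-product attention over region embeddings stands in for the paper's attention module, and `dair_regularization` and the toy shapes are hypothetical names for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dair_regularization(robot_feats, region_feats):
    """Dot product between the two robots' attention distributions over
    object interaction regions. Minimizing it pushes the distributions
    toward orthogonality, i.e. the robots attend to different regions.

    robot_feats: (2, D) one embedding per robot.
    region_feats: (M, D) one embedding per interaction region.
    """
    scale = np.sqrt(region_feats.shape[1])
    attn = softmax(robot_feats @ region_feats.T / scale)   # (2, M)
    return float(attn[0] @ attn[1])    # in (0, 1]; near 0 when disjoint

rng = np.random.default_rng(0)
regions = rng.normal(size=(4, 32))
# Robots aligned with different regions incur a small penalty...
print(dair_regularization(np.stack([regions[0], regions[2]]) * 3, regions))
# ...while robots attending to the same region are penalized heavily.
print(dair_regularization(np.stack([regions[0], regions[0]]) * 3, regions))
```

In training, this scalar would be added to the RL loss so that gradients reshape the attention maps of the two robots away from overlapping interaction regions.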
Such a mechanism is also called intrinsic motivation, which provides a reward for agents to explore what makes them curious (Oudeyer et al., 2007; Barto, 2013; Bellemare et al., 2016; Ostrovski et al., 2017; Huang et al., 2019). Recently, it has also been shown that agents can explore with such an intrinsic motivation even without extrinsic rewards (Pathak et al., 2017; Burda et al., 2018; 2019). Besides using prediction error, diverse skills can also be discovered by maximizing the mutual information between skills and states as the intrinsic motivation (Eysenbach et al., 2019; Sharma et al., 2020). While these approaches have achieved encouraging results in single-agent settings, they are not directly applicable to environments with multiple agents. In our paper, we propose a novel intrinsic regularization that helps two robots work actively on different sub-tasks. Multi-agent collaboration. Cooperative multi-agent reinforcement learning has exhibited progress over recent years (Foerster et al., 2016; He et al., 2016; Peng et al., 2017; Lowe et al., 2017; Foerster et al., 2018; Sunehag et al., 2018; Rashid et al., 2018; Son et al., 2019; Wang et al., 2020a). For example, Lowe et al. (2017) proposed to extend the DDPG algorithm (Lillicrap et al., 2016) to the multi-agent setting with decentralized policies and centralized Q-functions, which implicitly encourages the agents to cooperate. However, exploration remains a bottleneck, and is in fact even more severe in multi-agent RL. Motivated by the previous success in the single-agent setting, intrinsic motivation has also been introduced to help multiple agents explore and collaborate (Foerster et al., 2016; Strouse et al., 2018; Hughes et al., 2018; Iqbal & Sha, 2019b; Jaques et al., 2019; Wang et al., 2020b). For example, Jaques et al. (2019) proposed to use social motivation to provide intrinsic rewards that model the influence of one agent on another agent's decision making. The work most related to ours is by Chitnis et al. (2020b) on intrinsic motivation for synergistic behaviors, which encourages the robots to collaborate on a task that is hard to solve by a single robot. As that work does not focus on the domination and conflict problems, our work on disentangled attention is a complementary technique. Bimanual manipulation. Bimanual manipulation has long been studied as a problem involving both hardware design and control (Raibert & Craig, 1981; Hsu, 1993; Xi et al., 1996; Smith et al., 2012). In recent years, researchers have applied learning-based approaches to bimanual manipulation using imitation learning from demonstrations (Zollner et al.; Gribovskaya & Billard, 2008; Tung et al., 2020; Xie et al., 2020) and reinforcement learning (Kroemer et al., 2015; Amadio et al., 2019; Chitnis et al., 2020a;b; Ha et al., 2020). For example, Amadio et al. (2019) proposed to leverage probabilistic movement primitives from human demonstrations. Chitnis et al. (2020a) further introduced a high-level planning policy that combines a set of parameterized primitives to solve complex manipulation tasks. In contrast to these works, our approach does not assume access to pre-defined primitives. Both robots learn how to perform each sub-task and how to collaborate without conflicts in an end-to-end manner, which makes the approach more general. Attention mechanism. Our intrinsic regularization is built upon the attention mechanism, which has been widely applied in natural language processing (Vaswani et al., 2017) and computer vision (Wang et al., 2018; Dosovitskiy et al., 2021).
Recently, the attention mechanism has also been utilized in multi-agent RL to model the communication and collaboration between agents (Zambaldi et al., 2018; Jiang & Lu, 2018; Malysheva et al., 2018; Iqbal & Sha, 2019a; Long et al., 2020). For example, Long et al. (2020) proposed to utilize attention to flexibly increase the number of agents and perform curriculum learning for large-scale multi-agent interactions. Li et al. (2020) adopt the attention mechanism to generalize multi-object stacking with a single arm. In our paper, instead of simply using attention for interaction between the hand and a variable number of objects, we propose DAIR to encourage the agents to attend to different sub-tasks for better collaboration. 3 PRELIMINARIES. We consider a multi-agent Markov decision process (MDP) (Littman, 1994) with $N$ agents, represented by the tuple $(\mathcal{S}, \mathcal{A}, P, R, H, \gamma)$. The state $s \in \mathcal{S}$ and the action $a_i \in \mathcal{A}$ for agent $i$ are continuous. $P(s^{t+1} \mid s^t, a_1^t, \dots, a_N^t)$ represents the stochastic transition dynamics, and $R_i(s^t, a_i^t)$ represents the reward function for agent $i$. $H$ is the horizon and $\gamma$ is the discount factor. The policy $\pi_{\phi_i}(a_i^t \mid s^t)$ for agent $i$ is parameterized by $\phi_i$. The goal is to learn multi-agent policies maximizing the return. In this paper, we tackle a two-agent collaboration problem ($N = 2$), but our method can generalize to more agents. 3.1 REINFORCEMENT LEARNING WITH SOFT ACTOR-CRITIC. We adopt Soft Actor-Critic (SAC) (Haarnoja et al., 2018) for reinforcement learning (RL) training in this paper. It is an off-policy RL method using the actor-critic framework. The soft Q-function for agent $i$ is $Q_{\theta_i}(s^t, a_i^t)$, parameterized by $\theta_i$. For agent $i$, there are three types of parameters to learn in SAC: (i) the policy parameters $\phi_i$; (ii) a temperature $\tau_i$; (iii) the soft Q-function parameters $\theta_i$.
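Before the objectives are written out formally, here is a toy numpy sketch of how the three per-agent SAC losses (policy, temperature, and Q-function) are computed on a minibatch. All array names and values are hypothetical, purely for illustration:

```python
import numpy as np

def sac_losses(logp, q, q_target, tau, min_entropy):
    """Per-agent SAC objectives on a minibatch.
    logp: log-probabilities of sampled actions; q: soft Q-values;
    q_target: Bellman targets R + gamma * Q'; tau: temperature coefficient."""
    policy_loss = np.mean(tau * logp - q)                 # policy objective J_pi
    temp_loss = np.mean(-tau * logp - tau * min_entropy)  # temperature objective J(tau)
    q_loss = np.mean(0.5 * (q - q_target) ** 2)           # soft Bellman residual J_Q
    return policy_loss, temp_loss, q_loss

logp = np.array([-1.2, -0.8, -2.0])
q = np.array([0.5, 1.0, 0.2])
q_target = np.array([0.6, 0.9, 0.4])
policy_loss, temp_loss, q_loss = sac_losses(logp, q, q_target, tau=0.2, min_entropy=1.0)
```

In practice each loss is minimized with respect to its own parameters only (policy, temperature, and Q-network, respectively), with gradients stopped through the other quantities.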
We can write the policy optimization objective for agent $i$ as
$$J_\pi(\phi_i) = \mathbb{E}_{s^t \sim \mathcal{D}}\big[\mathbb{E}_{a_i^t \sim \pi_{\phi_i}}[\tau_i \log \pi_{\phi_i}(a_i^t \mid s^t) - Q_{\theta_i}(s^t, a_i^t)]\big], \quad (1)$$
where $\tau_i$ is a learnable temperature coefficient for agent $i$, and $\mathcal{D}$ is the replay buffer. The temperature can be learned to maintain the entropy level of the policy:
$$J(\tau_i) = \mathbb{E}_{a_i^t \sim \pi_{\phi_i}}[-\tau_i \log \pi_{\phi_i}(a_i^t \mid s^t) - \tau_i \bar{H}], \quad (2)$$
where $\bar{H}$ is a desired minimum expected entropy. The soft Q-function parameters $\theta_i$ for agent $i$ can be trained by minimizing the soft Bellman residual:
$$J_Q(\theta_i) = \mathbb{E}_{(s^t, a_i^t) \sim \mathcal{D}}\Big[\tfrac{1}{2}\big(Q_{\theta_i}(s^t, a_i^t) - \hat{Q}(s^t, a_i^t)\big)^2\Big], \quad (3)$$
$$\hat{Q}(s^t, a_i^t) = R_i(s^t, a_i^t) + \gamma\, \mathbb{E}_{a_i^{t+1} \sim \pi_{\phi_i}}\big[Q_{\theta_i}(s^{t+1}, a_i^{t+1})\big]. \quad (4)$$
Since we focus on collaborative robotic manipulation tasks, the reward is always shared and synchronized among the agents. That is, if one agent is able to finish a goal and obtain a reward, the other agents will receive the same reward. | This paper proposes an intrinsic regularization for bimanual manipulation that encourages the two robot arms to focus on different regions, which prevents both arms from operating on the same object at the same time. The proposed method realizes this idea by computing attention between robots and objects and then constraining the dot product between the attention distributions of the two arms to be small. This effectively prevents conflict between the two arms by encouraging them to focus on different objects (sub-tasks), leading to efficient RL training and safe behaviors. The empirical evaluation demonstrates that the proposed method is generalizable to unseen situations thanks to the attention mechanism with the regularization. | SP:7a40b403f19e24b0004ecc7a0bf1d9ba8ceae1d5 |
DAIR: Disentangled Attention Intrinsic Regularization for Safe and Efficient Bimanual Manipulation | 1 INTRODUCTION. Consider bimanual robot manipulation tasks such as rearranging multiple objects to their target locations in Figure 1(a). This complex and compositional task is very challenging, as the agents first need to decompose it into several sub-tasks (pushing or grasping each object), and then the two agents need to figure out how to allocate the sub-tasks between themselves (which object each robot should operate on) for better collaboration. Importantly, the two robots should avoid collisions in a narrow space for safety reasons. While training a single RL agent that can solve such compositional tasks has attracted research attention recently (Chang et al., 2019; Peng et al., 2019; Devin et al., 2019; Jiang et al., 2019; Li et al., 2021; 2020), two main challenges remain barely touched when it comes to tackling bimanual manipulation: (i) domination, i.e., one robot may tend to solve all the sub-tasks while the other robot remains idle, which hurts task-solving efficiency; (ii) conflict, i.e., two robots may try to solve the same sub-task simultaneously, which results in unsafe conflicts and interruptions in the shared workspace. One possible solution is to design a task-allocation reward function to encourage better coordination. However, it is particularly non-trivial, and often sub-optimal, to manually design such a reward function for complex problems that contain a large continuous sub-task space, such as the rearrangement task in Figure 1(a). Moreover, even with such a reward function in hand, it remains unclear how to reduce collisions, particularly for tasks that require two robots to act simultaneously and safely.
For example, in the task shown in Figure 1(d), one robot needs to push the green door to make space for the other robot to move the blue box to the goal position. However, these two robots can easily interrupt and collide with each other when they perform these coordination actions. We consider an alternative setting using sparse rewards without explicitly assigning sub-tasks to the robots. However, this leads to another challenge: how can we encourage the robots to explore collaborative and safe behaviors with limited positive feedback? For bimanual manipulation, an intrinsic motivation was introduced by Chitnis et al. (2020b), leveraging the difference between the actual effect of an action (taken by two robots) and the composition of the individual effects predicted by each agent using a forward model. While this intrinsic reward encourages the two robots to collaborate on tasks that are hard to achieve by a single robot, it does not address the domination and conflict problems for efficient and safe manipulation. In this paper, we present DAIR: Disentangled Attention Intrinsic Regularization, which encourages the two robots to safely and efficiently collaborate on different sub-tasks during bimanual manipulation. Instead of designing a new intrinsic reward function, we introduce a simple regularization term for representation learning, which encourages the robots to attend to different interaction regions. Specifically, we adopt the attention mechanism (Vaswani et al., 2017) in both our policy and value networks, where we compute the dot product between each robot representation and the object interaction region representations to obtain a probability distribution. Each robot has its own probability distribution representing which interaction region it is focusing on. We define our intrinsic regularization as minimizing the dot product between the probability distributions of the two robots (i.e., encouraging them to be orthogonal) at each time step. By adding this loss function, different robots are regularized to attend to different interaction points within their policy representation. This forces the policies to tackle sub-tasks over disjoint working spaces without interfering with each other. We remark that disentangled attention can be generalized to environments with multiple agents. In our experiments, we focus on five diverse manipulation tasks in simulation environments with two Fetch robots, as shown in Figure 1. These tasks require the robots not only to manipulate multiple objects (more than two, up to eight), with each object offering one interaction region, but also a single heavy object with multiple interaction regions (Figure 1(e)). We show that our approach not only improves performance and sample efficiency in learning, but also helps avoid the domination problem and largely reduces conflicts for safe coordination between the two robots. Moreover, the learned policies solve the task in fewer steps, which is the key advantage of bimanual cooperation over single-arm manipulation. We highlight our main contributions as: • Identification of two important problems (domination and conflict) in training RL agents for safe bimanual manipulation, and a new robotics task set with one to eight objects. • We propose DAIR, a novel and general intrinsic regularization. It not only improves the success rate in bimanual manipulation and solves the tasks more efficiently, but also reduces the conflicts between robots. This allows the robots to collaborate and coordinate more safely. 2 RELATED WORK. Intrinsic motivation in reinforcement learning. To train RL agents with sparse rewards, Schmidhuber (1991) first proposed to motivate the agent to reach states with large model prediction error, which indicates that the state is currently unexplored and unseen by the model.
Such a mechanism is also called intrinsic motivation, which provides a reward for agents to explore what makes them curious (Oudeyer et al., 2007; Barto, 2013; Bellemare et al., 2016; Ostrovski et al., 2017; Huang et al., 2019). Recently, it has also been shown that agents can explore with such an intrinsic motivation even without extrinsic rewards (Pathak et al., 2017; Burda et al., 2018; 2019). Besides using prediction error, diverse skills can also be discovered by maximizing the mutual information between skills and states as the intrinsic motivation (Eysenbach et al., 2019; Sharma et al., 2020). While these approaches have achieved encouraging results in single-agent settings, they are not directly applicable to environments with multiple agents. In our paper, we propose a novel intrinsic regularization that helps two robots work actively on different sub-tasks. Multi-agent collaboration. Cooperative multi-agent reinforcement learning has exhibited progress over recent years (Foerster et al., 2016; He et al., 2016; Peng et al., 2017; Lowe et al., 2017; Foerster et al., 2018; Sunehag et al., 2018; Rashid et al., 2018; Son et al., 2019; Wang et al., 2020a). For example, Lowe et al. (2017) proposed to extend the DDPG algorithm (Lillicrap et al., 2016) to the multi-agent setting with decentralized policies and centralized Q-functions, which implicitly encourages the agents to cooperate. However, exploration remains a bottleneck, and is in fact even more severe in multi-agent RL. Motivated by the previous success in the single-agent setting, intrinsic motivation has also been introduced to help multiple agents explore and collaborate (Foerster et al., 2016; Strouse et al., 2018; Hughes et al., 2018; Iqbal & Sha, 2019b; Jaques et al., 2019; Wang et al., 2020b). For example, Jaques et al. (2019) proposed to use social motivation to provide intrinsic rewards that model the influence of one agent on another agent's decision making. The work most related to ours is by Chitnis et al. (2020b) on intrinsic motivation for synergistic behaviors, which encourages the robots to collaborate on a task that is hard to solve by a single robot. As that work does not focus on the domination and conflict problems, our work on disentangled attention is a complementary technique. Bimanual manipulation. Bimanual manipulation has long been studied as a problem involving both hardware design and control (Raibert & Craig, 1981; Hsu, 1993; Xi et al., 1996; Smith et al., 2012). In recent years, researchers have applied learning-based approaches to bimanual manipulation using imitation learning from demonstrations (Zollner et al.; Gribovskaya & Billard, 2008; Tung et al., 2020; Xie et al., 2020) and reinforcement learning (Kroemer et al., 2015; Amadio et al., 2019; Chitnis et al., 2020a;b; Ha et al., 2020). For example, Amadio et al. (2019) proposed to leverage probabilistic movement primitives from human demonstrations. Chitnis et al. (2020a) further introduced a high-level planning policy that combines a set of parameterized primitives to solve complex manipulation tasks. In contrast to these works, our approach does not assume access to pre-defined primitives. Both robots learn how to perform each sub-task and how to collaborate without conflicts in an end-to-end manner, which makes the approach more general. Attention mechanism. Our intrinsic regularization is built upon the attention mechanism, which has been widely applied in natural language processing (Vaswani et al., 2017) and computer vision (Wang et al., 2018; Dosovitskiy et al., 2021).
Recently, the attention mechanism has also been utilized in multi-agent RL to model the communication and collaboration between agents (Zambaldi et al., 2018; Jiang & Lu, 2018; Malysheva et al., 2018; Iqbal & Sha, 2019a; Long et al., 2020). For example, Long et al. (2020) proposed to utilize attention to flexibly increase the number of agents and perform curriculum learning for large-scale multi-agent interactions. Li et al. (2020) adopt the attention mechanism to generalize multi-object stacking with a single arm. In our paper, instead of simply using attention for interaction between the hand and a variable number of objects, we propose DAIR to encourage the agents to attend to different sub-tasks for better collaboration. 3 PRELIMINARIES. We consider a multi-agent Markov decision process (MDP) (Littman, 1994) with $N$ agents, represented by the tuple $(\mathcal{S}, \mathcal{A}, P, R, H, \gamma)$. The state $s \in \mathcal{S}$ and the action $a_i \in \mathcal{A}$ for agent $i$ are continuous. $P(s^{t+1} \mid s^t, a_1^t, \dots, a_N^t)$ represents the stochastic transition dynamics, and $R_i(s^t, a_i^t)$ represents the reward function for agent $i$. $H$ is the horizon and $\gamma$ is the discount factor. The policy $\pi_{\phi_i}(a_i^t \mid s^t)$ for agent $i$ is parameterized by $\phi_i$. The goal is to learn multi-agent policies maximizing the return. In this paper, we tackle a two-agent collaboration problem ($N = 2$), but our method can generalize to more agents. 3.1 REINFORCEMENT LEARNING WITH SOFT ACTOR-CRITIC. We adopt Soft Actor-Critic (SAC) (Haarnoja et al., 2018) for reinforcement learning (RL) training in this paper. It is an off-policy RL method using the actor-critic framework. The soft Q-function for agent $i$ is $Q_{\theta_i}(s^t, a_i^t)$, parameterized by $\theta_i$. For agent $i$, there are three types of parameters to learn in SAC: (i) the policy parameters $\phi_i$; (ii) a temperature $\tau_i$; (iii) the soft Q-function parameters $\theta_i$.
We can write the policy optimization objective for agent $i$ as
$$J_\pi(\phi_i) = \mathbb{E}_{s^t \sim \mathcal{D}}\big[\mathbb{E}_{a_i^t \sim \pi_{\phi_i}}[\tau_i \log \pi_{\phi_i}(a_i^t \mid s^t) - Q_{\theta_i}(s^t, a_i^t)]\big], \quad (1)$$
where $\tau_i$ is a learnable temperature coefficient for agent $i$, and $\mathcal{D}$ is the replay buffer. The temperature can be learned to maintain the entropy level of the policy:
$$J(\tau_i) = \mathbb{E}_{a_i^t \sim \pi_{\phi_i}}[-\tau_i \log \pi_{\phi_i}(a_i^t \mid s^t) - \tau_i \bar{H}], \quad (2)$$
where $\bar{H}$ is a desired minimum expected entropy. The soft Q-function parameters $\theta_i$ for agent $i$ can be trained by minimizing the soft Bellman residual:
$$J_Q(\theta_i) = \mathbb{E}_{(s^t, a_i^t) \sim \mathcal{D}}\Big[\tfrac{1}{2}\big(Q_{\theta_i}(s^t, a_i^t) - \hat{Q}(s^t, a_i^t)\big)^2\Big], \quad (3)$$
$$\hat{Q}(s^t, a_i^t) = R_i(s^t, a_i^t) + \gamma\, \mathbb{E}_{a_i^{t+1} \sim \pi_{\phi_i}}\big[Q_{\theta_i}(s^{t+1}, a_i^{t+1})\big]. \quad (4)$$
Since we focus on collaborative robotic manipulation tasks, the reward is always shared and synchronized among the agents. That is, if one agent is able to finish a goal and obtain a reward, the other agents will receive the same reward. | This paper proposes an attention-based solution to dual-arm robot manipulation from sparse rewards that relies on a novel idea for intrinsic regularisation. The proposed regularisation term encourages each robotic arm to focus on separate subtasks and objects. The proposed approach aims to reduce the problem of a dominating agent emerging in collaborative settings and to reduce the number of collisions between operating robots in a shared workspace. This work is evaluated in simulation and the obtained results demonstrate the ability of the proposed solution, DAIR, to not only improve both the success rate and sample efficiency of the learning process but also to reduce the number of conflicts between the two operating arms. The approach is interesting and the results seem promising but I have some concerns and additional questions that I detail below. | SP:7a40b403f19e24b0004ecc7a0bf1d9ba8ceae1d5 |
IntSGD: Adaptive Floatless Compression of Stochastic Gradients | 1 INTRODUCTION. Many recent breakthroughs in machine learning were made possible by the introduction of large, sophisticated, high-capacity supervised models whose training requires days or even weeks of computation (Hinton et al., 2015; He et al., 2016; Huang et al., 2017; Devlin et al., 2018). However, it would not be possible to train them without corresponding advances in parallel and distributed algorithms capable of taking advantage of modern hardware. Very large models are typically trained on vast collections of training data stored in a distributed fashion across a number of compute nodes that need to communicate throughout the training process. In this scenario, reliance on efficient communication protocols is of utmost importance. Communication in distributed systems. The training of large models relies on fast synchronization of gradients computed in a parallel fashion. Formally, to train a model, we want to solve the problem of parallel/distributed minimization of the average of $n$ functions:
$$\min_{x \in \mathbb{R}^d} \Big[ f(x) \stackrel{\text{def}}{=} \frac{1}{n} \sum_{i=1}^n f_i(x) \Big], \qquad f_i(x) \stackrel{\text{def}}{=} \mathbb{E}_\xi[f_i(x; \xi)], \quad (1)$$
where we compute the gradients of the stochastic realizations $f_i(x; \xi)$. The two dominating protocols for gradient synchronization are all-reduce and all-gather aggregation, which may use either a parameter server or all-to-all communication under the hood. The core difference between them is that all-gather communicates all vectors, whereas all-reduce only outputs their average. As shown in previous works, current distributed deep learning algorithms predominantly use all-reduce, as it scales much better than all-gather (Vogels et al., 2019; Agarwal et al., 2021). A popular way to reduce the communication cost of both the all-reduce and all-gather primitives is to use lossy compression of gradients (Ramesh et al., 2021).
To study the benefit of lossy compression, large swaths of recent literature on distributed training attribute the cost of sending a single vector from a worker to the server to the number of bits needed to represent it. Based on this abstraction, various elaborate vector compression techniques (see Table 1 in (Beznosikov et al., 2020; Xu et al., 2020; Safaryan et al., 2020)) and algorithms have been designed for higher and higher compression ratios. However, in real systems, the efficiency of sending a vector is not fully characterized by the number of bits alone, because: • First, many compressors with high compression ratios (e.g., natural compression (Horváth et al., 2019), quantization (Alistarh et al., 2017), top-k sparsification, sign (Bernstein et al., 2018)) are not compatible with the efficient all-reduce primitive and require an all-gather implementation. • Second, some compressors rely on expensive operations such as low-rank decomposition (Wang et al., 2018; Vogels et al., 2019) or bit-level operations (Horváth et al., 2019), whose computation overhead may outweigh the benefits of the reduced communication load. • Third, algorithms with biased compressors, such as Top-k SGD, SignSGD, and PowerSGD (Vogels et al., 2019), require the error-feedback (EF-SGD) mechanism (Stich et al., 2018; Karimireddy et al., 2019) to ensure convergence. Alas, error feedback needs extra sequences that may not fit the low memory budget of GPUs. Moreover, to the best of our knowledge, no convergence guarantee has been established for EF-SGD on non-smooth objectives with multiple workers. SwitchML. Another approach to combating long communication times is to improve the hardware itself. The recently proposed SwitchML is an alternative to the NVIDIA Collective Communications Library (NCCL) on real-world hardware (Sapio et al., 2021).
The first key component of SwitchML is in-network aggregation (INA) by a programmable switch. INA reduces communication cost and latency because the execution can be parallelized and pipelined. To be specific, it splits the vector to aggregate into chunks and processes them individually in the switch pipeline. The advantages of INA over the parameter server and all-reduce in terms of latency and communication cost have been theoretically and empirically justified by Sapio et al. (2021). The second key component of SwitchML is stochastic gradient descent with integer rounding and aggregation. Instead of reducing the data volume to exchange, the goal of integer rounding in SwitchML is to fit the limited computation capability of modern programmable switches, which only support integer additions and logic operations. To increase the rounding precision, the gradient $g_i^k$ on device $i$ is multiplied by a positive scaling factor $\alpha^k$ known to every worker and then rounded to an integer vector $\mathrm{Int}(\alpha^k \circ g_i^k)$. As there is no additional scaling or decompression before aggregating the communicated vectors, their sum can be computed on the fly. Then, each worker can divide the aggregated gradient by $n\alpha^k$ to update the model. However, Sapio et al. (2021) remark that the choice of the scaling factor $\alpha^k$ requires special care. In their presentation (footnote 1), one of the authors notes: "A bad choice of scaling factor can reduce the performance." To this end, they propose a heuristic-based profiling step that is executed before the gradient aggregation and keeps the rounded integers small enough to fit in 32 bits. We refer to their algorithm, including the profiling step, as Heuristic IntSGD. Unfortunately, no convergence guarantee has been established for that algorithm. This is where our theory comes to the rescue.
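The benefit of a scaling factor shared by all workers can be seen in a tiny sketch: every worker sends only integers, the network sums them as integers, and a single division by $n\alpha$ at the end recovers an approximate average. The gradient values below are made up, and deterministic rounding is used purely for illustration:

```python
import numpy as np

alpha = 100.0  # shared scaling factor, known to every worker
grads = [np.array([0.123, -0.456]), np.array([0.789, 0.012])]

# Each worker transmits only integers; the switch can sum them on the fly,
# since no per-worker rescaling or decompression is needed before adding.
ints = [np.round(alpha * g).astype(np.int64) for g in grads]
total = ints[0] + ints[1]            # integer-only in-network aggregation

# One division by n * alpha decodes the (approximate) average gradient.
avg = total / (len(grads) * alpha)
```

With per-worker scaling factors (as in norm-based quantizers), each worker's message would have to be rescaled individually before summation, which rules out this on-the-fly integer aggregation.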
By rigorously and exhaustively analyzing integer rounding based on scaling, we find adaptive rules for the scaling factor $\alpha^k$ that do not require the profiling employed by Sapio et al. (2021). As we show in the remainder of the paper, our algorithm is perfectly suited both for the in-network aggregation (INA) of SwitchML and for other efficient primitives such as all-reduce. 1.1 CONTRIBUTIONS. We summarize the key differences between our algorithm and prior work in Table 1, and we list our main contributions below. • Adaptive IntSGD. We develop a family of computationally cheap adaptive scaling factors for provably convergent IntSGD. It is a better alternative to the Heuristic IntSGD of (Sapio et al., 2021), which requires expensive operations and does not ensure convergence. • Rates. We obtain the first analysis of integer rounding and aggregation for distributed machine learning. For all of the proposed variants, we prove convergence rates of IntSGD that match those of full-precision SGD up to constant factors. Our results are tight and apply to both convex and nonconvex problems. Our analysis does not require any extra assumption compared to those typically invoked for SGD. In contrast to other compression-based methods, IntSGD has the same rate as full-precision SGD even on non-smooth problems. • IntDIANA. We observe empirically that IntSGD struggles when the devices have heterogeneous (non-identical) data (an issue it shares with vanilla SGD) and propose an alternative method, IntDIANA, that can provably alleviate this issue. We also show that our tools are useful for extending the methodology beyond SGD methods, for example, to variance-reduced methods (Johnson & Zhang, 2013; Allen-Zhu & Hazan, 2016; Kovalev et al., 2020; Gower et al., 2020) with integer rounding. Please refer to Appendix A.2 for theoretical results and Appendix C.5 for the empirical verification. 1 https://youtu.be/gBPHFyBWVoM?t=606
2 ADAPTIVE INTEGER ROUNDING AND INTSGD. By randomized integer rounding we mean the mapping $\mathrm{Int}: \mathbb{R} \to \mathbb{Z}$ defined by
$$\mathrm{Int}(t) \stackrel{\text{def}}{=} \begin{cases} \lfloor t \rfloor + 1, & \text{with probability } p_t \stackrel{\text{def}}{=} t - \lfloor t \rfloor, \\ \lfloor t \rfloor, & \text{with probability } 1 - p_t, \end{cases}$$
where $\lfloor t \rfloor$ denotes the floor of $t \in \mathbb{R}$, i.e., the integer $k \in \mathbb{Z}$ such that $k \le t < k + 1$. Note that $\mathbb{E}[\mathrm{Int}(t)] = (t - \lfloor t \rfloor)(\lfloor t \rfloor + 1) + (\lfloor t \rfloor + 1 - t)\lfloor t \rfloor = t$. We extend this mapping to vectors $x \in \mathbb{R}^d$ by applying it element-wise: $\mathrm{Int}(x)_i \stackrel{\text{def}}{=} \mathrm{Int}(x_i)$. 2.1 ADAPTIVE INTEGER ROUNDING. Given a scaling vector $\alpha \in \mathbb{R}^d$ with nonzero entries, we further define the adaptive integer rounding operator $Q: \mathbb{R}^d \to \mathbb{R}^d$ by
$$Q(x) \stackrel{\text{def}}{=} \tfrac{1}{\alpha} \circ \mathrm{Int}(\alpha \circ x), \quad (2)$$
where $a \circ b \stackrel{\text{def}}{=} (a_1 b_1, \dots, a_d b_d) \in \mathbb{R}^d$ denotes the Hadamard product of two vectors $a = (a_1, \dots, a_d) \in \mathbb{R}^d$ and $b = (b_1, \dots, b_d) \in \mathbb{R}^d$. As we show below, the adaptive integer rounding operator (2) has several properties which will be useful in our analysis. In particular, the operator is unbiased, and its variance can be controlled by the choice of a possibly random scaling vector $\alpha \in \mathbb{R}^d_{++}$. Lemma 1. For any $x \in \mathbb{R}^d$ and $\alpha \in \mathbb{R}^d_{++}$, we have
$$\tfrac{1}{\alpha} \circ \mathbb{E}[\mathrm{Int}(\alpha \circ x)] = x, \quad (3)$$
$$\mathbb{E}\Big[\big\| \tfrac{1}{\alpha} \circ \mathrm{Int}(\alpha \circ x) - x \big\|^2\Big] \le \sum_{j=1}^d \frac{1}{4\alpha_j^2}. \quad (4)$$
The expectations above are taken with respect to the randomness inherent in the rounding operator.
Algorithm 1 IntSGD. Default setting for the tested problems: $\beta = 0.9$, $\varepsilon = 10^{-8}$.
1: Params: stepsizes $\eta_k$, scaling vectors $\alpha^k \in \mathbb{R}^d$
2: Init: $x^0 \in \mathbb{R}^d$, $x^1 = x^0 - \eta_0 \frac{1}{n} \sum_{i=1}^n g_i^0$
3: for $k = 1, 2, \dots$ do
4: for each device $i = 1, 2, \dots, n$ do
5: Compute stochastic gradient $g_i^k$ ($\mathbb{E}[g_i^k \mid x^k] \in \partial f_i(x^k)$)
6: Maintain the moving average: $r^k = \beta r^{k-1} + (1 - \beta)\|x^k - x^{k-1}\|^2$ ($r^k$ is a scalar)
7: Compute the adaptive scaling factor: $\alpha^k = \sqrt{d} \, / \sqrt{2 n r^k / \eta_k^2 + \varepsilon^2}$
8: Scale and round the local gradient: $\mathrm{Int}(\alpha^k \circ g_i^k)$
9: end for
10: Aggregate the integer vectors $\mathrm{Int}(\alpha^k \circ g_i^k)$ by either all-reduce or in-network aggregation (INA)
11: for each device $i = 1, 2, \dots, n$ do
12: Compute the (sub)gradient estimator: $\tilde{g}^k = \frac{1}{n\alpha^k} \circ \sum_{i=1}^n \mathrm{Int}(\alpha^k \circ g_i^k)$
13: Update the model parameters: $x^{k+1} = x^k - \eta_k \tilde{g}^k$
14: end for
15: end for
2.2 NEW ALGORITHM: INTSGD. We are ready to present our algorithm, IntSGD. At iteration $k$, each device $i$ computes a stochastic (sub)gradient vector $g_i^k$, i.e., a vector satisfying
$$\mathbb{E}[g_i^k \mid x^k] \in \partial f_i(x^k). \quad (5)$$
Prior to communication, each worker $i$ rescales its stochastic (sub)gradient $g_i^k$ using the same vector $\alpha^k \in \mathbb{R}^d_{++}$ and applies the randomized rounding operator $\mathrm{Int}$. The resulting vectors $\mathrm{Int}(\alpha^k \circ g_i^k)$ are aggregated to obtain $\sum_{i=1}^n \mathrm{Int}(\alpha^k \circ g_i^k)$, which is also an integer vector. Each device subsequently performs division by $n$ and inverse scaling to decode the message, obtaining the vector
$$\tilde{g}^k \stackrel{\text{def}}{=} \frac{1}{n\alpha^k} \circ \sum_{i=1}^n \mathrm{Int}(\alpha^k \circ g_i^k) = \frac{1}{n} \sum_{i=1}^n \frac{1}{\alpha^k} \circ \mathrm{Int}(\alpha^k \circ g_i^k) \stackrel{(2)}{=} \frac{1}{n} \sum_{i=1}^n Q(g_i^k).$$
Here $\alpha^k$ is a random adaptive scaling factor calculated from historical information; we defer the design of $\alpha^k$ to Section 4. By combining (5) and (3), we observe that $\tilde{g}^k$ is a stochastic (sub)gradient of $f$ at $x^k$. Finally, all devices perform in parallel an SGD-type step of the form $x^{k+1} = x^k - \eta_k \tilde{g}^k$, and the process is repeated. Our IntSGD method is formally stated as Algorithm 1 with the suggested rule for $\alpha$. Relation to QSGD (Alistarh et al., 2017). QSGD bears some similarity to IntSGD: both scale $g_i^k$ by a factor before quantization (the scaling factor in QSGD is $1/\|g_i^k\|$, for normalization). However, a key difference makes the communication efficiency of IntSGD much better than that of QSGD. The normalization factors $1/\|g_i^k\|$ in QSGD differ across workers, so the quantized values of the various workers need to be gathered and decompressed before aggregation.
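A runnable numpy sketch of the randomized rounding operator and one IntSGD iteration with the adaptive scaling factor of Algorithm 1. The toy quadratic objective, the constants, and all variable names are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_int(t):
    """Randomized rounding Int(t): floor(t)+1 w.p. t-floor(t), else floor(t); unbiased."""
    f = np.floor(t)
    return (f + (rng.random(t.shape) < (t - f))).astype(np.int64)

def intsgd_step(x_prev, x, grads, eta, r_prev, beta=0.9, eps=1e-8):
    """One IntSGD iteration for n workers holding (sub)gradients `grads`."""
    d, n = x.size, len(grads)
    r = beta * r_prev + (1 - beta) * np.sum((x - x_prev) ** 2)     # moving average
    alpha = np.sqrt(d) / np.sqrt(2 * n * r / eta ** 2 + eps ** 2)  # adaptive scaling
    total = sum(rand_int(alpha * g) for g in grads)                # integer aggregation
    g_tilde = total / (n * alpha)                                  # decode: divide by n*alpha
    return x - eta * g_tilde, r                                    # SGD-type step

# Toy problem: f_i(x) = ||x - b_i||^2 / 2, so grad_i = x - b_i and the
# minimizer of the average objective is the mean of the b_i.
b = [np.array([1.0, -2.0, 3.0, 0.5]), np.array([0.0, 2.0, 1.0, -0.5])]
x_prev = x = np.zeros(4)
r = 1.0
for _ in range(500):
    x_prev, (x, r) = x, intsgd_step(x_prev, x, [x - bi for bi in b], eta=0.1, r_prev=r)
```

Note how the scaling is self-stabilizing: as the iterates settle down, $r^k$ shrinks, $\alpha^k$ grows, and the rounding becomes progressively finer, so the integer noise vanishes near the solution.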
On the contrary, the scaling factor $\alpha_k$ in our IntSGD is the same for all workers, so the sum of integers can be computed on the fly. Thus, IntSGD supports the efficient all-reduce primitive while QSGD does not. As seen in the experimental results in Section 5, this makes a big difference in empirical performance. Moreover, the proof technique for IntSGD is also intrinsically different from that of QSGD. Please see the next section for the details. | The paper introduces a randomized compress-to-integer operator with a shared scaling factor for use in data communication in distributed SGD. The resulting algorithm is provably convergent, matches the behavior of SGD up to constant factors, and works well with the all-reduce primitive. The authors claim three main contributions: the IntSGD algorithm itself, the analysis of convergence rates, and the IntDIANA variant for heterogeneous data distributions (though this is only discussed in the appendices). | SP:ff0683b5929993e2f909081930bc30353a7a4d55
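The randomized rounding operator and its unbiasedness (Lemma 1) are easy to check numerically. The sketch below is my own minimal NumPy illustration of the element-wise $\mathrm{Int}$ mapping and the operator $Q$ from eq. (2); the function names are not from the paper.

```python
import numpy as np

def int_round(x, rng):
    """Randomized integer rounding: floor(x)+1 w.p. x - floor(x), else floor(x).
    Unbiased element-wise: E[int_round(x)] = x."""
    lo = np.floor(x)
    return (lo + (rng.random(np.shape(x)) < (x - lo))).astype(np.int64)

def q(x, alpha, rng):
    """Adaptive integer rounding Q(x) = (1/alpha) ∘ Int(alpha ∘ x), eq. (2)."""
    return int_round(alpha * x, rng) / alpha

rng = np.random.default_rng(0)
x = np.array([0.3, -1.7, 2.5])
alpha = np.array([10.0, 4.0, 2.0])

# Draw many independent quantizations of the same vector at once.
samples = q(np.tile(x, (200_000, 1)), alpha, rng)
print(samples.mean(axis=0))   # ≈ x, illustrating unbiasedness, eq. (3)
```

The empirical per-coordinate variance of `samples` also stays below the bound $1/(4\alpha_j^2)$ from eq. (4), since a Bernoulli rounding error has variance at most $1/4$ before rescaling.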
IntSGD: Adaptive Floatless Compression of Stochastic Gradients | 1 INTRODUCTION . Many recent breakthroughs in machine learning were made possible due to the introduction of large, sophisticated and high-capacity supervised models whose training requires days or even weeks of computation (Hinton et al., 2015; He et al., 2016; Huang et al., 2017; Devlin et al., 2018). However, it would not be possible to train them without corresponding advances in parallel and distributed algorithms capable of taking advantage of modern hardware. Very large models are typically trained on vast collections of training data stored in a distributed fashion across a number of compute nodes that need to communicate throughout the training process. In this scenario, reliance on efficient communication protocols is of utmost importance. Communication in distributed systems. The training process of large models relies on fast synchronization of gradients computed in a parallel fashion. Formally, to train a model, we want to solve the problem of parallel/distributed minimization of the average of $n$ functions: $\min_{x \in \mathbb{R}^d} f(x) \stackrel{\text{def}}{=} \frac{1}{n}\sum_{i=1}^{n} f_i(x)$, where $f_i(x) \stackrel{\text{def}}{=} \mathbb{E}_\xi[f_i(x;\xi)]$ (1), and where we compute the gradients of the stochastic realizations $f_i(x;\xi)$. The two dominating protocols for gradient synchronization are all-reduce and all-gather aggregation, which may use either a Parameter Server or all-to-all communication under the hood. The core difference between them is that all-gather communicates all vectors, whereas all-reduce only outputs their average. As shown in previous works, current distributed deep learning algorithms predominantly use all-reduce as it scales much better than all-gather (Vogels et al., 2019; Agarwal et al., 2021). A popular way to reduce the communication cost of both the all-reduce and all-gather primitives is to use lossy compression of gradients (Ramesh et al., 2021).
To study the benefit of lossy compression, large swaths of the recent literature on distributed training attribute the cost of sending a single vector from a worker to the server to the number of bits needed to represent it. Based on this abstraction, various elaborate vector compression techniques (see Table 1 in (Beznosikov et al., 2020; Xu et al., 2020; Safaryan et al., 2020)) and algorithms have been designed for higher and higher compression ratios. However, in real systems, the efficiency of sending a vector is not fully characterized by the number of bits alone, because: • First, many compressors with a high compression ratio (e.g., natural compression (Horváth et al., 2019), quantization (Alistarh et al., 2017), top-k sparsification, sign (Bernstein et al., 2018)) are not compatible with the efficient all-reduce primitive and require an all-gather implementation. • Second, some compressors rely on expensive operations such as low-rank decomposition (Wang et al., 2018; Vogels et al., 2019) or bit-level operations (Horváth et al., 2019), whose computation overhead may outweigh the benefits of the reduced communication load. • Third, algorithms with biased compressors, such as Top-k SGD, SignSGD, and PowerSGD (Vogels et al., 2019), require the error-feedback (EF-SGD) mechanism (Stich et al., 2018; Karimireddy et al., 2019) to ensure convergence. Alas, error feedback needs extra sequences that may not fit the low memory budget of GPUs. Moreover, to the best of our knowledge, no convergence guarantee has been established for EF-SGD on non-smooth objectives with multiple workers. SwitchML. Another approach to combating long communication times is to improve the hardware itself. The recently proposed SwitchML is an alternative to the NVIDIA Collective Communications Library (NCCL) on real-world hardware (Sapio et al., 2021).
The first key component of SwitchML is in-network aggregation (INA) by a programmable switch. INA reduces the communication cost and latency because the execution can be parallelized and pipelined. To be specific, it splits the vector to aggregate into chunks and processes them individually in the switch pipeline. The advantages of INA over the parameter server and all-reduce in terms of latency and communication cost have been theoretically and empirically justified by Sapio et al. (2021). The second key component of SwitchML is stochastic gradient descent with integer rounding and aggregation. Instead of reducing the data volume to exchange, the goal of integer rounding in SwitchML is to fit the limited computation capability of the modern programmable switch, which only supports integer additions and logic operations. To increase the rounding precision, the gradient $g_i^k$ on device $i$ is multiplied by a positive scaling factor $\alpha_k$ known to every worker and then rounded to an integer vector $\mathrm{Int}(\alpha_k \circ g_i^k)$. As there is no additional scaling or decompression before aggregating the communicated vectors, their sum can be computed on the fly. Then, each worker can divide the aggregated gradient by $n\alpha_k$ to update the model. However, Sapio et al. (2021) remark that the choice of the scaling factor $\alpha_k$ requires special care. In their presentation1, one of the authors notes: "A bad choice of scaling factor can reduce the performance." To this end, they propose a heuristic-based profiling step that is executed before the gradient aggregation and keeps the rounded integers small enough to fit in 32 bits. We refer to their algorithm including the profiling step as Heuristic IntSGD. Unfortunately, no convergence guarantee for that algorithm has been established. This is where our theory comes to the rescue.
By rigorously and exhaustively analyzing integer rounding based on scaling, we find adaptive rules for the scaling factor $\alpha_k$ that do not require the profiling employed by Sapio et al. (2021). As we will show in the remainder of the paper, our algorithm is perfectly suited both for the in-network aggregation (INA) of SwitchML and for other efficient primitives such as all-reduce. 1.1 CONTRIBUTIONS . We summarize the key differences between our algorithm and prior work in Table 1, and we also list our main contributions below. • Adaptive IntSGD. We develop a family of computationally cheap adaptive scaling factors for a provably convergent IntSGD. It is a better alternative to the Heuristic IntSGD of (Sapio et al., 2021), which requires expensive operations and does not ensure convergence. • Rates. We obtain the first analysis of integer rounding and aggregation for distributed machine learning. For all of the proposed variants, we prove convergence rates of IntSGD that match those of full-precision SGD up to constant factors. Our results are tight and apply to both convex and nonconvex problems. Our analysis does not require any extra assumption compared to those typically invoked for SGD. In contrast to other compression-based methods, IntSGD has the same rate as full-precision SGD even on non-smooth problems. • IntDIANA. We observe empirically that IntSGD struggles when the devices have heterogeneous (non-identical) data—an issue it shares with vanilla SGD—and propose an alternative method, IntDIANA, that can provably alleviate this issue. We also show that our tools are useful for extending the methodology beyond SGD methods, for example, to variance-reduced methods (Johnson & Zhang, 2013; Allen-Zhu & Hazan, 2016; Kovalev et al., 2020; Gower et al., 2020) with integer rounding. Please refer to Appendix A.2 for theoretical results and Appendix C.5 for the empirical verification. (Footnote 1: https://youtu.be/gBPHFyBWVoM?t=606)
2 ADAPTIVE INTEGER ROUNDING AND INTSGD . By randomized integer rounding we mean the mapping $\mathrm{Int}: \mathbb{R} \to \mathbb{Z}$ defined by $\mathrm{Int}(t) = \lfloor t\rfloor + 1$ with probability $p_t \stackrel{\text{def}}{=} t - \lfloor t\rfloor$, and $\mathrm{Int}(t) = \lfloor t\rfloor$ with probability $1 - p_t$, where $\lfloor t\rfloor$ denotes the floor of $t \in \mathbb{R}$, i.e., the unique $k \in \mathbb{Z}$ such that $k \le t < k+1$. Note that $\mathbb{E}[\mathrm{Int}(t)] = (t - \lfloor t\rfloor)(\lfloor t\rfloor + 1) + (\lfloor t\rfloor + 1 - t)\lfloor t\rfloor = t$. We extend this mapping to vectors $x \in \mathbb{R}^d$ by applying it element-wise: $\mathrm{Int}(x)_i \stackrel{\text{def}}{=} \mathrm{Int}(x_i)$. 2.1 ADAPTIVE INTEGER ROUNDING . Given a scaling vector $\alpha \in \mathbb{R}^d$ with nonzero entries, we further define the adaptive integer rounding operator $Q: \mathbb{R}^d \to \mathbb{R}^d$ by $Q(x) \stackrel{\text{def}}{=} \frac{1}{\alpha} \circ \mathrm{Int}(\alpha \circ x)$ (2), where $a \circ b \stackrel{\text{def}}{=} (a_1 b_1, \ldots, a_d b_d) \in \mathbb{R}^d$ denotes the Hadamard product of two vectors $a = (a_1, \ldots, a_d) \in \mathbb{R}^d$ and $b = (b_1, \ldots, b_d) \in \mathbb{R}^d$. As we show below, the adaptive integer rounding operator (2) has several properties which will be useful in our analysis. In particular, the operator is unbiased, and its variance can be controlled by the choice of a possibly random scaling vector $\alpha \in \mathbb{R}^d_{++}$. Lemma 1. For any $x \in \mathbb{R}^d$ and $\alpha \in \mathbb{R}^d_{++}$, we have $\frac{1}{\alpha} \circ \mathbb{E}[\mathrm{Int}(\alpha \circ x)] = x$ (3) and $\mathbb{E}\big[\big\|\frac{1}{\alpha} \circ \mathrm{Int}(\alpha \circ x) - x\big\|^2\big] \le \sum_{j=1}^{d} \frac{1}{4\alpha_j^2}$ (4). Algorithm 1 IntSGD. Default setting for the tested problems: $\beta = 0.9$, $\varepsilon = 10^{-8}$. 1: Params: stepsizes $\eta_k$, scaling vectors $\alpha_k \in \mathbb{R}^d$. 2: Init: $x^0 \in \mathbb{R}^d$, $x^1 = x^0 - \eta_0 \frac{1}{n}\sum_{i=1}^{n} g_i^0$. 3: for $k = 1, 2, \ldots$ do 4: for each device $i = 1, 2, \ldots, n$ do 5: Compute the stochastic gradient $g_i^k$ ($\mathbb{E}[g_i^k \mid x^k] \in \partial f_i(x^k)$). 6: Maintain the moving average: $r_k = \beta r_{k-1} + (1-\beta)\|x^k - x^{k-1}\|^2$ ($r_k$ is a scalar). 7: Compute the adaptive scaling factor: $\alpha_k = \sqrt{d}\,/\sqrt{2 n r_k/\eta_k^2 + \varepsilon^2}$. 8: Scale and round the local gradient: $\mathrm{Int}(\alpha_k \circ g_i^k)$. 9: end for. 10: Aggregate the $\mathrm{Int}(\alpha_k \circ g_i^k)$ by either all-reduce or in-network aggregation (INA). 11: for each device $i = 1, 2, \ldots, n$ do 12: Compute the (sub)gradient estimator: $\tilde g^k = \frac{1}{n\alpha_k} \circ \sum_{i=1}^{n} \mathrm{Int}(\alpha_k \circ g_i^k)$. 13: Update the model parameter $x^{k+1} = x^k - \eta_k \tilde g^k$. 14: end for. 15: end for. The expectations above are taken with respect to the randomness inherent in the rounding operator. 2.2 NEW ALGORITHM : INTSGD . We are ready to present our algorithm, IntSGD. At iteration $k$, each device $i$ computes a stochastic (sub)gradient vector $g_i^k$, i.e., a vector satisfying $\mathbb{E}[g_i^k \mid x^k] \in \partial f_i(x^k)$ (5). Prior to communication, each worker $i$ rescales its stochastic (sub)gradient $g_i^k$ using the same vector $\alpha_k \in \mathbb{R}^d_{++}$ and applies the randomized rounding operator $\mathrm{Int}$. The resulting vectors $\mathrm{Int}(\alpha_k \circ g_i^k)$ are aggregated to obtain $\sum_{i=1}^{n} \mathrm{Int}(\alpha_k \circ g_i^k)$, which is also an integer vector. Each device subsequently performs division by $n$ and inverse scaling to decode the message, obtaining the vector $\tilde g^k \stackrel{\text{def}}{=} \frac{1}{n\alpha_k} \circ \sum_{i=1}^{n} \mathrm{Int}(\alpha_k \circ g_i^k) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{\alpha_k} \circ \mathrm{Int}(\alpha_k \circ g_i^k) \overset{(2)}{=} \frac{1}{n}\sum_{i=1}^{n} Q(g_i^k)$. Here $\alpha_k$ is a random adaptive scaling factor calculated from historical information; we defer the design of $\alpha_k$ to Section 4. By combining (5) and (3), we observe that $\tilde g^k$ is a stochastic (sub)gradient of $f$ at $x^k$. Finally, all devices perform in parallel an SGD-type step of the form $x^{k+1} = x^k - \eta_k \tilde g^k$ and the process is repeated. Our IntSGD method is formally stated as Algorithm 1 with the suggested rule for $\alpha$. Relation to QSGD (Alistarh et al., 2017). QSGD bears some similarity to IntSGD: both scale $g_i^k$ by a factor before the quantization (the scaling factor in QSGD is $1/\|g_i^k\|$, for normalization). However, a key difference makes the communication efficiency of IntSGD much better than that of QSGD: the normalization factors $1/\|g_i^k\|$ in QSGD differ across workers, so the quantized values of the various workers need to be gathered and decompressed before aggregation.
On the contrary, the scaling factor $\alpha_k$ in our IntSGD is the same for all workers, so the sum of integers can be computed on the fly. Thus, IntSGD supports the efficient all-reduce primitive while QSGD does not. As seen in the experimental results in Section 5, this makes a big difference in empirical performance. Moreover, the proof technique for IntSGD is also intrinsically different from that of QSGD. Please see the next section for the details. | The authors propose a quantized parallel SGD where the gradient coordinates are rounded after scaling: $Q_\alpha(g) = \mathrm{round}(\alpha g)/\alpha$. The scaling factor $\alpha$ determines the quantization error: the higher the $\alpha$, the lower the quantization error. The authors propose a clever $\alpha$ shared across workers. Such sharing of $\alpha$ allows for in-network aggregation of gradient updates using programmable switches. It also allows for an all-reduce model of parallel SGD, which can be much more efficient than the all-gather model. A previous version, Heuristic IntSGD, uses a scaling factor that is more intuitive (it uses all available bits to bin the known range), but interestingly its accuracy is suboptimal on some tasks. Other integer rounding methods such as QSGD use a scale $\alpha$ that is not the same across workers and hence do not allow the all-reduce primitive. | SP:ff0683b5929993e2f909081930bc30353a7a4d55
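Algorithm 1 can be simulated on a single machine to see the adaptive scaling in action. The sketch below is my own NumPy rendering of one IntSGD iteration (function and variable names are mine, not from the paper): the moving average $r_k$ of squared step norms drives the shared scaling factor $\alpha_k$, each "worker" sends only integers, and their exact integer sum is decoded by dividing by $n\alpha_k$.

```python
import numpy as np

def int_round(x, rng):
    # Randomized integer rounding: unbiased, element-wise
    lo = np.floor(x)
    return lo + (rng.random(x.shape) < (x - lo))

def intsgd_step(x, x_prev, r_prev, grads, eta, beta=0.9, eps=1e-8, rng=None):
    """One iteration of Algorithm 1, simulated on one machine.
    grads: list of per-worker stochastic (sub)gradients g_i^k."""
    if rng is None:
        rng = np.random.default_rng()
    n, d = len(grads), x.size
    r = beta * r_prev + (1 - beta) * np.sum((x - x_prev) ** 2)    # line 6
    alpha = np.sqrt(d) / np.sqrt(2 * n * r / eta**2 + eps**2)     # line 7
    total = sum(int_round(alpha * g, rng) for g in grads)         # lines 8-10: integer sum
    g_tilde = total / (n * alpha)                                 # line 12: decode
    return x - eta * g_tilde, x, r                                # line 13

# Toy run: minimize f(x) = 0.5 ||x||^2 with n = 4 noisy workers
rng = np.random.default_rng(1)
x_prev = np.zeros(5)
x = rng.normal(size=5)
r, eta = 0.0, 0.1
for _ in range(300):
    grads = [x + 0.01 * rng.normal(size=5) for _ in range(4)]
    x, x_prev, r = intsgd_step(x, x_prev, r, grads, eta, rng=rng)
```

Note how the rule is self-stabilizing: as the iterates settle, the steps $\|x^k - x^{k-1}\|$ shrink, so $r_k$ shrinks, $\alpha_k$ grows, and the rounding becomes finer exactly when precision is needed.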
VAE Approximation Error: ELBO and Exponential Families | 1 INTRODUCTION . Variational autoencoders, proposed by Kingma & Welling (2014), strive to learn complex data distributions $p_d(x)$, $x \in X$, in a generative way. They introduce latent variables $z \in Z$ and model the joint distribution as $p_\theta(x|z)p(z)$, where $p(z)$ is a simple distribution that is usually assumed to be known. The conditional distribution $p_\theta(x|z)$, called the decoder, is modeled in terms of a deep network parametrized by $\theta \in \Theta$. Models defined in this way make it easy to sample from $p_\theta(x) = \mathbb{E}_{p(z)} p_\theta(x|z)$, however at the price that computing the posterior $p_\theta(z|x) = p_\theta(x|z)p(z)/p_\theta(x)$ is usually intractable. To handle this problem, the VAE approximates the posterior $p_\theta(z|x)$ by an amortized inference encoder model $q_\phi(z|x)$ parametrized by $\phi \in \Phi$. Given the empirical data distribution $p_d(x)$, the model is learned by maximizing the evidence lower bound (ELBO) of the data log-likelihood $L(\theta) = \mathbb{E}_{p_d} \log p_\theta(x)$. It can be expressed in the following two equivalent forms: $L_B(\theta,\phi) = \mathbb{E}_{p_d}\big[\mathbb{E}_{q_\phi} \log p_\theta(x|z) - D_{\mathrm{KL}}(q_\phi(z|x) \,\|\, p(z))\big]$ (1a) $= L(\theta) - \mathbb{E}_{p_d}\big[D_{\mathrm{KL}}(q_\phi(z|x) \,\|\, p_\theta(z|x))\big]$ (1b). The first form allows for stochastic optimization of the ELBO, while the second form shows that the gap between the log-likelihood and the ELBO is exactly the mismatch between the encoder and the posterior. VAEs constitute a powerful deep learning extension of the expectation-maximization approach to handling latent variables. They are useful not only as generative models but also, e.g., in semi-supervised learning (Kingma et al., 2014; Mattei & Frellsen, 2019). Furthermore, the encoder part constructs an efficient embedding of the data in the latent space, useful in many applications. The broad reach of the VAE approach therefore requires a careful empirical and theoretical analysis of the problems and trade-offs involved.
The most important ones are (i) posterior collapse (He et al., 2019; Lucas et al., 2019; Dai et al., 2018; Dai & Wipf, 2019) and (ii) approximation errors caused by an inappropriate choice of the encoder family. The VAE approximation error has so far been studied mainly empirically (e.g., Cremer et al. 2018; Hjelm et al. 2016; Kim et al. 2018). The problem also occurs and is well recognized in the context of variational inference and variational Bayesian inference, where the target posterior distribution is expected to be complex. It is commonly understood that the mean-field approximation of $p_\theta(z|x)$ by $q_\phi(z|x)$ in (1b) significantly limits variational Bayesian inference. In contrast, in VAEs, the decoder may adapt to compensate for the chosen encoder family. The effect of this coupling, we believe, is not fully understood. The phenomenon of the decoder adapting to the posterior was experimentally observed, e.g., by Cremer et al. (2018, Section 5.4), noting that the approximation error is often dominated by the amortization error. Turner & Sahani (2011, Sec. 1.4) analytically show for linear state space models that simpler variational approximations (such as mean-field) can lead to less bias in parameter estimation than more complicated structured approximations. Similarly, Shu et al. (2018) view the VAE objective as providing a regularization and show that making the amortized inference model smoother, while increasing the amortization gap, leads to better generalization. The common (empirical) understanding of the importance of the gap between the approximate and the true posterior has led to many generalizations of standard VAEs, which achieve impressive practical results, notably tighter bounds using importance weighting (Burda et al., 2016; Nowozin, 2018), encoders employing normalizing flows (Rezende & Mohamed, 2015; Kingma et al.
, 2016), hierarchical and autoregressive encoders (Vahdat & Kautz, 2020; Sønderby et al., 2016; Ranganath et al., 2016), MRF encoders (Vahdat et al., 2020) and more. While these extensions mitigate the posterior mismatch problem, they often come at the price of more difficult training and more expensive inference. Furthermore, simpler encoders may be of practical interest. Burda et al. (2016, Appendix C) illustrate that IWAE approximate posteriors are less regular and more spread out. In contrast, factorized encoders provide simple embeddings useful for downstream tasks such as semantic hashing (Chaidaroon & Fang, 2017). The aim of this paper is to study the approximation error of VAEs and its impact on the learned decoder. We consider a setting that generalizes many common VAEs, in particular popular models where the encoder and decoder are conditionally independent Bernoulli or Gaussian distributions: we assume that both the decoder and the encoder are conditional exponential families. We identify the subclass of generative models where the encoder can model the posterior exactly, referred to as consistent VAEs. We give a characterization of consistent VAEs revealing that this set in fact does not depend on the complexity of the involved neural networks. We further show that the ELBO optimizer is pulled towards this set, away from the likelihood optimizer. Specializing the characterization to several common VAE models, we show that the respective consistent models turn out to be RBM-like in many cases. We experimentally investigate the detrimental effect in one case and show that a simpler but more consistent VAE can perform better in the other. 2 PROBLEM STATEMENT . We adopt the following notion of approximation error. Consider a generative model class $P_\Theta = \{p_\theta(x,z) \mid \theta \in \Theta\}$, the encoder class $Q_\Phi = \{q_\phi(z|x) \mid \phi \in \Phi\}$ and the data distribution $p_d(x)$.
The maximum likelihood generative model is given by $\theta_{\mathrm{ML}} \in \arg\max_{\theta \in \Theta} \mathbb{E}_{p_d(x)} \log p_\theta(x)$. For a decoder with parameters $\theta$ we define its approximation error as the likelihood difference $L(\theta_{\mathrm{ML}}) - L(\theta)$. Respectively, the VAE approximation error is defined for a given $\theta$ as: $L(\theta_{\mathrm{ML}}) - \max_\phi L_B(\theta,\phi) \ge L(\theta_{\mathrm{ML}}) - L(\theta)$ (2). In order for this error to become zero, two conditions are necessary and sufficient: • Parameters $(\theta,\phi)$ must be optimal for the ELBO objective. • The ELBO must be tight at $(\theta,\phi)$, i.e., $L_B(\theta,\phi) = L(\theta)$. Assuming that the optimality can be achieved, we study the non-tightness gap $L(\theta) - L_B(\theta,\phi)$. From (1b) it expresses as $\mathbb{E}_{p_d}[D_{\mathrm{KL}}(q_\phi(z|x) \,\|\, p_\theta(z|x))]$. It follows that the ELBO is tight at $(\theta,\phi)$ iff $q_\phi(z|x) \equiv p_\theta(z|x)$. Hence, we define the consistent set $\Theta_\Phi \subseteq \Theta$ as the subset of distributions $p_\theta(x,z)$ whose posteriors are in $Q_\Phi$, i.e., $\Theta_\Phi = \{\theta \in \Theta \mid \exists \phi \in \Phi : q_\phi(z|x) \equiv p_\theta(z|x)\}$ (3). The KL-divergence in the ELBO objective (1b) can vanish only if $\theta \in \Theta_\Phi$. If the likelihood maximizer $\theta_{\mathrm{ML}}$ is not contained in $\Theta_\Phi$, then this KL-divergence pulls the optimizer towards $\Theta_\Phi$ and away from $\theta_{\mathrm{ML}}$, as illustrated in Fig. 1. We characterize the consistent set $\Theta_\Phi$, on which the bound is tight, and show that this set is quite narrow and does not depend on the complexity of the encoder and decoder networks beyond simple 1-layer linear mappings of sufficient statistics. 3 THEORETICAL ANALYSIS . We consider a general class of VAEs, where both encoder and decoder are defined as exponential families. This class includes many common models, in particular Gaussian VAEs and Bernoulli VAEs with conditional independence assumptions, but also more complex ones, e.g., where the encoder is a conditional random field (Vahdat et al., 2020)1. Assumption 1 (Exponential family VAE). Let $X$ and $Z$ be sets of observations and latent variables, respectively.
We consider VAE models defined by $p_\theta(x|z) = h(x)\exp[\langle \nu(x), f_\theta(z)\rangle - A(f_\theta(z))]$ (4a) and $q_\phi(z|x) = h'(z)\exp[\langle \psi(z), g_\phi(x)\rangle - B(g_\phi(x))]$ (4b), where $\nu: X \to \mathbb{R}^n$ and $\psi: Z \to \mathbb{R}^m$ are fixed sufficient statistics of dimensionality $n$ and $m$; $f_\theta: Z \to \mathbb{R}^n$ and $g_\phi: X \to \mathbb{R}^m$ are the decoder, resp. encoder, networks, depending on learnable parameters $\theta$, resp. $\phi$; $h: X \to \mathbb{R}_+$ and $h': Z \to \mathbb{R}_+$ are strictly positive base measures; and $A$, $B$ denote the respective log-partition functions. Notice that this assumption imposes no restrictions on the nature of the random variables $x$ and $z$. They can be discrete or continuous, univariate or multivariate. Similarly, it imposes no restrictions on the complexity of the decoder and encoder networks $f_\theta(z)$ and $g_\phi(x)$. Characterization of the consistent set. In the first step of our analysis, we investigate the conditions under which the approximation error of an exponential family VAE can be made exactly zero. As discussed above, a tight VAE $(\theta,\phi)$ must satisfy $q_\phi(z|x) = p_\theta(z|x)$ for all $(x,z)$, which leads to the following theorem. Theorem 1. The consistent set $\Theta_\Phi$ of an exponential family VAE is given by decoders of the form $p(x|z) = h(x)\exp[\langle \nu(x), W\psi(z)\rangle + \langle \nu(x), u\rangle - A(z)]$ (5), where $W$ is an $n \times m$ matrix and $u \in \mathbb{R}^n$. Moreover, the corresponding encoders have the form $q(z|x) = h'(z)\exp[\langle \psi(z), W^T\nu(x)\rangle + \langle \psi(z), v\rangle - B(x)]$ (6), where $v \in \mathbb{R}^m$. This is a direct consequence of a theorem by Arnold & Strauss (1991) (see Appendix A.1 for more details). For a tight VAE, Theorem 1 states that the decoder and encoder take the forms (5) and (6), with the interaction between $x$ and $z$ parametrized by a single matrix $W$ instead of the complex neural networks with parameters $\theta$, $\phi$. Thus they turn out to be generalized linear models (GLMs). The corresponding joint probability distribution is an EF Harmonium (Welling et al.
, 2005): $p(x,z) = h(x)h'(z)\exp(\langle \nu(x), W\psi(z)\rangle + \langle \nu(x), u\rangle + \langle \psi(z), v\rangle - A)$ (7). 1Notice, however, that this class does not include VAEs with advanced encoder families like normalizing flows or hierarchical and autoregressive encoders. Corollary 1. The subset $\Theta_\Phi$ of consistent models cannot be enlarged by considering more complex encoder networks $g(x)$, provided that the affine family $W^T\nu(x)$ can already be represented. Corollary 2. Let the decoder network family be affine in $\psi(z)$, i.e., $f(z) = W\psi(z) + a$, and let the encoder network family $g(x)$ include at least all affine maps $V\nu(x) + b$. Then any global optimum of the ELBO attains a zero approximation error. VAEs can escape consistency when they degenerate to an invertible flow. In practice, VAE models are almost never tight. It is therefore natural to ask whether a small VAE posterior mismatch error implies closeness of the optimal decoder to some decoder in the consistent set. Definition 1. A VAE $(p_\theta, q_\phi)$ is $\varepsilon$-tight for some $\varepsilon > 0$ if $\mathbb{E}_{p_d(x)}[D_{\mathrm{KL}}(q_\phi(z|x) \,\|\, p_\theta(z|x))] \le \varepsilon$. It turns out that the KL divergence has a leak, which allows a VAE to approach tightness while not approaching consistency. In the continuous case, an example satisfying $\varepsilon$-tightness with a non-linear decoder follows from Dai & Wipf (2019, Theorem 2). They show, for a class of Gaussian VAEs with general neural networks $f_\theta$, $g_\phi$, that it is possible to build a sequence of network parameters $\theta_t$, $\phi_t$ with the following properties: i) the target distribution is approximated arbitrarily well, ii) the posterior mismatch $D_{\mathrm{KL}}(q_{\phi_t}(z|x) \,\|\, p_{\theta_t}(z|x))$ approaches zero, and iii) both the encoder and decoder approach deterministic mappings. The VAE thus approaches a flow model (or invertible neural network) between the data manifold and a subspace of the latent space (Dai & Wipf, 2019). Clearly, in the general case the flow must be non-linear.
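The claim of Theorem 1 can be checked numerically for the Bernoulli case, where the EF Harmonium (7) is an RBM: for a joint $p(x,z) \propto \exp(x^\top Wz + u^\top x + v^\top z)$ over binary vectors, the exact posterior $p(z|x)$ factorizes and is precisely the linear (GLM) encoder of form (6). The sketch below is my own brute-force verification on a tiny instance (sizes and names are arbitrary).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2                                   # binary x in {0,1}^n, z in {0,1}^m
W = rng.normal(size=(n, m))
u = rng.normal(size=n)
v = rng.normal(size=m)

# RBM-like joint (7) with nu(x) = x, psi(z) = z:  p(x,z) ∝ exp(x·Wz + u·x + v·z)
Z = [np.array(b, dtype=float) for b in itertools.product([0, 1], repeat=m)]
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits, dtype=float)
    w = np.array([np.exp(x @ W @ z + u @ x + v @ z) for z in Z])
    post = w / w.sum()                        # exact posterior p(z|x) by enumeration
    # GLM encoder of form (6): factorized Bernoulli, p(z_j = 1 | x) = sigmoid((W^T x)_j + v_j)
    mu = sigmoid(W.T @ x + v)
    glm = np.array([np.prod(mu**z * (1 - mu) ** (1 - z)) for z in Z])
    assert np.allclose(post, glm)             # posterior lies in the linear encoder family
```

This is exact rather than approximate: conditioning on $x$ leaves $p(z|x) \propto \exp(z^\top(W^\top x + v))$, which factorizes over the coordinates of $z$.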
A similar case can be made for discrete variables, see Example A.1. Non-deterministic nearly-tight VAEs approach consistency. We would however argue that the mode where the decoder and encoder are nearly deterministic is not a natural VAE solution. By making additional assumptions excluding such deterministic solutions, and restricting ourselves to a finite space in order to simplify the analysis, we can show that the decoder of an $\varepsilon$-tight VAE does indeed approach a GLM. Theorem 2. Let $(p_\theta, q_\phi)$ be an exponential family VAE (Assumption 1) on a discrete space $X \times Z$ with strictly positive encoder $q_\phi(z|x)$ and decoder posterior $p_\theta(z|x)$, both bounded from below by $\alpha > 0$. If the VAE is $\varepsilon$-tight, then there exists $W \in \mathbb{R}^{n \times m}$ such that the decoder can be approximated by a GLM of the form $\tilde p(x|z) = \tilde h(x)\exp[\langle \nu(x), W\psi(z)\rangle - \tilde A(z)]$ with the error $\mathbb{E}_{p_d(x)}\big[(\log p_\theta(x|z) - \log \tilde p(x|z))^2\big] \le \frac{\varepsilon}{2\alpha^2} + o(\varepsilon)$ for all $z \in Z$ (8). The proof is given in Appendix A.3. | Summary. This paper presents an analysis of the approximation error of VAE models when the encoder and decoder are from exponential families. They show that when the model is consistent (i.e., the encoder is able to match the posterior), the encoder and decoder distributions are exponential family distributions that are linearly parameterized. The paper also shows that barring pathological cases, when the model is tight the decoder distribution is close to a linearly parameterized exponential family distribution. In particular, this shows that in this case additional depth is not useful. Despite the restriction to exponential family distributions, the paper shows a number of real instances of models that lie in this class including Gaussian VAEs and VAEs with RBM encoders. For Bernoulli VAEs for semantic hashing, the paper proposes and verifies improvements to a model presented in the literature. | SP:2d237edc34601e158d7ed48ecc72bc873ae5f4dd
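The ELBO identity (1b) that underlies the paper's notion of the non-tightness gap, $L(\theta) - L_B(\theta,\phi) = \mathbb{E}_{p_d}[D_{\mathrm{KL}}(q_\phi(z|x)\,\|\,p_\theta(z|x))]$, can be verified exactly on a small discrete model by enumeration. The sketch below uses toy random distributions of my own choosing (none of the names come from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
nx, nz = 4, 2
p_z = np.full(nz, 1.0 / nz)                    # prior p(z)
dec = rng.dirichlet(np.ones(nx), size=nz)      # decoder p(x|z), shape (nz, nx)
enc = rng.dirichlet(np.ones(nz), size=nx)      # encoder q(z|x), shape (nx, nz)
p_d = rng.dirichlet(np.ones(nx))               # data distribution p_d(x)

joint = dec * p_z[:, None]                     # p(x, z) = p(x|z) p(z)
p_x = joint.sum(axis=0)                        # evidence p(x)
post = (joint / p_x).T                         # posterior p(z|x), shape (nx, nz)

loglik = p_d @ np.log(p_x)                     # L(theta)
# ELBO, form (1a): E_pd [ E_q log p(x|z) - KL(q(z|x) || p(z)) ]
elbo = sum(
    p_d[x] * (enc[x] @ np.log(dec[:, x]) - enc[x] @ np.log(enc[x] / p_z))
    for x in range(nx)
)
# Expected posterior mismatch: E_pd KL(q(z|x) || p(z|x))
gap = sum(p_d[x] * (enc[x] @ np.log(enc[x] / post[x])) for x in range(nx))
assert np.isclose(loglik - elbo, gap)          # identity (1b) holds exactly
```

The final assertion is the algebraic content of (1b): the gap is zero iff the encoder equals the posterior for every $x$ in the support of $p_d$, which is exactly the consistency condition defining $\Theta_\Phi$ in (3).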